Practical Algorithms for Selection on Coarse-Grained Parallel Computers

Abstract: In this paper, we consider the problem of selection on coarse-grained distributed memory parallel computers. We discuss several deterministic and randomized algorithms for parallel selection. We also consider several algorithms for load balancing needed to keep a balanced distribution of data across processors during the execution of the selection algorithms. We have carried out detailed implementations of all the algorithms discussed on the CM-5 and report on the experimental results. The results clearly demonstrate the role of randomization in reducing communication overhead.

... better in practice than its deterministic counterpart due to the low constant associated with the algorithm.
Parallel selection algorithms are useful in such practical applications as dynamic distribution
of multidimensional data sets, parallel graph partitioning and parallel construction of multidimensional
binary search trees. Many parallel algorithms for selection have been designed for the PRAM
model [2, 3, 4, 9, 14] and for various network models including trees, meshes, hypercubes and re-configurable
architectures [6, 7, 13, 16, 22]. More recently, Bader et al. [5] implemented a parallel
deterministic selection algorithm on several distributed memory machines including CM-5, IBM
SP-2 and INTEL Paragon. In this paper, we consider and evaluate parallel selection algorithms
for coarse-grained distributed memory parallel computers. A coarse-grained parallel computer consists
of several relatively powerful processors connected by an interconnection network. Most of
the commercially available parallel computers belong to this category. Examples of such machines
include CM-5, IBM SP-1 and SP-2, nCUBE 2, INTEL Paragon and Cray T3D.
The rest of the paper is organized as follows: In Section 2, we describe our model of parallel
computation and outline some primitives used by the algorithms. In Section 3, we present two deterministic
and two randomized algorithms for parallel selection. Selection algorithms are iterative
and work by reducing the number of elements to consider from iteration to iteration. Since we cannot guarantee that the same number of elements is removed on every processor, this leads to load
imbalance. In Section 4, we present several algorithms to perform such a load balancing. Each of
the load balancing algorithms can be used by any selection algorithm that requires load balancing.
In Section 5, we report and analyze the results we have obtained on the CM-5 by detailed implementation
of the selection and load balancing algorithms presented. In Section 6, we analyze the
selection algorithms for meshes and hypercubes. Section 7 discusses parallel weighted selection.
We conclude the paper in Section 8.
2 Preliminaries
2.1 Model of Parallel Computation
We model a coarse-grained parallel machine as follows: A coarse-grained machine consists of several
relatively powerful processors connected by an interconnection network. Rather than making specific
assumptions about the underlying network, we assume a two-level model of computation. The
two-level model assumes a fixed cost for an off-processor access independent of the distance between
the communicating processors. Communication between processors has a start-up overhead of - ,
while the data transfer rate is 1
- . For our complexity analysis we assume that - and - are constant
and independent of the link congestion and distance between two processors. With new techniques,
such as wormhole routing and randomized routing, the distance between communicating processors
seems to be less of a determining factor on the amount of time needed to complete the communica-
tion. Furthermore, the effect of link contention is eased due to the presence of virtual channels and
the fact that link bandwidth is much higher than the bandwidth of node interface. This permits us
to use the two-level model and view the underlying interconnection network as a virtual crossbar
network connecting the processors. These assumptions closely model the behavior of the CM-5 on
which our experimental results are presented. A discussion on other architectures is presented in
Section 6.
2.2 Parallel Primitives
In the following, we describe some important parallel primitives that are repeatedly used in our
algorithms and implementations. We state the running time required for each of these primitives
under our model of parallel computation. The analysis of the run times for the primitives described
is fairly simple and is omitted in the interest of brevity. The interested reader is referred to [15].
In what follows, p refers to the number of processors.
1. Broadcast
In a Broadcast operation, one processor has an element of data to be broadcast to all other
processors. This operation can be performed in O((τ + μ) log p) time.
2. Combine
Given an element of data on each processor and a binary associative and commutative op-
eration, the Combine operation computes the result of combining the elements stored on all
the processors using the operation and stores the result on every processor. This operation
can also be performed in O((τ + μ) log p) time.
3. Parallel Prefix
Suppose that x_0, x_1, ..., x_{p-1} are p data elements, with processor P_i containing x_i. Let ⊗ be a binary associative operation. The Parallel Prefix operation stores the value of x_0 ⊗ x_1 ⊗ ... ⊗ x_i on processor P_i. This operation can be performed in O((τ + μ) log p) time.
4. Gather
Given an element of data on each processor, the Gather operation collects all the data and
stores it in one of the processors. This can be accomplished in O(τ log p + μp) time.
5. Global Concatenate
This is same as Gather except that the collected data should be stored on all the processors.
This operation can also be performed in O(τ log p + μp) time.
6. Transportation Primitive
The transportation primitive performs many-to-many personalized communication with possibly high variance in message size. If the total length of the messages being sent out or received at any processor is bounded by t, the time taken for the communication is 2μt (plus lower order terms) when t is sufficiently large relative to p. If the outgoing and incoming traffic bounds are r and c instead, the communication takes μ(r + c) time (plus lower order terms) when either bound is sufficiently large.
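Under this model, the semantics of the primitives above can be illustrated with a single-process sketch (hypothetical Python, where a list index stands in for a processor id; the O(log p) implementations themselves are not modeled):

```python
from itertools import accumulate

# Hypothetical single-process sketch of the primitives' semantics: each list
# index stands in for a processor; the O(log p) implementations are not modeled.

def broadcast(values, root):
    """Every processor receives the root processor's element."""
    return [values[root]] * len(values)

def combine(values, op):
    """Combine all elements with a binary associative, commutative op;
    the result is stored on every processor."""
    result = values[0]
    for v in values[1:]:
        result = op(result, v)
    return [result] * len(values)

def parallel_prefix(values, op):
    """Processor P_i receives x_0 op x_1 op ... op x_i."""
    return list(accumulate(values, op))

def gather(values, root):
    """All elements are collected on one processor; the others hold nothing."""
    return [list(values) if i == root else [] for i in range(len(values))]
```

Global Concatenate behaves like `gather` followed by a `broadcast` of the collected list.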
3 Parallel Algorithms for Selection
Parallel algorithms for selection are also iterative and work by reducing the number of elements
to be considered from iteration to iteration. The elements are distributed across processors and
each iteration is performed in parallel by all the processors. Let n be the number of elements
and p be the number of processors. To begin with, each processor is given ⌈n/p⌉ elements; if the initial distribution is not balanced, this can be easily achieved by using one of the load balancing techniques to be described in Section 4. Let n_i^(j) be the number of elements in processor P_i at the beginning of iteration j.
Algorithm 1 Median of Medians selection algorithm
n - total number of elements
p - total number of processors, labeled from 0 to p-1
L_i - list of elements on processor P_i, where |L_i| = n_i
k - desired rank among the total elements
On each processor P_i:
while n > C (a constant)
  Step 1. Use sequential selection to find the median m_i of the list L_i.
  Step 2. M = Gather(m_i) on P_0.
  Step 3. On P_0: find the median of M, say MoM, and broadcast it to all processors.
  Step 4. Partition L_i into ≤ MoM and > MoM to give index_i, the split index.
  Step 5. count = Combine(index_i, add) calculates the number of elements ≤ MoM.
  Step 6. If (k ≤ count), retain the part of L_i that is ≤ MoM; else retain the part > MoM and set k = k − count.
  Step 7. LoadBalance(L_i, n).
  Step 8. n = Combine(|L_i|, add).
endwhile
Step 9. Gather the remaining elements in a list L on P_0; perform sequential selection to find the element q of rank k in L.
Figure 1: Median of Medians selection algorithm
Let n^(j) = Σ_i n_i^(j) be the total number of remaining elements, and let k^(j) be the rank of the element we need to identify among these n^(j) elements. We use this notation to describe all the selection algorithms presented in this paper.
3.1 Median of Medians Algorithm
The median of medians algorithm is a straightforward parallelization of the deterministic sequential
algorithm [8] and has recently been suggested and implemented by Bader et al. [5]. This algorithm (Figure 1) requires load balancing at the beginning of each iteration.
At the beginning of iteration j, each processor finds the median of its n_i^(j) elements using the sequential deterministic algorithm. All such medians are gathered on one processor, which then finds the median of these medians. The median of medians is then taken as an estimate of the median of all the n^(j) elements. The estimated median is broadcast to all the processors.
Each processor scans through its set of points and splits them into two subsets containing elements
less than or equal to and greater than the estimated median, respectively. A Combine operation
and a comparison with k^(j) determines which of these two subsets is to be discarded and the value of k^(j+1) needed for the next iteration.
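The iteration above can be sketched as a single-process simulation (hypothetical Python: `lists` plays the role of the per-processor element lists, `statistics.median_low` stands in for sequential selection, and the sequential cutoff constant 16 is an arbitrary choice; distinct element values are assumed):

```python
import statistics

# Hypothetical single-process simulation of the median-of-medians selection:
# `lists` plays the role of the per-processor element lists; the cutoff
# constant 16 is arbitrary. Distinct element values are assumed.

def mom_select(lists, k):
    """Return the element of rank k (1-based) among all elements in `lists`."""
    while sum(len(L) for L in lists) > 16:
        medians = [statistics.median_low(L) for L in lists if L]  # local medians
        mom = statistics.median_low(medians)      # median of medians on P_0
        low = [[x for x in L if x <= mom] for L in lists]
        count = sum(len(L) for L in low)          # Combine over split indices
        if k <= count:                            # target lies in the low part
            lists = low
        else:                                     # discard low, adjust rank
            lists = [[x for x in L if x > mom] for L in lists]
            k -= count
    remaining = sorted(x for L in lists for x in L)  # gather on one processor
    return remaining[k - 1]
```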
Selecting the median of medians as the estimated median ensures that the estimated median
will have at least a guaranteed fraction of the number of elements below it and at least a guaranteed
fraction of the elements above it, just as in the sequential algorithm. This ensures that the worst
case number of iterations required by the algorithm is O(log n). Let n_max^(j) = max_i n_i^(j). Thus, finding the local median and splitting the set of points into two subsets based on the estimated median each requires O(n_max^(j)) time in the j-th iteration. The remaining work is one Gather, one Broadcast and one Combine operation. Therefore, the worst-case running time of this algorithm is Σ_{j=1}^{O(log n)} O(n_max^(j) + τ log p + μp). Since n_max^(j) can be as large as O(n/p), the running time is O((n/p) log n + τ log p log n + μp log n).
This algorithm requires the use of load balancing between iterations. With load balancing, n_max^(j) = O(n^(j)/p). Assuming load balancing and ignoring the cost of load balancing itself, the running time of the algorithm reduces to Σ_{j=1}^{O(log n)} O(n^(j)/p + τ log p + μp) = O(n/p + (τ log p + μp) log n).
3.2 Bucket-Based Algorithm
The bucket-based algorithm [17] attempts to reduce the worst-case running time of the above algorithm
without requiring load balance. This algorithm is shown in Figure 2. First, in order to keep
the algorithm deterministic without a balanced number of elements on each processor, the median
of medians is replaced by the weighted median of medians. As before, local medians are computed
on each processor. However, the estimated median is taken to be the weighted median of the local
medians, with each median weighted by the number of elements on the corresponding processor.
This will again guarantee that a fixed fraction of the elements is dropped from consideration every
iteration. The number of iterations of the algorithm remains O(log n).
The dominant computational work in the median of medians algorithm is the computation of
the local median and scanning through the local elements to split them into two sets based on the
estimated median. In order to reduce this work which is repeated every iteration, the bucket-based
approach preprocesses the local data into O(log p) buckets such that for any 0 ≤ i < j ≤ log p − 1, every element in bucket i is smaller than any element in bucket j. This can be accomplished by finding the median of the local elements, splitting them into two buckets based on this median, and recursively splitting each of these buckets into (log p)/2 buckets using the same procedure. Thus, preprocessing the local data into O(log p) buckets requires O((n/p) log log p) time.
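The recursive bucketing step can be sketched as follows (hypothetical Python; `num_buckets` would be O(log p), the element values are assumed distinct, and the local list is assumed to have at least `num_buckets` elements):

```python
import statistics

# Hypothetical sketch of the bucket preprocessing: recursively split a local
# list around its median until `num_buckets` (O(log p)) buckets remain, so
# every element of bucket i precedes every element of bucket j whenever i < j.
# Assumes distinct values and len(elements) >= num_buckets.

def make_buckets(elements, num_buckets):
    buckets = [list(elements)]
    while len(buckets) < num_buckets:
        refined = []
        for b in buckets:
            if len(b) <= 1:                 # nothing left to split
                refined.append(b)
                continue
            m = statistics.median_low(b)
            refined.append([x for x in b if x <= m])
            refined.append([x for x in b if x > m])
        buckets = refined
    return buckets
```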
Bucketing the data simplifies the task of finding the local median and the task of splitting the
local data into two sets based on the estimated median. To find the local median, identify the
bucket containing the median and find the rank of the median in the bucket containing the median
Algorithm 2 Bucket-based selection algorithm
n - total number of elements
p - total number of processors, labeled from 0 to p-1
L_i - list of elements on processor P_i, where |L_i| = n_i
k - desired rank among the total elements
On each processor P_i:
Step 0. Partition L_i on P_i into log p buckets of equal size such that if r ∈ bucket_j and s ∈ bucket_k with j < k, then r < s.
while n > C (a constant)
  Step 1. Find the bucket bkt_k containing the median element using a binary search on the remaining buckets. This is followed by finding the appropriate rank in bkt_k to find the median m_i. Let N_i be the number of remaining keys on P_i.
  Step 2. M = Gather(m_i, N_i) on P_0.
  Step 3. On P_0: find the weighted median of M, say WM, and broadcast it.
  Step 4. Partition L_i into ≤ WM and > WM using the buckets to give index_i, the split index.
  Step 5. count = Combine(index_i, add) calculates the number of elements ≤ WM.
  Step 6. If (k ≤ count), retain the part of L_i that is ≤ WM; else retain the part > WM and set k = k − count.
  Step 7. n = Combine(N_i, add).
endwhile
Step 8. Gather the remaining elements in a list L on P_0; perform sequential selection to find the element q of rank k in L.
Figure 2: Bucket-based selection algorithm
in O(log log p) time using binary search. The local median can be located in the bucket by the sequential selection algorithm in O(n/(p log p)) time. The cost of finding the local median thus reduces from O(n/p) to O(log log p + n/(p log p)). To split the local data into two sets based on the estimated median, first identify the bucket that should contain the estimated median. Only the elements in this bucket need to be split. Thus, this operation also requires only O(log log p + n/(p log p)) time.
After preprocessing, the worst-case run time for selection is O((log log p + n/(p log p)) log n + (τ log p + μp) log n) = O((n/(p log p)) log n + (τ log p + μp) log n). Adding the O((n/p) log log p) preprocessing cost, the worst-case run time of the bucket-based approach is O((n/p) log log p + (n/(p log p)) log n + (τ log p + μp) log n) without any load balancing.
Algorithm 3 Randomized selection algorithm
n - total number of elements
p - total number of processors, labeled from 0 to p-1
L_i - list of elements on processor P_i, where |L_i| = n_i
k - desired rank among the total elements
On each processor P_i:
while n > C (a constant)
  Step 1. prefix_i = ParallelPrefix(n_i, add).
  Step 2. Generate a random number nr (same on all processors) between 0 and n − 1.
  Step 3. On P_k, where nr falls within P_k's range of prefix positions: pick the corresponding local element as mguess and broadcast it.
  Step 4. Partition L_i into ≤ mguess and > mguess to give index_i, the split index.
  Step 5. count = Combine(index_i, add) calculates the number of elements ≤ mguess.
  Step 6. If (k ≤ count), retain the part of L_i that is ≤ mguess; else retain the part > mguess and set k = k − count.
  Step 7. n = Combine(n_i, add).
endwhile
Step 8. Gather the remaining elements in a list L on P_0; perform sequential selection to find the element q of rank k in L.
Figure 3: Randomized selection algorithm
3.3 Randomized Selection Algorithm
The randomized median finding algorithm (Figure 3) is a straightforward parallelization of the
randomized sequential algorithm described in [12]. All processors use the same random number
generator with the same seed so that they can produce identical random numbers. Consider the
behavior of the algorithm in iteration j. First, a parallel prefix operation is performed on the n_i^(j)'s. All processors generate a random number between 1 and n^(j) to pick an element at random,
which is taken to be the estimated median. From the parallel prefix operation, each processor can
determine if it has the estimated median and if so broadcasts it. Each processor scans through
its set of points and splits them into two subsets containing elements less than or equal to and
greater than the estimated median, respectively. A Combine operation and a comparison with k^(j) determines which of these two subsets is to be discarded and the value of k^(j+1) needed for the next iteration.
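A single-process sketch of this scheme follows (hypothetical Python: a single seeded generator mimics every processor drawing the same random number, and the sequential cutoff of 16 is arbitrary; distinct element values are assumed):

```python
import random

# Hypothetical single-process simulation of the randomized selection
# algorithm. A shared seeded generator mimics all processors drawing the same
# random number; the sequential cutoff of 16 is arbitrary.

def randomized_select(lists, k, seed=0):
    rng = random.Random(seed)
    while sum(len(L) for L in lists) > 16:
        n = sum(len(L) for L in lists)
        r = rng.randint(1, n)             # identical on all processors
        # a parallel prefix over the list sizes would locate the processor
        # holding global position r; here we just index the concatenation
        flat = [x for L in lists for x in L]
        pivot = flat[r - 1]               # broadcast by its owner
        low = [[x for x in L if x <= pivot] for L in lists]
        count = sum(len(L) for L in low)  # Combine
        if k <= count:
            lists = low
        else:
            lists = [[x for x in L if x > pivot] for L in lists]
            k -= count
    return sorted(x for L in lists for x in L)[k - 1]
```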
Since in each iteration approximately half the remaining points are discarded, the expected
number of iterations is O(log n) [12]. Let n_max^(j) = max_i n_i^(j). Thus, splitting the set of points into two subsets based on the median requires O(n_max^(j)) time in the j-th iteration. The remaining work is one Parallel Prefix, one Broadcast and one Combine operation. Therefore, the total expected running time of the algorithm is Σ_{j=1}^{O(log n)} O(n_max^(j) + (τ + μ) log p). Since n_max^(j) can be as large as O(n/p), the expected running time is O((n/p) log n + (τ + μ) log p log n). In practice,
one can expect that n_max^(j) reduces from iteration to iteration, perhaps by half. This is especially true if the data is randomly distributed to the processors, eliminating any order present in the input. In fact, by a load balancing operation at the end of every iteration, we can ensure that for every iteration j, n_max^(j) = O(n^(j)/p). With load balancing, and ignoring the cost of it, the running time of the algorithm reduces to Σ_{j=1}^{O(log n)} O(n^(j)/p + (τ + μ) log p) = O(n/p + (τ + μ) log p log n). Even without this load balancing, assuming that the initial data is randomly distributed, the running time is expected to be O(n/p + (τ + μ) log p log n).
3.4 Fast Randomized Selection Algorithm
The expected number of iterations required for the randomized median finding algorithm is O(log n).
In this section we discuss an approach due to Rajasekharan et al. [17] that requires only
O(log log n) iterations for convergence with high probability (Figure 4).
Suppose we want to find the k-th smallest element among a given set of n elements. Sample a set S of o(n) keys at random and sort S. The element with rank ⌈k|S|/n⌉ in S will have an expected rank of k in the set of all points. Identify two keys l_1 and l_2 in S with ranks ⌈k|S|/n⌉ − δ and ⌈k|S|/n⌉ + δ, where δ is a small integer such that with high probability the rank of l_1 is < k and the rank of l_2 is > k in the given set of points. With this, all the elements that are either < l_1 or > l_2 can be eliminated. Recursively find the element with rank k − rank(l_1) in the remaining elements. If the number of elements is sufficiently small, they can be directly sorted to find the required element.
If the ranks of l_1 and l_2 are both < k or both > k, the iteration is repeated with a different sample set. We make the following modification that may help improve the running time of the algorithm in practice. Suppose that the ranks of l_1 and l_2 are both < k. Instead of repeating the iteration to find the element of rank k among the n elements, we discard all the elements less than l_2 and find the element of the correspondingly reduced rank in the remaining elements. If the ranks of l_1 and l_2 are both > k, elements greater than l_1 can be discarded.
Rajasekharan et al. show that the expected number of iterations of this median finding
algorithm is O(log log n) and that the expected number of points decreases geometrically after each
iteration. If n^(j) is the number of points at the start of the j-th iteration, only a sample of o(n^(j)) keys is sorted. Thus, the cost of sorting, o(n^(j) log n^(j)), is dominated by the O(n^(j)) work involved in scanning the points.
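One sampling round can be sketched as follows (hypothetical Python; the sample size n^0.6 and the safety margin `delta` are assumptions in the spirit of the text, not the paper's exact constants):

```python
import math
import random

# Hypothetical sketch of one sampling round of the fast randomized algorithm.
# The sample size n**0.6 and the margin `delta` are assumed constants, not the
# paper's exact choices.

def bracket_round(elements, k, eps=0.6, seed=0):
    """Return (kept, new_k): the elements bracketing rank k, and k's rank
    within them. Assumes the bracket succeeds (true with high probability)."""
    n = len(elements)
    rng = random.Random(seed)
    sample = sorted(rng.sample(elements, max(2, int(n ** eps))))
    s = len(sample)
    pos = k * s // n                          # expected position of rank k in S
    delta = int(math.sqrt(s) * math.log(n))   # assumed high-probability margin
    l1 = sample[max(0, pos - delta)]
    l2 = sample[min(s - 1, pos + delta)]
    kept = [x for x in elements if l1 <= x <= l2]
    new_k = k - sum(1 for x in elements if x < l1)
    return kept, new_k
```

A subsequent round would recurse on `kept` with rank `new_k`.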
Algorithm 4 Fast randomized selection algorithm
n - total number of elements
p - total number of processors, labeled from 0 to p-1
L_i - list of elements on processor P_i, where |L_i| = n_i
k - desired rank among the total elements
On each processor P_i:
while n > C (a constant)
  Step 1. Collect a sample S_i from L_i by picking n_i · n^(ε−1) elements at random on P_i.
  Step 2. S = Sort(∪_i S_i) across the processors.
  Step 3. On P_0: pick k_1, k_2 from S with ranks ⌈k|S|/n⌉ − δ and ⌈k|S|/n⌉ + δ.
  Step 4. Broadcast k_1 and k_2. The element to be found will lie in [k_1, k_2] with high probability.
  Step 5. Partition L_i into < k_1, [k_1, k_2] and > k_2 to give counts less_i, middle_i and high_i, and splitters s_1 and s_2.
  Step 6. cless = Combine(less_i, add); cmid = Combine(middle_i, add).
  Step 7. If (k ∈ (cless, cless + cmid]), retain the middle part of L_i and set k = k − cless; otherwise adjust the retained range and k as described in the text.
  Step 8. n = Combine over the retained counts.
endwhile
Step 9. Gather the remaining elements in a list L on P_0; perform sequential selection to find the element q of rank k in L.
Figure 4: Fast randomized selection algorithm
In iteration j, processor P_i randomly selects n_i^(j) (n^(j))^(ε−1) of its n_i^(j) elements. The selected elements are sorted using a parallel sorting algorithm. Once sorted, the processors containing the elements l_1^(j) and l_2^(j) broadcast them. Each processor finds the number of elements less than l_1^(j) and greater than l_2^(j) contained by it. Using Combine operations, the ranks of l_1^(j) and l_2^(j) are computed and the appropriate action of discarding elements is undertaken by each processor. A large value of ε increases the overhead due to sorting. A small value of ε increases the probability that both the selected elements (l_1^(j) and l_2^(j)) lie on one side of the element with rank k^(j), thus causing an unsuccessful iteration. By experimentation, we found a value of ε = 0.6 to be appropriate.
As in the randomized median finding algorithm, one iteration of the median finding algorithm takes O(n_max^(j) + (τ + μ) log p) time. Since only O(log log n) iterations are required, median finding requires O((n/p) log log n + (τ + μ) log p log log n) time.
As before, we can do load balancing to ensure that n_max^(j) reduces by half in every iteration. Assuming this and ignoring the cost of load balancing, the running time of median finding reduces to Σ_{j=1}^{O(log log n)} O(n^(j)/p + (τ + μ) log p) = O(n/p + (τ + μ) log p log log n). Even without this load balancing, the running time is expected to be O(n/p + (τ + μ) log p log log n).
4 Algorithms for load balancing
In order to ensure that the computational load on each processor is approximately the same during
every iteration of a selection algorithm, we need to dynamically redistribute the data such that
every processor has a nearly equal number of elements. We present three algorithms for performing
such a load balancing. The algorithms can also be used in other problems that require dynamic
redistribution of data and where there is no restriction on the assignment of data to processors.
We use the following notation to describe the algorithms for load balancing: Initially, processor P_i has n_i elements; n is the total number of elements on all the processors, i.e. n = Σ_i n_i, and n_avg = n/p.
4.1 Order Maintaining Load Balance
Suppose that each processor has its set of elements stored in an array. We can view the n elements
as if they were globally sorted based on processor and array indices. For any i ! j, any element
in processor P i appears earlier in this sorted order than any element in processor P j . The order
maintaining load balance algorithm is a parallel prefix based algorithm that preserves this global
order of data after load balancing.
The algorithm first performs a Parallel Prefix operation to find the position of the elements it
contains in the global order. The objective is to redistribute data such that processor P i contains
Algorithm 5 Modified order maintaining load balance
n - number of total elements
p - total number of processors, labeled from 0 to p-1
L_i - list of elements on processor P_i, of size n_i
On each processor P_i:
Step 0. n_avg = n/p.
Step 1. cnt = GlobalConcatenate(n_i).
Step 2. diff[j] = cnt[j] − n_avg, for 0 ≤ j ≤ p−1.
Step 3. If diff[j] is positive, P_j is labeled as a source. If diff[j] is negative, P_j is labeled as a sink.
Step 4. If P_i is a source, calculate the prefix sum of the positive diff[ ] in an array p_src; else calculate the prefix sums for sinks using the negative diff[ ] in p_snk.
If P_i is a source:
  Step 5. Calculate the range of destination processors [P_l, P_r] using a binary search on p_snk.
  Step 6. while (l ≤ r): send the appropriate number of elements to P_l and increment l.
If P_i is a sink:
  Step 5. Calculate the range of source processors [P_l, P_r] using a binary search on p_src.
  Step 6. while (l ≤ r): receive elements from P_l and increment l.
Figure 5: Modified order maintaining load balance
the elements with positions i·n_avg + 1 through (i+1)·n_avg in the global order. Using the parallel prefix
operation, each processor can figure out the processors to which it should send data and the amount
of data to send to each processor. Similarly, each processor can figure out the amount of data it
should receive, if any, from each processor. Communication is generated according to this and the
data is redistributed.
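This bookkeeping can be sketched from the per-processor counts alone (hypothetical Python; `omlb_sends` returns, for each processor, the (destination, amount) pairs implied by the global positions, assuming p divides the total number of elements):

```python
from itertools import accumulate

# Hypothetical sketch of the order maintaining load balance bookkeeping:
# prefix sums over the counts give the global position of each processor's
# first element, from which its (destination, amount) send list follows.
# Assumes p divides the total number of elements.

def omlb_sends(counts):
    p = len(counts)
    avg = sum(counts) // p
    # exclusive prefix sums: global position of each processor's first element
    starts = [s - c for s, c in zip(accumulate(counts), counts)]
    sends = []
    for start, c in zip(starts, counts):
        plan, pos = [], start
        while pos < start + c:
            dest = pos // avg                        # processor owning this slot
            take = min(start + c, (dest + 1) * avg) - pos
            plan.append((dest, take))
            pos += take
        sends.append(plan)
    return sends
```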
In our model of computation, the running time of this algorithm only depends on the maximum
communication generated/received by a processor. The maximum number of messages sent out by
a processor is ⌈n_max/n_avg⌉ + 1 and the maximum number of elements sent is n_max. The maximum number of elements received by a processor is n_avg. Therefore, the running time is O(τ n_max/n_avg + μ n_max).
The order maintaining load balance algorithm may generate much more communication than
necessary. For example, consider the case where all processors have n_avg elements except that P_0 has one element less and P_{p−1} has one element more than n_avg. The optimal strategy is to transfer the one extra element from P_{p−1} to P_0. However, this algorithm transfers one element from P_{i+1} to P_i for every 0 ≤ i ≤ p−2, resulting in p−1 messages.
Since preserving the order of data is not important for the selection algorithm, the following
modification is made to the algorithm: Every processor retains min{n_i, n_avg} of its original elements. If the processor has (n_i − n_avg) excess elements, it is labeled a source. Otherwise, the processor needs (n_avg − n_i) elements and is labeled a sink. The excess elements in the source
processors and the number of elements needed by the sink processors are ranked separately using
two Parallel Prefix operations. The data is transferred from sources to sinks using a strategy similar
to the order maintaining load balance algorithm. This algorithm (Figure 5), which we call modified
order maintaining load balance algorithm (modified OMLB), is implemented in [5].
The maximum number of messages sent out by a processor in modified OMLB is O(p) and the
maximum number of elements sent is (n_max − n_avg). The maximum number of elements received by a processor is n_avg. The worst-case running time is O(τp + μ n_max).
4.2 Dimension Exchange Method
The dimension exchange method (Figure 6) is a load balancing technique originally proposed for
hypercubes [11][21]. In each iteration of this method, processors are paired to balance the load
locally among themselves which eventually leads to global load balance. The algorithm runs in
log p iterations. In iteration i (0 ≤ i ≤ log p − 1), processors that differ in the i-th least significant bit position of their ids exchange and balance the load. After iteration i, any group of processors whose ids differ only in the least significant i+1 bit positions have the same number of elements.
In each iteration, p/2 pairs of processors communicate in parallel. No processor communicates more than n_max elements in an iteration. Therefore, the running time is O(τ log p + μ n_max log p). However, since at most 2^j processors can hold the maximum number of elements in iteration j, it is likely that either n_max is small or far fewer elements than n_max are communicated. Therefore, the running time in practice is expected to be much better than what is predicted by the worst case.
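A sketch of the count evolution (hypothetical Python; only the element counts are simulated, p is assumed to be a power of two, and an odd combined load is split as evenly as possible):

```python
# Hypothetical sketch of the dimension exchange method, simulating only the
# per-processor element counts. Assumes p is a power of two; an odd combined
# load is split as evenly as possible.

def dimension_exchange(counts):
    p = len(counts)
    counts = list(counts)
    for i in range(p.bit_length() - 1):      # log p rounds
        for a in range(p):
            b = a ^ (1 << i)                 # partner differs in bit i
            if a < b:                        # each pair balances once
                total = counts[a] + counts[b]
                counts[a] = (total + 1) // 2
                counts[b] = total // 2
    return counts
```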
4.3 Global Exchange
This algorithm is similar to the modified order maintaining load balance algorithm except that
processors with large amounts of data are directly paired with processors with small amounts of
data to minimize the number of messages (Figure 7).
As in the modified order maintaining load balance algorithm, every processor retains min{n_i, n_avg} of its original elements. If the processor has (n_i − n_avg) excess elements, it is labeled a source. Otherwise, the processor needs (n_avg − n_i) elements and is labeled a sink.

Algorithm 6 Dimension exchange method
n - number of total elements
p - total number of processors, labeled from 0 to p-1
L_i - list of elements on processor P_i, of size n_i
On each processor P_i:
for j ← 0 to log p − 1
  Step 1. P_l = P_(i XOR 2^j), the partner whose id differs in bit j.
  Step 2. Exchange the count of elements between P_i and P_l.
  Step 3. n_avg = (n_i + n_l)/2.
  If n_i > n_avg:
    Step 4. Send n_i − n_avg elements starting from L_i[n_avg] to processor P_l.
    Step 5. n_i = n_avg.
  else:
    Step 4. Receive n_l − n_avg elements from processor P_l at the end of L_i.
    Step 5. Increment n_i by n_l − n_avg.
Figure 6: Dimension exchange method for load balancing

All
the source processors are sorted in non-increasing order of the number of excess elements each
processor holds. Similarly, all the sink processors are sorted in non-increasing order of the number
of elements each processor needs. The information on the number of excess elements in each source processor is collected using a Global Concatenate operation. Each processor locally ranks the excess elements using a prefix operation according to the order of the processors obtained by
the sorting. Another Global Concatenate operation collects the number of elements needed by each
sink processor. These elements are then ranked locally by each processor using a prefix operation
performed using the ordering of the sink processors obtained by sorting.
Using the results of the prefix operation, each source processor can find the sink processors to which its excess elements should be sent and the number of elements that should be sent to each
such processor. The sink processors can similarly compute information on the number of elements
to be received from each source processor. The data is transferred from sources to sinks. Since the sources containing a large number of excess elements send data to sinks requiring a large number of elements, this may reduce the total number of messages sent.
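The matching of sources to sinks can be sketched as follows (hypothetical Python; `global_exchange_plan` returns (source, destination, amount) transfers, assuming p divides the total number of elements):

```python
# Hypothetical sketch of the global exchange matching: sources (excess) and
# sinks (deficit) are each sorted in non-increasing order and consumed in
# tandem, pairing the largest source with the largest sink first.
# Assumes p divides the total number of elements.

def global_exchange_plan(counts):
    avg = sum(counts) // len(counts)
    sources = sorted(((c - avg, i) for i, c in enumerate(counts) if c > avg),
                     reverse=True)
    sinks = sorted(((avg - c, i) for i, c in enumerate(counts) if c < avg),
                   reverse=True)
    transfers, si, di = [], 0, 0             # (source, destination, amount)
    while si < len(sources) and di < len(sinks):
        excess, src = sources[si]
        need, dst = sinks[di]
        amt = min(excess, need)
        transfers.append((src, dst, amt))
        sources[si] = (excess - amt, src)
        sinks[di] = (need - amt, dst)
        if excess - amt == 0:
            si += 1
        if need - amt == 0:
            di += 1
    return transfers
```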
In the worst case, there may be only one processor containing all the excess elements, and thus the total number of messages sent out by the algorithm is O(p). No processor will send more than (n_max − n_avg) elements, and the maximum number of elements received by any processor is n_avg. The worst-case run time is O(τp + μ n_max).
Algorithm 7 Global Exchange load balance
n - number of total elements
p - total number of processors, labeled from 0 to p-1
L_i - list of elements on processor P_i, of size n_i
On each processor P_i:
Step 0. n_avg = n/p.
Step 1. cnt = GlobalConcatenate(n_i).
Step 2. diff[j] = cnt[j] − n_avg, for 0 ≤ j ≤ p−1.
Step 3. If diff[j] is positive, P_j is labeled as a source. If diff[j] is negative, P_j is labeled as a sink.
Step 4. Sort diff[k] for sources in descending order, maintaining appropriate processor indices. Also sort diff[k] for sinks in ascending order.
Step 5. If P_i is a source, calculate the prefix sum of the positive diff[ ] in an array p_src; else calculate the prefix sums for sinks using the negative diff[ ] in p_snk.
If P_i is a source:
  Step 6. Calculate the range of destination processors [P_l, P_r] using a binary search on p_snk.
  Step 7. while (l ≤ r): send the appropriate number of elements to P_l and increment l.
If P_i is a sink:
  Step 6. Calculate the range of source processors [P_l, P_r] using a binary search on p_src.
  Step 7. while (l ≤ r): receive elements from P_l and increment l.
Figure 7: Global exchange method for load balancing
Selection Algorithm    Run-time
Median of Medians      O(n/p + (τ log p + μp) log n)
Randomized             O(n/p + (τ + μ) log p log n)
Fast randomized        O(n/p + (τ + μ) log p log log n)

Table 1: The running times of various selection algorithms assuming, but not including the cost of, load balancing

Selection Algorithm    Run-time
Median of Medians      O((n/p) log n + (τ log p + μp) log n)
Bucket-based           O((n/p) log log p + (n/(p log p)) log n + (τ log p + μp) log n)
Randomized             O((n/p) log n + (τ + μ) log p log n)
Fast randomized        O((n/p) log log n + (τ + μ) log p log log n)

Table 2: The worst-case running times of various selection algorithms
5 Implementation Results
The estimated running times of various selection algorithms are summarized in Table 1 and Table 2.
Table 1 shows the estimated running times assuming that each processor contains approximately
the same number of elements at the end of each iteration of the selection algorithm. This can be
expected to hold for random data even without performing any load balancing and we also observe
this experimentally. Table 2 shows the worst-case running times in the absence of load balancing.
We have implemented all the selection algorithms and the load balancing techniques on the CM-
5. To experimentally evaluate the algorithms, we have chosen the problem of finding the median of a
given set of numbers. We ran each selection algorithm without any load balancing and with each of
the load balancing algorithms described (except for the bucket-based approach which does not use
load balancing). We have run all the resulting algorithms on 32k, 64k, 128k, 256k, 512k, 1024k and
2048k numbers using 2, 4, 8, 16, 32, 64 and 128 processors. The algorithms are run until the total
number of elements falls below p^2, at which point the elements are gathered on one processor and
the problem is solved by sequential selection. We found this to be appropriate by experimentation,
to avoid the overhead of communication when each processor contains only a small number of
elements. For each value of the total number of elements, we have run each of the algorithms on
two types of inputs - random and sorted. In the random case, n/p elements are randomly generated on
each processor. To eliminate peculiar cases while using the random data, we ran each experiment on
five different random sets of data and used the average running time. Random data sets constitute
close to the best case input for the selection algorithms. In the sorted case, the n numbers are globally sorted, with processor P_i containing the numbers i(n/p) through (i+1)(n/p) − 1. The sorted input is close to the worst-case input for the selection algorithms. For example, after
the first iteration of a selection algorithm using this input, approximately half of the processors lose
all their data while the other half retains all of their data. Without load balancing, the number of
active processors is cut down by about half every iteration. The same is true even if modified order
maintaining load balance and global exchange load balancing algorithms are used. After every
iteration, about half the processors contain zero elements leading to severe load imbalance for the
load balancing algorithm to rectify. Only some of the data we have collected is illustrated in order
to save space.
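The halving of active processors on sorted input can be illustrated with a small simulation. The sketch below is our own simplification (not code from the paper): it assumes exact median pivots and a block distribution, and tracks which processors still hold candidate elements as the selection range narrows:

```python
def active_processors_per_iteration(n, p, k):
    """Median selection on globally sorted data: processor i holds the
    i-th block of n/p consecutive numbers. Each iteration keeps the half
    of the remaining (contiguous) range that contains rank k and counts
    processors whose block intersects the surviving range."""
    lo, hi = 0, n
    counts = []
    while hi - lo > n // p:          # stop once one block remains
        mid = (lo + hi) // 2         # exact median of the remaining range
        if k < mid:
            hi = mid                 # desired rank lies in the lower half
        else:
            lo = mid                 # desired rank lies in the upper half
        block = n // p
        active = sum(1 for i in range(p)
                     if max(lo, i * block) < min(hi, (i + 1) * block))
        counts.append(active)
    return counts
```

For n = 1024, p = 16, and k = n/2 this yields [8, 4, 2, 1]: the number of processors holding useful data roughly halves every iteration, which is exactly the imbalance the load balancing algorithms must rectify.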
The execution times of the four different selection algorithms without using load balancing for random data (except for the median of medians algorithm, which requires load balancing and for which global exchange is used) with 128k, 512k and 2048k numbers are shown in Figure 8. The graphs clearly
demonstrate that all four selection algorithms scale well with the number of processors. An immediate
observation is that the randomized algorithms are superior to the deterministic algorithms by
an order of magnitude. For example, the median of medians algorithm ran at least 16 times slower and the bucket-based selection algorithm ran at least 9 times slower than
either of the randomized algorithms. Such an order of magnitude difference is uniformly observed
even using any of the load balancing techniques and also in the case of sorted data. This is not
surprising since the constants involved in the deterministic algorithms are higher due to recursively
finding the estimated median. Among the deterministic algorithms, the bucket-based approach
consistently performed better than the median of medians approach by about a factor of two for
random data. For sorted data, the bucket-based approach which does not use any load balancing
ran only about 25% slower than median of medians approach with load balancing.
In each iteration of the parallel selection algorithm, each processor also performs a local selection
algorithm. Thus the algorithm can be split into a parallel part where the processors combine the
results of their local selections and a sequential part involving executing the sequential selection
locally on each processor. In order to convince ourselves that randomized algorithms are superior
in either part, we ran the following hybrid experiment. We ran both the deterministic parallel
selection algorithms replacing the sequential selection parts by randomized sequential selection. The
running time of the hybrid algorithms was in between the deterministic and randomized parallel
selection algorithms. We made the following observation: The factor of improvement in randomized
parallel selection algorithms over deterministic parallel selection is due to improvements in both the
sequential and parallel parts. For large n, much of the improvement is due to the sequential part.
For large p, the improvement is due to the parallel part. We conclude that randomized algorithms
are faster in practice and drop the deterministic algorithms from further consideration.
[Figure 8 plots omitted: time (in seconds) vs. number of processors for the Median of Medians, Bucket Based, Randomized, and Fast Randomized selection algorithms.]
Figure
8: Performance of different selection algorithms without load balancing (except for median
of medians selection algorithm for which global exchange is used) on random data sets.
[Figure 9 plots omitted: time (in seconds) vs. number of processors for No balancing, Modified order maintaining load balance, Dimension exchange, and Global exchange on random data (n=512k, n=2M) and sorted data (n=512k, n=2M).]
Figure
9: Performance of randomized selection algorithm with different load balancing strategies
on random and sorted data sets.
To facilitate an easier comparison of the two randomized algorithms, we show their performance
separately in Figure 8. Fast randomized selection is asymptotically superior to randomized selection
for worst-case data. For random data, the expected running times of the randomized and fast randomized algorithms are O(n/p + (τ + μ) log p log n) and O(n/p + (τ + μ) log p log log n), respectively.
Consider the effect of increasing n for a fixed p. Initially, the difference in log n and log log n is not
significant enough to offset the overhead due to sorting in fast randomized selection and randomized
selection performs better. As n is increased, fast randomized selection begins to outperform
randomized selection. For large n, both algorithms converge to the same execution time since the O(n/p) term dominates. Reversing this point of view, we find that for any fixed n, as we increase p, randomized selection will eventually perform better, and this can be readily observed in the graphs.
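This convergence can be checked numerically. The sketch below is our illustration of the trade-off, using the hedged cost forms above with purely illustrative constants (tau and sort_overhead are assumptions, not measured CM-5 values):

```python
import math

def t_randomized(n, p, tau=10.0):
    # per-processor work O(n/p) plus O(log n) iterations of O(tau log p) communication
    return n / p + tau * math.log2(p) * math.log2(n)

def t_fast_randomized(n, p, tau=10.0, sort_overhead=200.0):
    # O(log log n) iterations, each paying communication plus a sample-sort overhead
    iterations = math.log2(math.log2(n))
    return n / p + (tau * math.log2(p) + sort_overhead) * iterations

def relative_gap(n, p):
    # relative difference between the two predicted running times
    a, b = t_randomized(n, p), t_fast_randomized(n, p)
    return abs(a - b) / max(a, b)
```

Under this model, for a fixed p the relative gap between the two predicted times shrinks as n grows (the O(n/p) term dominates), while for a fixed n the gap widens as p increases, matching the behavior observed in the graphs.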
The effect of the various load balancing techniques on the randomized algorithms for random
data is shown in Figure 9 and Figure 10. The execution times are consistently better without
using any load balancing than using any of the three load balancing techniques. Load balancing
[Figure 10 plots omitted: time (in seconds) vs. number of processors for No balancing, Modified order maintaining load balance, Dimension exchange, and Global exchange on random data (n=512k, n=2M) and sorted data (n=512k, n=2M).]
Figure
10: Performance of fast randomized selection algorithm with different load balancing strategies
on random and sorted data sets.
for random data almost always had a negative effect on the total execution time and this effect
is more pronounced in randomized selection than in fast randomized selection. This is explained
by the fact that fast randomized selection has fewer iterations (O(log log n) vs. O(log n)) and less
data in each iteration.
The observation that load balancing has a negative effect on the running time for random
data can be easily explained: In load balancing, a processor with more elements sends some of
its elements to another processor. The time taken to send the data is justified only if the time
taken to process this data in future iterations is more than the time for sending it. Suppose that
a processor sends m elements to another processor. The processing of this data involves scanning
it in each iteration based on an estimated median and discarding part of the data. For random
data, it is expected that half the data is discarded in every iteration. Thus, the estimated total
time to process this data is O(m). The time for sending the data is Θ(μm), which is also O(m). By observation, the constants involved are such that load balancing takes more time than the reduction in running time it provides.
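The argument that processing migrated data costs only O(m) can be made concrete. In the sketch below (our illustration, not code from the paper), a block of m elements is scanned once per iteration and halved each time, so the total work is bounded by 2m, i.e., the same order as the cost of shipping the block in the first place:

```python
def total_scan_work(m):
    """Total number of elements scanned when a block of m elements is
    halved every iteration (the expected behavior on random data)."""
    work = 0
    while m > 0:
        work += m      # one scan against the estimated median
        m //= 2        # about half the elements are discarded
    return work
```

For m = 10^6 the total work stays just under 2m, so moving m elements only pays off if the transfer cost is clearly cheaper than roughly 2m local operations, which the measured constants contradict.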
[Figure 11 plots omitted: time (in seconds) vs. number of processors comparing the two randomized selection algorithms on sorted data for n=512k and n=2M.]
Figure 11: Performance of the two randomized selection algorithms on sorted data sets using the best load balancing strategy for each algorithm: no load balancing for randomized selection and modified order maintaining load balancing for fast randomized selection.
Consider the effect of the various load balancing techniques on the randomized algorithms for
sorted data (see Figure 9 and Figure 10). Even in this case, the cost of load balancing more than
offset the benefit of it for randomized selection. However, load balancing significantly improved the
performance of fast randomized selection.
In Figure 11, we see a comparison of the two randomized algorithms for sorted data with the best load balancing strategy for each algorithm: no load balancing for randomized selection and modified order maintaining load balancing for the fast randomized algorithm (which performed slightly
better than other strategies). We see that, for large n, fast randomized selection is superior. We
also observe (see Figure 11 and Figure 8) that the fast randomized selection has better comparative
advantage over randomized selection for sorted data.
Finally, we consider the time spent in load balancing itself for the randomized algorithms on
both random and sorted data (see Figure 12 and Figure 13). For both types of data inputs, fast
randomized selection spends much less time than randomized selection in balancing the load. This
is reflective of the number of times the load balancing algorithms are utilized (O(log log n) vs.
O(log n)). Clearly, the cost of load balancing increases with the amount of imbalance and the
number of processors. For random data, the overhead due to load balancing is quite tolerable
for the range of n and p used in our experiments. For sorted data, a significant fraction of the
execution time of randomized selection is spent in load balancing. Load balancing never improved
the running time of randomized selection. Fast randomized selection benefited from load balancing
for sorted data. The choice of the load balancing algorithm did not make a significant difference in
the running time.
Consider the variance in the running times between random and sorted data for both the
[Figure 12 plots omitted: load balancing time (in seconds) vs. number of processors for randomized selection on random and sorted data.]
Figure 12: Performance of the randomized selection algorithm with different load balancing strategies: No balancing (N), Order maintaining load balancing (O), Dimension exchange method (D), and Global exchange (G).
[Figure 13 plots omitted: load balancing time (in seconds) vs. number of processors for fast randomized selection on random and sorted data.]
Figure 13: Performance of the fast randomized selection algorithm with different load balancing strategies: No balancing (N), Order maintaining load balancing (O), Dimension exchange method (D), and Global exchange (G).
Primitive            Two-level model    Hypercube          Mesh
Broadcast            O((τ+μ) log p)     O((τ+μ) log p)     O(τ log p + μ√p)
Combine              O((τ+μ) log p)     O((τ+μ) log p)     O(τ log p + μ√p)
Parallel Prefix      O((τ+μ) log p)     O((τ+μ) log p)     O(τ log p + μ√p)
Gather               O(τ log p + μp)    O(τ log p + μp)    O(τ log p + μp)
Global Concatenate   O(τ log p + μp)    O(τ log p + μp)    O(τ log p + μp)
Transportation       O(τp + μt)         O(τp + μt)         O(τ√p + μt√p)
Table 3: Running time for basic communication primitives on meshes and hypercubes using cut-through routing; τ is the message startup time and μ is the per-word transfer time. For the transportation primitive, t refers to the maximum of the total size of messages sent out or received by any processor.
randomized algorithms. The randomized selection algorithm ran 2 to 2.5 times faster for random
data than for sorted data (see Figure 12). Using any of the load balancing strategies, there is very
little variance in the running time of fast randomized selection (Figure 13). The algorithm performs
equally well on both best and worst-case data. For the case of 128 processors the stopping criterion
results in execution of one iteration in most runs. Thus, load balancing has a detrimental
effect on the overall cost. We had decided to choose the same stopping criterion to provide a fair
comparison between the different algorithms. However, an appropriate fine tuning of this stopping
criterion and corresponding increase in the number of iterations should provide time improvements
with load balancing for 2M data size on 128 processors.
6 Selection on Meshes and Hypercubes
Consider the analysis of the algorithms presented for cut-through routed hypercubes and square
meshes with p processors. The running time of the various algorithms on meshes and hypercubes is
easily obtained by substituting the corresponding running times for the basic parallel communication
primitives used by the algorithms. Table 3 shows the time required for each parallel primitive on the two-level model of computation, a hypercube of p processors, and a √p × √p mesh. The analysis is omitted to save space; similar analysis can be found in [15].
Load balancing can be achieved by using the communication pattern of the transportation
primitive [24], which involves two all-to-all personalized communications. Each processor has O(n/p) elements to be sent out. The worst-case time of modified order maintaining load balance is O(τp + μn/p) for the hypercube and O(τ√p + μ(n/p)√p) for the mesh, when n is large enough for the transportation primitive to apply. The dimension exchange load balancing algorithm on the hypercube has a worst-case run time of O(τ log p + μ(n/p) log p), and on the mesh it is O(τ√p + μ(n/p)√p). The global exchange load balancing algorithm has the same time complexities as the modified order maintaining load balancing algorithm, both on the hypercube and the mesh. These costs must be added to the selection algorithms if analysis of the algorithms with load balancing is desired.
From the table, the running times of all the primitives remain the same on a hypercube as on the two-level model, and hence the analysis and the experimental results obtained for the two-level model are also valid for hypercubes. Thus, the time complexity of all the selection algorithms is the same on the hypercube as on the two-level model discussed in this paper. If the ratio of unit communication cost to unit computation cost is large, i.e., the processor is much faster than the underlying communication network, the cost of load balancing will offset its advantages, and the fast randomized algorithm without load balancing will have superior performance in practical scenarios.
Load balancing on the mesh results in asymptotically worse time requirements. We would expect load balancing to be useful for a small number of processors. For a large number of processors, even one step of load balancing would dominate the overall time and hence would not be effective. In the following, we present the performance for best-case and worst-case data on a mesh.
1. Deterministic Algorithms: The communication primitives used in the deterministic selection algorithms are Gather, Broadcast and Combine. Even though Broadcast and Combine require more time than on the two-level model, their cost is absorbed by the time required for the Gather operation, which is identical on the mesh and the two-level model. Hence, the complexity of the deterministic algorithms on the mesh remains the same as on the two-level model: the total time requirements for the median of medians algorithm are O(n/p + (τ log p + μp) log n) for the best case and O((n/p) log n + (τ log p + μp) log n) for the worst case. The bucket-based deterministic algorithm runs in O((n/p) log n + (τ log p + μp) log n) time in the worst case without load balancing.
2. Randomized Algorithms: The communication for the randomized algorithm includes one PrefixSum, one Broadcast and one Combine. The communication time on a mesh for one iteration of the randomized algorithm is O(τ log p + μ√p), making its overall time complexity O(n/p + (τ log p + μ√p) log n) for the best case and O((n/p) log n + (τ log p + μ√p) log n) for the worst-case data.
The fast randomized algorithm involves a parallel sort of the sample, for which we use bitonic sort. A sample of n^ε keys (0 < ε < 1) is chosen from the n elements and sorted. On the mesh, sorting a sample of n^ε elements using bitonic sort takes O(((n^ε)/p) log n^ε + (τ + μ(n^ε)/p)√p) time; ε should be acceptably small to keep the sorting phase from dominating every iteration. The run time of fast randomized selection on the mesh is O(n/p + (τ log p + μ√p) log log n) for the best-case data. For the worst-case data the time requirement would be O((n/p) log log n + (τ log p + μ√p) log log n).
7 Weighted selection
Consider a set S = {x1, x2, ..., xn} of n elements, each element xi having a corresponding weight wi attached to it. The problem of weighted selection with weight target k is to find the element xi such that the total weight of all elements smaller than xi is less than k, while the total weight of all elements less than or equal to xi is at least k. As an example, the weighted median will be the element that divides the data set S, with sum of weights W, into two sets S1 and S2 with approximately equal sums of weights. Simple modifications can be made to the deterministic
algorithms to adapt them for weighted selection. In iteration j of the selection algorithms, a set S^(j) of elements is split into two subsets S1^(j) and S2^(j), and a count of elements is used to choose the subset in which the desired element can be found. Weighted selection is performed as follows: First, the elements of S^(j) are divided into two subsets S1^(j) and S2^(j) as in the selection algorithm. The sum of weights of all the elements in subset S1^(j) is computed. Let k_j be the weight metric in iteration j. If k_j is greater than the sum of weights w(S1^(j)), the problem reduces to performing weighted selection on S2^(j) with k_{j+1} = k_j - w(S1^(j)). Otherwise, we need to perform weighted selection on S1^(j) with k_{j+1} = k_j. This method retains the property
that a guaranteed fraction of elements will be discarded at each iteration keeping the worst case
number of iterations to be O(log n). Therefore, both the median of medians selection algorithm
and the bucket-based selection algorithm can be used for weighted selection without any change in
their run time complexities.
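The modification above can be sketched for the sequential case. This is our illustration (an exact-median pivot is used for simplicity; the parallel versions combine the weight counts with Combine/PrefixSum primitives, which is not shown here):

```python
def weighted_select(pairs, k):
    """Return x such that the sum of weights of elements < x is < k
    and the sum of weights of elements <= x is >= k.
    pairs: list of (value, weight) tuples; k: weight target."""
    pairs = list(pairs)
    while True:
        pivot = sorted(p[0] for p in pairs)[len(pairs) // 2]  # estimated median
        s1 = [p for p in pairs if p[0] < pivot]
        s2 = [p for p in pairs if p[0] > pivot]
        w1 = sum(w for _, w in s1)
        weq = sum(w for v, w in pairs if v == pivot)
        if k <= w1:
            pairs = s1              # desired element lies in S1; k unchanged
        elif k <= w1 + weq:
            return pivot            # cumulative weight crosses k at the pivot
        else:
            k -= w1 + weq           # recurse into S2 with reduced weight target
            pairs = s2
```

With weights (10, 1), (20, 3), (30, 2) and total weight W = 6, the weighted median (k = W/2 = 3) is 20, since the cumulative weight first reaches 3 there.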
The randomized selection algorithm can also be modified in the same way. However, the same
modification to the fast randomized selection will not work. This algorithm works by sorting a
sample of the data set and picking up two elements that with high probability lie on either side
of the element with rank k in sorted order. In weighted selection, the weights determine the
position of the desired element in the sorted order. Thus, one may be tempted to select a sample of
weights. However, this does not work since the weights of the elements should be considered in the
order of the sorted data and a list of the elements sorted according to the weights does not make
sense. Hence, randomized selection without load balancing is the best choice for parallel weighted
selection.
8 Conclusions
In this paper, we have tried to identify the selection algorithms that are most suited for fast execution
on coarse-grained distributed memory parallel computers. After surveying various algorithms,
we have identified four algorithms and have described and analyzed them in detail. We also considered
three load balancing strategies that can be used for balancing data during the execution of
the selection algorithms.
Based on the analysis and experimental results, we conclude that randomized algorithms are
faster by an order of magnitude. If determinism is desired, the bucket-based approach is superior
to the median of medians algorithm. Of the two randomized algorithms, fast randomized selection
with load balancing delivers good performance for all types of input distributions with very little
variation in the running time. The overhead of using load balancing with well-behaved data is
insignificant. Any of the load balancing techniques described can be used without significant
variation in the running time. Randomized selection performs well for well-behaved data. There is
a large variation in the running time between best and worst-case data. Load balancing does not
improve the performance of randomized selection irrespective of the input data distribution.
9 Acknowledgements
We are grateful to Northeast Parallel Architectures Center and Minnesota Supercomputing Center
for allowing us to use their CM-5. We would like to thank David Bader for providing us a copy of
his paper and the corresponding code.
--R
Deterministic selection in O(log log N) parallel time
The design and analysis of parallel algorithms
Parallel selection in O(log log n) time using O(n
An optimal algorithm for parallel selection
Practical parallel algorithms for dynamic data redistribution
Technical Report CMU-CS-90-190
Time bounds for selection
A parallel median algorithm
Introduction to algorithms.
Dynamic load balancing for distributed memory multiprocessors
Expected time bounds for selection
Selection on the Reconfigurable Mesh
An introduction to parallel algorithms
Introduction to Parallel Computing: Design and Analysis of Algorithms
Efficient computation on sparse interconnection networks
Unifying themes for parallel selection
Derivation of randomized sorting and selection algorithms
Randomized parallel selection
Programming a Hypercube Multicomputer
Efficient parallel algorithms for selection and searching on sorted matrices
Finding the median
Random Data Accesses on a Coarse-grained Parallel Machine II
Load balancing on a hypercube
--TR
--CTR
Ibraheem Al-Furaih , Srinivas Aluru , Sanjay Goil , Sanjay Ranka, Parallel Construction of Multidimensional Binary Search Trees, IEEE Transactions on Parallel and Distributed Systems, v.11 n.2, p.136-148, February 2000
David A. Bader, An improved, randomized algorithm for parallel selection with an experimental study, Journal of Parallel and Distributed Computing, v.64 n.9, p.1051-1059, September 2004
Marc Daumas , Paraskevas Evripidou, Parallel Implementations of the Selection Problem: A Case Study, International Journal of Parallel Programming, v.28 n.1, p.103-131, February 2000 | parallel algorithms;selection;parallel computers;coarse-grained;median finding;randomized algorithms;load balancing;meshes;hypercubes |
262003 | Parallel Incremental Graph Partitioning. | Partitioning graphs into equally large groups of nodes while minimizing the number of edges between different groups is an extremely important problem in parallel computing. For instance, efficiently parallelizing several scientific and engineering applications requires the partitioning of data or tasks among processors such that the computational load on each node is roughly the same, while communication is minimized. Obtaining exact solutions is computationally intractable, since graph partitioning is NP-complete. For a large class of irregular and adaptive data parallel applications (such as adaptive graphs), the computational structure changes from one phase to another in an incremental fashion. In incremental graph-partitioning problems the partitioning of the graph needs to be updated as the graph changes over time; a small number of nodes or edges may be added or deleted at any given instant. In this paper, we use a linear programming-based method to solve the incremental graph-partitioning problem. All the steps used by our method are inherently parallel and hence our approach can be easily parallelized. By using an initial solution for the graph partitions derived from recursive spectral bisection-based methods, our methods can achieve repartitioning at considerably lower cost than can be obtained by applying recursive spectral bisection. Further, the quality of the partitioning achieved is comparable to that achieved by applying recursive spectral bisection to the incremental graphs from scratch. | Introduction
Graph partitioning is a well-known problem for which fast solutions are extremely important in parallel
computing and in research areas such as circuit partitioning for VLSI design. For instance, parallelization
of many scientific and engineering problems requires partitioning data among the processors in such a
fashion that the computation load on each node is balanced, while communication is minimized. This
is a graph-partitioning problem, where nodes of the graph represent computational tasks, and edges describe
the communication between tasks with each partition corresponding to one processor. Optimal partitioning
would allow optimal parallelization of the computations with the load balanced over various processors
and with minimized communication time. For many applications, the computational graph can be derived
only at runtime and requires that graph partitioning also be done in parallel. Since graph partitioning is
NP-complete, obtaining suboptimal solutions quickly is desirable and often satisfactory.
For a large class of irregular and adaptive data parallel applications such as adaptive meshes [2], the
computational structure changes from one phase to another in an incremental fashion. In incremental
graph-partitioning problems, the partitioning of the graph needs to be updated as the graph changes over
time; a small number of nodes or edges may be added or deleted at any given instant. A solution of the
previous graph-partitioning problem can be utilized to partition the updated graph, such that the time
required will be much less than the time required to reapply a partitioning algorithm to the entire updated
graph. If the graph is not repartitioned, it may lead to imbalance in the time required for computation on
each node and cause considerable deterioration in the overall performance. For many of these problems the
graph may be modified after every few iterations (albeit incrementally), and so the remapping must have
a lower cost relative to the computational cost of executing the few iterations for which the computational
structure remains fixed. Unless this incremental partitioning can itself be performed in parallel, it may
become a bottleneck.
Several suboptimal methods have been suggested for finding good solutions to the graph-partitioning
problem. For many applications, the computational graph is such that the vertices correspond to two-
or three-dimensional coordinates and the interaction between computations is limited to vertices that are
physically proximate. This information can be exploited to achieve the partitioning relatively quickly by
clustering physically proximate points in two or three dimensions. Important heuristics include recursive
coordinate bisection, inertial bisection, scattered decomposition, and index based partitioners [3, 6, 12, 11, 14,
16]. There are a number of methods which use explicit graph information to achieve partitioning. Important
heuristics include simulated annealing, mean field annealing, recursive spectral bisection, recursive spectral
multisection, mincut-based methods, and genetic algorithms [1, 4, 5, 7, 8, 9, 10, 13]. Since these methods use
explicit graph information, they have wider applicability and produce better quality partitioning.
In this paper we develop methods which use explicit graph information to perform incremental graph-
partitioning. Using recursive spectral bisection, which is regarded as one of the best-known methods for
graph partitioning, our methods can partition the new graph at considerably lower cost. The quality of
partitioning achieved is close to that achieved by applying recursive spectral bisection from scratch. Further,
our algorithms are inherently parallel.
The rest of the paper is outlined as follows. Section 2 defines the incremental graph-partitioning problem.
Section 3 describes linear programming-based incremental graph partitioning. Section 4 describes a multilevel
approach to solve the linear programming-based incremental graph partitioning. Experimental results of our
methods on sample meshes are described in Section 5, and conclusions are given in Section 6.
2 Problem definition
Consider a graph G = (V, E), where V represents a set of vertices, E represents a set of undirected edges, the number of vertices is given by |V|, and the number of edges is given by |E|. The graph-partitioning problem can be defined as an assignment scheme M: V -> P that maps vertices to partitions. We denote by B(q) the set of vertices assigned to a partition q, i.e., B(q) = {v in V : M(v) = q}. The weight w_i corresponds to the computation cost (or weight) of the vertex v_i. The cost of an edge w(v1, v2) is given by the amount of interaction between vertices v1 and v2. The weight of every partition can be defined as
W(q) = sum of w_i over all v_i in B(q).
The cost of all the outgoing edges from a partition represents the total amount of communication cost and is given by
C(q) = sum of w(u, v) over all edges (u, v) with M(u) = q and M(v) != q.
We would like to make an assignment such that the time spent by every node is minimized, i.e., minimize max_q (W(q) + C(q)/β), where β represents the ratio of the cost of unit computation to the cost of unit communication on a machine. Assuming computational loads are nearly balanced (W(0) ≈ W(1) ≈ ... ≈ W(P-1)), the second term needs to be minimized. In the literature, sum_q C(q) has also been used to represent the communication cost.
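The quantities W(q) and C(q) can be computed directly from a mapping. The sketch below is our own helper (the function and argument names are assumptions, not from the paper); each cut edge is charged to both of its endpoint partitions:

```python
def partition_costs(edges, weights, mapping, P):
    """edges: dict (u, v) -> edge weight; weights: dict vertex -> vertex weight;
    mapping: dict vertex -> partition id in 0..P-1.
    Returns (W, C): per-partition computation weight and cut-edge cost."""
    W = [0] * P
    C = [0.0] * P
    for v, wv in weights.items():
        W[mapping[v]] += wv
    for (u, v), wuv in edges.items():
        if mapping[u] != mapping[v]:     # edge crosses a partition boundary
            C[mapping[u]] += wuv
            C[mapping[v]] += wuv
    return W, C
```

For a path 0-1-2 with unit weights and vertices {0, 1} in partition 0 and {2} in partition 1, this gives W = [2, 1] and C = [1, 1], since only the edge (1, 2) is cut.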
Assume that a solution is available for a graph G(V, E) by using one of the many available methods in the literature, e.g., the mapping function M is available such that W(0) ≈ W(1) ≈ ... ≈ W(P-1) and the communication cost is close to optimal. Let G'(V', E') be an incremental graph of G(V, E): some vertices are added to V and some are deleted from it; similarly, some edges are added to E and some are deleted. We would like to find a new mapping M': V' -> P such that the new partitioning is as load balanced as possible and the communication cost is minimized. The methods described in this paper assume that G'(V', E') is sufficiently similar to G(V, E) that this can be achieved, i.e., the number of vertices and edges added/deleted is a small fraction of the original number of vertices and edges.
3 Incremental partitioning
In this section we formulate incremental graph partitioning in terms of linear programming. A high-level
overview of the four phases of our incremental graph-partitioning algorithm is shown in Figure 1. Some
notation is in order.
Let
1. P be the number of partitions,
2. V_i represent the set of vertices in partition i,
3. λ = |V'| / P represent the average load for each partition.
The four steps are described in detail in the following sections.
Step 1: Assign the new vertices to one of the partitions (given by M').
Step 2: Layer each partition to find the closest partition for each vertex (given by L').
Step 3: Formulate the linear programming problem based on the mapping of Step 1 and balance loads (i.e., modify M'), minimizing the total number of changes in M'.
Step 4: Refine the mapping in Step 2 to reduce the communication cost.
Figure 1: The different steps used in our incremental graph-partitioning algorithm.
3.1 Assigning an initial partition to the new nodes
The first step of the algorithm is to assign an initial partition to the nodes of the new graph (given by M'). A simple method for initializing M'(V') is given as follows. Let M'(v) = M(v) for all vertices v in V ∩ V'. For all the vertices v in V' - V,
M'(v) = M(x), where x ∈ V ∩ V' minimizes d(v, x),
and d(v, x) is the shortest distance between v and x in the graph G'(V', E'). For the examples considered in this paper we assume that G'(V', E') is connected. If this is not the case, several other strategies can be used:
1. If G(V ∩ V', E ∩ E') is connected, this graph can be used instead for the calculation of M'(V').
2. If G'(V', E') is not connected, then the new nodes that are not connected to any of the old nodes can be clustered together (into potentially disjoint clusters) and assigned to the partition that has the least number of vertices.
For the rest of the paper we will assume that M'(v) can be calculated using the definition above, although the strategies developed in this paper are, in general, independent of this mapping. Further, for ease of presentation, we will assume that the edge and the vertex weights are of unit value. All of our algorithms can be easily modified if this is not the case. Figure 2 (a) describes the mapping of each of the vertices of a graph.
Figure 2 (b) describes the mapping of the additional vertices using the above strategy.
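The nearest-old-vertex rule is exactly a multi-source breadth-first search with all old vertices as sources. A minimal sketch, assuming G'(V', E') is connected (the function and variable names are ours, not from the paper; ties are broken by visit order):

```python
from collections import deque

def assign_new_vertices(adj, mapping):
    """adj: dict vertex -> list of neighbors in G'(V', E').
    mapping: dict old vertex -> partition; new vertices are absent.
    Assigns each new vertex the partition of the nearest old vertex
    (shortest-path distance), ties broken arbitrarily."""
    M = dict(mapping)
    q = deque(mapping)            # all old vertices are BFS sources at distance 0
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in M:        # first visit gives the shortest distance
                M[v] = M[u]
                q.append(v)
    return M
```

On a path 0-1-2-3 where old vertices 0 and 3 belong to partitions 0 and 1 respectively, the new vertices 1 and 2 are assigned to partitions 0 and 1, each inheriting from its nearest old vertex.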
3.2 Layering each partition
The above mapping would ordinarily generate partitions of unequal size. We would like to move vertices from one partition to another to achieve load balancing, while keeping the communication cost as small as possible. This is achieved by making sure that the vertices transferred between two partitions are close to the boundary of the two partitions. We assign each vertex of a given partition a label L'(v) corresponding to the closest different partition (ties are broken arbitrarily):
Figure 2: (a) Initial Graph (b) Incremental Graph (new vertices are shown by "*").
L'(v) = M'(x), where x is such that d(v, x) = min over all y with M'(y) ≠ M'(v) of d(v, y), and d(v, x) is the shortest distance in the graph between v and x.
A simple algorithm to perform the layering is given in Figure 3. It assumes the graph is connected. Let
α_ij denote the number of such vertices of partition i that can be moved to partition j. For the example
graph of Figure 2 (b), the labels of all the vertices are given in Figure 4. A label 2 on a vertex in partition 1 corresponds
to the fact that this vertex belongs to the set that contributed to α_12.
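The layering can be sketched as a breadth-first propagation from the partition boundaries inward: the level-0 vertices are those with a neighbor in another partition, and each subsequent level inherits its label from the previous one. A hedged Python sketch (data layout and names are illustrative assumptions, not the paper's implementation):

```python
from collections import deque

def layer_partition(adj, part):
    """For each vertex, find the closest external partition (ties arbitrary).
    adj: vertex -> list of neighbors; part: vertex -> partition id.
    Returns (label, dist): label[v] is the nearest other partition, dist[v] its level."""
    label, dist = {}, {}
    frontier = deque()
    for v in adj:                       # level 0: vertices on a partition boundary
        for w in adj[v]:
            if part[w] != part[v]:
                label[v], dist[v] = part[w], 0
                frontier.append(v)
                break
    while frontier:                     # propagate labels inward, level by level
        v = frontier.popleft()
        for w in adj[v]:
            if part[w] == part[v] and w not in label:
                label[w], dist[w] = label[v], dist[v] + 1
                frontier.append(w)
    return label, dist
```

The count α_ij of the text is then simply the number of vertices v in partition i with label[v] = j.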
3.3 Load balancing
Let l_ij denote the number of vertices to be moved from partition i to partition j to achieve load balance.
There are several ways of achieving load balancing. However, since one of our goals is to minimize communication
cost, we would like to minimize Σ_{i,j} l_ij, because this would correspond to a minimization of the
amount of vertex movement (or "deformity") in the original partitions. Thus the load-balancing step can be
formally defined as the following linear programming problem.
Minimize Σ_{i,j} l_ij (10)
subject to
0 ≤ l_ij ≤ α_ij (11)
|V_i| − Σ_j l_ij + Σ_j l_ji = |V |/P for each partition i (12)
where |V_i| is the number of vertices currently in partition i. Constraint (12) corresponds to the load balance condition.
The above formulation is based on the assumption that changes to the original graph are small and
the initial partitioning is well balanced, hence moving the boundaries by a small amount will give balanced
partitioning with low communication cost.
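To illustrate how this formulation is assembled, the sketch below builds the objective vector, the upper bounds of constraint (11), and the balance equations of constraint (12) from a table of α_ij values. The variable ordering and names are assumptions of this sketch; the resulting arrays would be handed to a simplex routine, as the paper does:

```python
def build_lp(alpha, sizes, target):
    """Build: minimize sum(l_ij) s.t. 0 <= l_ij <= alpha[(i, j)] and, for each i,
    sizes[i] - sum_j l_ij + sum_j l_ji == target[i]  (balance constraint).
    alpha: {(i, j): movable-vertex bound}; sizes/target: {partition: count}."""
    pairs = sorted(alpha)               # variable order: one l_ij per movable pair
    cost = [1.0] * len(pairs)           # objective: minimize total vertex movement
    upper = [alpha[p] for p in pairs]   # constraint (11): l_ij <= alpha_ij
    A_eq, b_eq = [], []
    for i in sorted(sizes):             # constraint (12): one equation per partition
        row = [0.0] * len(pairs)
        for k, (a, b) in enumerate(pairs):
            if a == i: row[k] = -1.0    # vertices leaving partition i
            if b == i: row[k] = +1.0    # vertices entering partition i
        A_eq.append(row)
        b_eq.append(target[i] - sizes[i])
    return cost, upper, A_eq, b_eq
```

For two partitions of sizes 6 and 4 with α_12 = 3 and α_21 = 2, this yields one equality row per partition with right-hand sides −1 and +1, so the minimal solution moves a single vertex from partition 1 to partition 2.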
{ map[v[j]] represents the mapping of vertex j. }
{ ... represents the j th element of the local adjacency list in partition i. }
{ xadj_i[j] represents the starting address of vertex j in the local adjacency list of partition i. }
{ S_(j,k) of partition i represents the set of vertices of partition i at a distance k from a node in partition j. }
{ Neighbor_i represents the set of partitions which have common boundaries with partition i. }
For each partition i do
For vertex do
For do
Count
if
l
Add v[j] into S_(tag,0)
f where
level := 0
repeat
For do
For vertex v[j] ∈ S_(k,level)
do
For l /\Gamma xadj i [v[j]] to xadj do
count
Add v[j] into tmpS
level
For vertex v[j] ∈ tmpS do
Add v[j] into S_(tag,level)
f where count i
until
For do
0 ≤ k < level
Figure 3: Layering Algorithm
Figure 4: Labeling the nodes of a graph to the closest outside partition; (a) a microscopic view of the layering
for a graph near the boundary of three partitions; (b) layering of the graph in Figure 2 (b); no edges are
shown.
Constraints in (11): 0 ≤ l_ij ≤ α_ij for each boundary pair of partitions (i, j)
Constraints in (12): one load-balance equation per partition
Solution using the Simplex Method: the nonzero l_ij values; all other values are zero.
Figure 5: Linear programming formulation and its solution, based on the mapping of the graph in Figure 2 (b), using the labeling information in Figure 4 (b).
There are several approaches to solving the above linear programming problem. We decided to use the
simplex method because it has been shown to work well in practice and because it can be easily parallelized. 1
The simplex formulation of the example in Figure 2 is given in Figure 5, together with the corresponding
solution. The new partitioning is given in Figure 6.
Figure 6: The new partition of the graph in Figure 2 (b) after the Load Balancing step.
The above set of constraints may not have a feasible solution. One approach is to relax the constraint in
(11) and not have the l_ij ≤ α_ij constraint. Clearly, this would achieve load balance but may lead to major
modifications in the mapping. Another approach is to replace the constraint in (12) by a relaxed version
that allows each partition size to differ from |V |/P by at most Δ. Assuming Δ > 0, this would not achieve
load balancing in one step, but several such steps can be
applied to do so. If a feasible solution cannot be found with a reasonable value of Δ (within an upper bound
C), it would be better to start partitioning from scratch or solve the problem by adding only a fraction of
the nodes at a given time, i.e., solve the problem in multiple stages. Typically, such cases arise when all the
new nodes correspond to a few partitions and the amount of incremental change is greater than the size of
one partition.
3.4 Refinement of partitions
The formulation in the previous section achieves load balance but does not try explicitly to reduce the number
of cross-edges. The minimization term in (10) and the constraint in (11) indirectly keep the cross-edges to
a minimum under the assumption that the initial partition is good. In this section we describe a linear
programming-based strategy to reduce the number of cross-edges, while still maintaining the load balance.
This is achieved by finding all the vertices of partition i on the boundary of partitions i and j such that
the cost of edges to the vertices in j are larger than the cost of edges to local vertices (Figure 7), i.e., the
total cost of cross-edges will decrease by moving the vertex from partition i to j, which will affect the load
1 We have used a dense version of the simplex algorithm. The total time can potentially be reduced by using a sparse representation.
Figure 7: Choosing vertices for refinement. (a) Microscopic view of a vertex which can be moved from
partition P_i to P_j , reducing the number of cross edges; (b) the set of vertices with the above property in the
partition of Figure 6.
balance. In the following a linear programming formulation is given that moves the vertices while keeping
the load balance.
Let M 00 (v) denote the mapping of each vertex after the load-balancing step. Let out(k, j)
represent the number of edges of vertex k in partition M 00 (k) connected to partition j (j ≠ M 00 (k)), and let
local(k) represent the number of vertices that vertex k is connected to in partition M 00 (k). Let b_ij represent the
number of vertices in partition i which have at least as many outgoing edges to partition j as local edges.
We would like to maximize the number of vertices moved so that moving a vertex will not increase the
cost of cross-edges. The inequality in the above definition can be changed to a strict inequality. We leave
the equality, however, since by including such vertices the number of points that can be moved can be larger
(because these vertices can be moved to satisfy load balance constraints without affecting the number of
cross-edges).
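The candidate counts b_ij can be computed in a single pass over the vertices by comparing, for each vertex, its local edge count against its edge count into each neighboring partition. A hedged sketch (adjacency-dictionary representation and the name `local` are assumptions of this sketch):

```python
def refinement_candidates(adj, part):
    """Return {(i, j): [vertices of partition i with at least as many edges
    to partition j as local edges]}; len of each list is the bound b_ij."""
    cand = {}
    for v in adj:
        local = sum(1 for w in adj[v] if part[w] == part[v])
        out = {}                        # edges from v into each other partition
        for w in adj[v]:
            if part[w] != part[v]:
                out[part[w]] = out.get(part[w], 0) + 1
        for j, k in out.items():
            if k >= local:              # moving v to j cannot increase cross-edges
                cand.setdefault((part[v], j), []).append(v)
    return cand
```

Replacing `>=` by `>` gives the strict-inequality variant discussed below, which prevents vertices with equally many local and nonlocal edges from oscillating between partitions.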
The refinement problem can now be posed as the following linear programming problem:
Maximize Σ_{i,j} l_ij
such that
0 ≤ l_ij ≤ b_ij (15)
Σ_j l_ij = Σ_j l_ji for each partition i (16)
This refinement step can be applied iteratively until the effective gain by the movement of vertices is
small. After a few steps, the non-strict inequalities used in the definition of b_ij need to be replaced by strict ones;
Constraints (15): 0 ≤ l_ij ≤ b_ij for each boundary pair of partitions (i, j)
Load Balancing Constraints (16): one balance equation per partition
Solution using Simplex Method: the nonzero l_ij values
Figure 8: Formulation of the refinement step using linear programming and its solution.
otherwise, vertices having an equal number of local and nonlocal edges may move between boundaries
without reducing the total cost. The simplex formulation of the example in Figure 6 is given in Figure 8,
and the new partitioning after refinement is given in Figure 9.
Figure 9: The new partition of the graph in Figure 6 after the Refinement step.
3.5 Time complexity
Let the number of vertices and the number of edges in a graph be given by V and E, respectively. The time
for layering is O(V + E). Let the number of partitions be P and the number of edges in the partition graph 2
be R. The number of constraints and variables generated for linear programming are O(P + R) and O(2R),
respectively.
2 Each node of this graph represents a partition. An edge in the super graph is present whenever there are any cross edges
from a node of one partition to a node of another partition.
Thus the time required for one iteration of the linear programming is O((P + R)R). Assuming R is O(P ), this reduces to
O(P 2 ). The number of iterations required for linear programming is problem dependent. We will use f(P )
to denote the number of iterations. Thus the time required for the linear programming is O(P 2 f(P )). This
gives the total time for repartitioning as O(V + E + P 2 f(P )).
The parallel time is considerably more difficult to analyze. We will analyze the complexity neglecting
the setup overhead of coarse-grained machines. The parallel time complexity of the layering step depends on
the maximum number of edges assigned to any processor. This could be approximated by O(E=P ) for each
level, assuming the changes to the graph are incremental and that the graph is much larger than the number
of processors. The parallelization of the linear programming requires a broadcast of length proportional to
O(P ). Assuming that a broadcast of size P requires b(P ) amount of time on a parallel machine with P
processors, the time complexity can be approximated by O( E
4 A multilevel approach
For small graphs a large fraction of the total time spent in the algorithm described in the previous section
will be on the linear programming formulation and its solution. Since the time required for one iteration
of the linear programming formulation is proportional to the square of the number of partitions, it can be
substantially reduced by using a multilevel approach. Consider the partitioning of an incremental graph
into 16 partitions. This can be completed in two stages: partitioning the graph into 4 super partitions and
partitioning each of the 4 super partitions into 4 partitions each. Clearly, more than two stages can be used.
The advantage of this algorithm is that the time required for applying linear programming to each stage
would be much less than the time required for linear programming using only one stage. This is due to a
substantial reduction in the number of variables as well as in the constraints, which are directly dependent
on the number of partitions. However, the mapping initialization and the layering need to be performed
from scratch for each level. Thus the decrease in cost of linear programming leads to a potential increase in
the time spent in layering.
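The two-stage idea generalizes to the following recursive driver, sketched here with a hypothetical `repartition` callback standing in for the one-level algorithm of Section 3 (the function names and the path-tuple labeling are assumptions of this sketch):

```python
def multilevel(vertices, repartition, hierarchy):
    """Top-down multilevel driver: at each level, split into k super partitions,
    then recurse into each one.  `repartition` is a hypothetical callback:
    repartition(vs, k) -> {vertex: piece index in 0..k-1}."""
    if not hierarchy:
        return {v: () for v in vertices}
    k, rest = hierarchy[0], hierarchy[1:]
    coarse = repartition(vertices, k)       # stage 1: k super partitions
    result = {}
    for i in range(k):                      # stage 2: recurse within each piece
        piece = [v for v in vertices if coarse[v] == i]
        for v, path in multilevel(piece, repartition, rest).items():
            result[v] = (i,) + path         # label = path down the hierarchy
    return result
```

With hierarchy [4, 4], each linear program involves only 4 partitions at a time, which is the source of the cost reduction discussed above, at the price of rerunning the layering at every level.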
The multilevel algorithm requires combining the partitions of the original graph into super partitions.
For our implementations, recursive spectral bisection was used as an ab initio partitioning algorithm. Due
to its recursive property it creates a natural hierarchy of partitions. Figure 10 shows a two-level hierarchy
of partitions. After the linear programming-based algorithm has been applied for repartitioning a graph
that has been adapted several times, it is possible that some of the partitions corresponding to a lower level
subtree have a small number of boundary edges between them. Since the multilevel approach results in
repartitioning with a small number of partitions at the lower levels, the linear programming formulations
may produce infeasible solutions at the lower levels. This problem can be partially addressed by reconfiguring
the partitioning hierarchy.
A simple algorithm can be used to achieve reconfiguration. It tries to group proximate partitions
to form a multilevel hierarchy. At each level it tries to combine two partitions into one larger parti-
tion. Thus the number of partitions is reduced by a factor of two at every level by using a procedure
FIND UNIQUE NEIGHBOR(P ) (Figure 11), which finds a unique neighbor for each partition such that the
number of cross-edges between them is as large as possible. This is achieved by applying a simple heuristic
(Figure 12) that uses a list of all the partitions in a random order (each processor has a different order). If
more than one processor is successful in generating a feasible solution, ties are broken based on the weight
and the processor number. The result of the merging is broadcast to all the processors. In case none of the
Figure 10: A two-level hierarchy of 16 partitions
{ P : the number of partitions }
{ Edge[i][j] represents the number of edges from partition i to partition j. }
global success := FALSE
trial := 0
While (not global success) and (trial < T) do
  For each processor i do
    Random_list := list of all partitions in a random order
    Weight := 0
    FIND PAIR(success, Mark, Weight, Edge)
    global success := GLOBAL OR(success)
    if (not global success) then
      FIX PAIR(success, Mark, Weight, Edge)
      global success := GLOBAL OR(success)
    if (global success) then
      winner := FIND WINNER(success, Weight)
      { Return the processor number of maximum Weight }
      { Processor winner broadcasts Mark to all the processors }
      return(global success)
    else
      trial := trial + 1
Figure 11: Reconstruction Algorithm
FIND PAIR(success, Mark, Weight, Edge)
success := TRUE
for each partition j do
  Find a neighbor k of j where (Mark[k] < 0)
  if k exists then
    Mark[j] := k; Mark[k] := j
    Weight := Weight + Edge[j][k]
  else
    success := FALSE
FIX PAIR(success, Mark, Weight, Edge)
success := TRUE
While (j < P) and (success) do
  if an x exists such that (Mark[x] < 0), (x is a neighbor of l), and (l is a neighbor of j) then
    Mark[x] := l; Mark[l] := x
    Weight := Weight + Edge[x][l]
  else
    success := FALSE
Figure 12: A high level description of the procedures used in FIND UNIQUE NEIGHBOR.
processors are successful, another heuristic (Figure 12) is applied that tries to modify the partial assignments
made by heuristic 1 to find a neighbor for each partition. If none of the processors are able to find a feasible
solution, each processor starts with another random solution and the above step is iterated a constant
number (L) times. 3 Figure 13 shows the partition reconfiguration for a simple example.
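A sequential sketch of the greedy pairing attempted by each processor: walk the partitions in the given random order and pair each unmatched partition with its heaviest unmatched neighbor. The function name, the returned triple, and the exact tie-breaking are assumptions of this sketch, not the paper's code:

```python
import random

def find_pairing(neighbors, weight, order=None, seed=0):
    """Greedily pair each partition with an unmatched neighbor of maximum
    cross-edge weight.  neighbors: partition -> adjacent partitions;
    weight: {(i, j): number of cross edges}.  Returns (success, mark, total)
    where mark[i] is the partner chosen for partition i."""
    rng = random.Random(seed)
    parts = order or rng.sample(sorted(neighbors), len(neighbors))
    mark, total = {}, 0
    for p in parts:
        if p in mark:
            continue
        free = [q for q in neighbors[p] if q not in mark]
        if not free:                    # heuristic fails: caller retries with a new order
            return False, {}, 0
        q = max(free, key=lambda x: weight[(p, x)])
        mark[p], mark[q] = q, p
        total += weight[(p, q)]
    return True, mark, total
```

Each processor would run this with its own random order, and the successful pairing of maximum total weight would be broadcast, mirroring the FIND WINNER step of Figure 11.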
If the reconfiguration algorithm fails, the multilevel algorithm can be applied with a lower number of
levels (or only one level).
Figure 13: A working example of the reconstruction algorithm. (a) Graph with 4 partitions; (b) partition
graph; (c) adjacency lists; (d) random order lists; (e) partition rearrangement; (f) processor 3 broadcasts
the result to the other processors; (g) hierarchy after reconfiguration.
3 In practice, we found that the algorithm never requires more than one iteration.
4.1 Time complexity
In the following we provide an analysis assuming that reconfiguration is not required. The complexity of
reconfiguration will be discussed later. For the multilevel approach we assume that at each level the number
of partitions created is equal and given by k. Thus the number of levels generated is log_k P . The time required
for layering increases to O(E log_k P ). The number of linear programming formulations can be given by
O(P/k). Thus the total time for linear programming can be given by O(P k f(k)). The total time required
for repartitioning is given by O(E log_k P + P k f(k)); a suitable value of k would minimize the sum of
the cost of layering and the cost of the linear programming formulation. The choice of k also depends on the
quality of partitioning achieved; increasing the number of layers would, in general, have a deteriorating effect
on the quality of partitioning. Thus values of k have to be chosen based on the above tradeoffs. However, the
analysis suggests that for reasonably sized graphs the layering time would dominate the total time. Since the
layering time is bounded by O(ElogP ), this time is considerably lower than applying spectral bisection-based
methods from scratch.
Parallel time is considerably more difficult to analyze. The parallel time complexity of the layering step
depends on the maximum number of edges any processor has for each level. This can be approximated by
O(E/P ) for each level, assuming the changes to the graph are incremental and that the graph is much larger
than the number of processors. As discussed earlier, the parallelization of linear programming requires a
broadcast of length proportional to O(k). For small values of k, each linear programming formulation has to
be executed on only one processor, else the communication will dominate the total time. Thus the parallel
time is proportional to O( E
The above analysis did not take reconfiguration into account. The cost of reconfiguration requires O(kd 2 )
time in parallel for every iteration, where d is the average number of partitions to which every partition is
connected. The total time is O(kd 2 log P ) for the reconfiguration. This time should not dominate the total
time required by the linear programming algorithm.
5 Experimental results
In this section we present experimental results of the linear programming-based incremental partitioning
methods presented in the previous section. We will use the term "incremental graph partitioner" (IGP) to
refer to the linear programming based algorithm. All our experiments were conducted on the 32-node CM-5
available at NPAC at Syracuse University.
Meshes
We used two sets of adaptive meshes for our experiments. These meshes were generated using the DIME
environment [15]. The initial mesh of Set A is given in Figure 14 (a). The other incremental meshes are
generated by making refinements in a localized area of the initial mesh. These meshes represent a sequence
of refinements in a localized area. The number of nodes in the meshes are 1071, 1096, 1121, 1152, and 1192,
respectively.
The partitioning of the initial mesh (1071 nodes) was determined using recursive spectral bisection. This
was the partitioning used by algorithm IGP to determine the partitioning of the incremental mesh (1096
nodes). The repartitioning of the next set of refinement (1121, 1152, and 1192 nodes, respectively) was
achieved using the partitioning obtained by using the IGP for the previous mesh in the sequence. These
meshes are used to test whether IGP is suitable for repartitioning a mesh after several refinements.
Figure 14: Test graphs set A (a) an irregular graph with 1071 nodes and 3185 edges; (b) graph in (a) with
25 additional nodes; (c) graph in (b) with 25 additional nodes; (d) graph in (c) with 31 additional nodes; (e)
graph in (d) with 40 additional nodes.
Figure 15: Test graphs Set B (a) a mesh with 10166 nodes and 30471 edges; (b) mesh a with 48 additional
nodes; (c) mesh a with 139 additional nodes; (d) mesh a with 229 additional nodes; (e) mesh a with 672
additional nodes.
Results
Initial Graph - Figure 14 (a)
Total Cutset Max Cutset Min Cutset
Figure 14 (b)
Partitioner Time-s Time-p Total Cutset Max Cutset Min Cutset
Spectral Bisection 31.71 - 733 56 33
IGP 14.75 0.68 747 55 34
IGP with Refinement 16.87 0.88 730 54 34
Figure 14 (c)
Partitioner Time-s Time-p Total Cutset Max Cutset Min Cutset
Spectral Bisection 34.05 - 732 56 34
IGP 13.63 0.73 752 54 33
IGP with Refinement 16.42 1.05 727 54 33
Figure 14 (d)
Partitioner Time-s Time-p Total Cutset Max Cutset Min Cutset
Spectral Bisection 34.96 - 716 57 34
IGP 15.89 0.92 757 56 33
IGP with Refinement 18.32 1.28 741 56 33
Figure 14 (e)
Partitioner Time-s Time-p Total Cutset Max Cutset Min Cutset
Spectral Bisection 38.20 - 774 63 34
IGP 15.69 0.94 815 63 34
IGP with Refinement 18.43 1.26 779 59 34
Time unit in seconds. p - parallel timing on a 32-node CM-5. s - timing on 1-node CM-5.
Figure 16: Incremental graph partitioning using linear programming and its comparison with spectral bisection
from scratch for meshes in Figure 14 (Set A).
The next data set (Set B) corresponds to highly irregular meshes with 10166 nodes and 30471 edges.
This data set was generated to study the effect of different amounts of new data added to the original mesh.
Figures 15 (b), 15 (c), 15 (d), and 15 (e) correspond to meshes with 68, 139, 229, and 672 additional nodes
over the mesh in Figure 15 (a).
The results of the one-level IGP for Set A meshes are presented in Figure 16. The results show that, even
after multiple refinements, the quality of partitioning achieved is comparable to that achieved by recursive
spectral bisection from scratch; thus, this method can be used for repartitioning over several stages. The time
required for repartitioning is about half the time required for partitioning using RSB. The algorithm provides
speedup of around 15 to 20 on a 32-node CM-5. Most of the time spent by our algorithm is in the solution
of the linear programming formulation using the simplex method. The number of variables and constraints
generated by the one-level linear programming algorithm for the load-balancing step for meshes in Figure
partitions are 188 and 126, respectively.
For the multilevel approach, the linear programming formulation for each subproblem at a given level
Initial Graph - Figure 15 (a)
Total Cutset Max Cutset Min Cutset
(b) Initial assignment by IGP using the partition of Figure 15 (a')
Partitioner Time-s Time-p Total Cutset Max Cutset Min Cutset
Spectral Bisection 800.05 - 2137 178 90
IGP before Refinement 13.90 1.01 2139 186 84
IGP after Refinement 24.07 1.83 2040 172 82
(c) Initial assignment by IGP using the partition of Figure 15 (a')
Partitioner Time-s Time-p Total Cutset Max Cutset Min Cutset
Spectral Bisection 814.36 - 2099 166 87
IGP before Refinement 18.89 1.08 2295 219 93
IGP after Refinement 29.33 2.01 2162 206 85
(d) Initial assignment by IGP using the partition of Figure 15 (a')
Partitioner Time-s Time-p Total Cutset Max Cutset Min Cutset
Spectral Bisection 853.35 - 2057 169 94
IGP before Refinement (2) 35.98 2.08 2418 256 92
IGP after Refinement 43.86 2.76 2139 190 85
Initial assignment by IGP using the partition of Figure 15 (a')
Partitioner Time-s Time-p Total Cutset Max Cutset Min Cutset
Spectral Bisection 904.81 - 2158 158 94
IGP before Refinement (3) 76.78 3.66 2572 301 102
IGP after Refinement 89.48 4.39 2270 237 96
Time unit in seconds. p - parallel timing on a 32-node CM-5. s - timing on 1-node CM-5.
Figure 17: Incremental graph partitioning using linear programming and its comparison with spectral bisection
from scratch for meshes in Figure 15 (Set B).
was solved by assigning a subset of the processors. Figure 19 gives the time required for different algorithms and
the quality of partitioning achieved for different numbers of levels. A 4 × 4 × 2-based repartitioning implies
that the repartitioning was performed in three stages with decomposition into 4, 4, 2 partitions, respectively.
The results are presented in Figure 19. The solution qualities of multilevel algorithms show an insignificant
deterioration in number of cross edges and a considerable reduction in total time.
The partitioning achieved by algorithm IGP for the Set B meshes, using the partition of the mesh in
Figure 15 (a), is given in Figure 18. The number of stages required (by choosing an appropriate value of
Δ, as described in Section 3.3) were 1, 1, 2, and 3, respectively. 4 It is worth noting that, although the load
imbalance created by the additional nodes was severe, the quality of partitioning achieved for each case was
close to that of applying recursive spectral bisection from scratch. Further, the sequential time is at least
an order of magnitude better than that of recursive spectral bisection. The CM-5 implementation improved
the time required by a factor of 15 to 20. The time required for repartitioning the meshes of Figure 15 (b) and Figure 15
(c) is close to that required for the meshes in Figure 14. The timings for the meshes in Figure 15 (d) and 15 (e) are
larger because they use multiple stages. The time can be reduced by using a multilevel approach (Figure 20).
However, the time reduction is relatively small (from 24.07 seconds to 6.70 seconds for a two-level approach).
Increasing the number of levels increases the total time as the cost of layering increases. The time reduction
for the last mesh (10838 nodes) is largely due to the reduction of the number of stages used in the multilevel
algorithm (Section 3.3). For almost all cases a speedup of 15 to 25 was achieved on a 32-node CM-5.
Figure 21 and Figure 22 show the detailed timing for different steps for the mesh in Figure 14 (d) and
mesh in Figure 15 (b) of the sequential and parallel versions of the repartitioning algorithm, respectively.
Clearly, the time spent in reconfiguration is negligible compared to the total execution time. Also, the time
spent for linear programming in a multilevel algorithm is much less than that in a single-level algorithm.
The results also show that the time for the linear programming remains approximately the same for both
meshes, while the time for layering is proportionally larger. For the multilevel parallel algorithm, the time for
layering is comparable with the time spent on linear programming for the smaller mesh, while it dominates
the time for the larger mesh. Since the layering term is O(levels · E/P ), these results support the complexity
analysis in the previous section.
6 Conclusions
In this paper we have presented novel linear programming-based formulations for solving incremental graph-partitioning
problems. The quality of partitioning produced by our methods is close to that achieved by
applying the best partitioning methods from scratch. Further, the time needed is a small fraction of the
latter and our algorithms are inherently parallel. We believe the methods described in this paper are of
critical importance in the parallelization of adaptive and incremental problems.
4 The number of stages chosen were by trial and error, but can be determined by the load imbalance.
Figure 18: (a') Partitions using RSB; (b') partitions using IGP starting from (a'); (c') partitions using IGP
starting from (a'); (d') partitions using IGP starting from (a'); (e') partitions using IGP starting from (a').
Graph Level Description Time-s Time-p Total Cutset
Time unit in seconds on CM-5.
Figure 19: Incremental multilevel graph partitioning using linear programming and its comparison with
single-level graph partitioning for the sequence of graphs in Figure 14.
Graph Level Description Time-s Time-p Total Cutset
Time unit in seconds on CM-5.
Figure 20: Incremental multilevel graph partitioning using linear programming and its comparison with
single-level graph partitioning for the sequence of meshes in Figure 15.
Mesh in Figure 14 (d)
Level Reconfiguration Layering Linear programming Total
Figure 15 (b)
Level Reconfiguration Layering Linear programming Total
Time in seconds
Balancing. R - Refinement. T - Total.
Figure 21: Time required for different steps in the sequential repartitioning algorithm.
Mesh in Figure 14 (d)
Level Reconfiguration Layering Linear programming Data movement Total
Figure 15 (b)
Level Reconfiguration Layering Linear programming Data movement Total
Time in seconds
Balancing. R - Refinement. T - Total.
Figure 22: Time required for different steps in the parallel repartitioning algorithm (on a 32-node CM-5).
References
Solving Problems on Concurrent Processors
Software Support for Irregular and Loosely Synchronous Problems.
Heuristic Approaches to Task Allocation for Parallel Computing.
Load Balancing Loosely Synchronous Problems with a Neural Network.
Solving Problems on Concurrent Processors
Graphical Approach to Load Balancing and Sparse Matrix Vector Multiplication on the Hypercube.
An Improved Spectral Graph Partitioning Algorithm for Mapping Parallel Computations.
Multidimensional Spectral Load Balancing.
Genetic Algorithms for Graph Partitioning and Incremental Graph Partitioning.
Physical Optimization Algorithms for Mapping Data to Distributed-Memory Multi- processors
Solving Finite Element Equations on Current Computers.
Fast Mapping And Remapping Algorithm For Irregular and Adaptive Problems.
Partitioning Sparse Matrices with Eigenvectors of Graphs.
Partitioning of Unstructured Mesh Problems for Parallel Processing.
DIME: Distributed Irregular Mesh Environment.
Performance of Dynamic Load-Balancing Algorithm for Unstructured Mesh Calcula- tions
262369 | Computing Accumulated Delays in Real-time Systems. | We present a verification algorithm for duration properties of real-time systems. While simple real-time properties constrain the total elapsed time between events, duration properties constrain the accumulated satisfaction time of state predicates. We formalize the concept of durations by introducing duration measures for timed automata. A duration measure assigns to each finite run of a timed automaton a real number the duration of the run which may be the accumulated satisfaction time of a state predicate along the run. Given a timed automaton with a duration measure, an initial and a final state, and an arithmetic constraint, the duration-bounded reachability problem asks if there is a run of the automaton from the initial state to the final state such that the duration of the run satisfies the constraint. Our main result is an (optimal) PSPACE decision procedure for the duration-bounded reachability problem. | Introduction
Over the past decade, model checking [CE81, QS81] has emerged as a powerful tool for the automatic
verification of finite-state systems. Recently the model-checking paradigm has been extended to
real-time systems [ACD93, HNSY94, AFH96]. Thus, given the description of a finite-state system
together with its timing assumptions, there are algorithms that test whether the system satisfies
a specification written in a real-time temporal logic. A typical property that can be specified in
real-time temporal logics is the following time-bounded causality property:
A response is obtained whenever a ringer has been pressed continuously for 2 seconds.
Standard real-time temporal logics [AH92], however, have limited expressiveness and cannot specify
some timing properties we may want to verify of a given system. In particular, they do not allow
us to constrain the accumulated satisfaction times of state predicates. As an example, consider the
following duration-bounded causality property:
A response is obtained whenever a ringer has been pressed, possibly intermittently, for
a total duration of 2 seconds. ( )
A preliminary version of this paper appeared in the Proceedings of the Fifth International Conference on
Computer-Aided Verification (CAV 93), Springer-Verlag LNCS 818, pp. 181-193, 1993.
† Bell Laboratories, Murray Hill, New Jersey, U.S.A.
‡ Department of Computer Science, University of Crete, and Institute of Computer Science, FORTH, Greece.
Partially supported by the BRA ESPRIT project REACT.
§ Department of Electrical Engineering and Computer Sciences, University of California at Berkeley, U.S.A. Partially
supported by the ONR YIP award N00014-95-1-0520, by the NSF CAREER award CCR-9501708, by the
NSF grants CCR-9200794 and CCR-9504469, by the AFOSR contract F49620-93-1-0056, and by the ARPA grant
NAG2-892.
To specify this duration property, we need to measure the accumulated time spent in the state
that models "the ringer is pressed." For this purpose, the concept of duration operators on state
predicates is introduced in the Calculus of Durations [CHR91]. There, an axiom system is given
for proving duration properties of real-time systems.
Here we address the algorithmic verification problem for duration properties of real-time sys-
tems. We use the formalism of timed automata [AD94] for representing real-time systems. A timed
automaton operates with a finite control and a finite number of fictitious time gauges called clocks,
which allow the annotation of the control graph with timing constraints. The state of a timed
automaton includes, apart from the location of the control, also the real-numbered values of all
clocks. Consequently, the state space of a timed automaton is infinite, and this complicates its
analysis. The basic question about a timed automaton is the following time-bounded reachability
problem:
Given an initial state σ, a final state τ, and an interval I, is there a run of the automaton
starting in state σ and ending in state τ such that the total elapsed time of the run is
in the interval I? (†)
The solution to this problem relies on a partition of the infinite state space into finitely many regions,
which are connected with transition and time edges to form the region graph of the timed automaton
[AD94]. The states within a region are equivalent with respect to many standard questions. In
particular, the region graph can be used for testing the emptiness of a timed automaton [AD94], for
checking time-bounded branching properties [ACD93], for testing the bisimilarity of states [Cer92],
and for computing lower and upper bounds on time delays [CY91]. Unfortunately, the region graph
is not adequate for checking duration properties such as the duration-bounded causality property
(∗); that is, of two runs that start in different states within the same region, one may satisfy the
duration-bounded causality property, whereas the other one does not. Hence a new technique is
needed for analyzing duration properties.
To introduce the concept of durations in a timed automaton, we associate with every finite
run a nonnegative real number, which is called the duration of the run. The duration of a run is
defined inductively using a duration measure, which is a function that maps the control locations
to nonnegative integers: the duration of an empty run is 0; and the duration measure of a location
gives the rate at which the duration of a run increases while the automaton control resides in
that location. For example, a duration measure of 0 means that the duration of the run stays
unchanged (i.e., the time spent in the location is not accumulated), a duration measure of 1 means
that the duration of the run increases at the rate of time (i.e., the time spent in the location
is accumulated), and a duration measure of 2 means that the duration of the run increases at
twice the rate of time. The time-bounded reachability problem (†) can now be generalized to the
duration-bounded reachability problem:
Given an initial state σ, a final state τ, a duration measure, and an interval I, is there
a run of the automaton starting in state σ and ending in state τ such that the duration
of the run is in the interval I?
We show that the duration-bounded reachability problem is Pspace-complete, and we provide an
optimal solution. Our algorithm can be used to verify duration properties of real-time systems that
are modeled as timed automata, such as the duration-bounded causality property (∗).
Let us briefly outline our construction. Given a region R, a final state τ, and a path in the
region graph from R to τ, we show that the lower and upper bounds on the durations of all runs
that start at some state in R and follow the chosen path can be written as linear expressions over
the variables that represent the clock values of the start state. In a first step, we provide a recipe
for computing these so-called bound expressions. In the next step, we define an infinite graph,
the bounds graph, whose vertices are regions tagged with bound expressions that specify the set of
possible durations for any path to the final state. In the final step, we show that the infinite bounds
graph can be collapsed into a finite graph for solving the duration-bounded reachability problem.
2 The Duration-bounded Reachability Problem
Timed automata
Timed automata are a formal model for real-time systems [Dil89, AD94]. Each automaton has a
finite set of control locations and a finite set of real-valued clocks. All clocks proceed at the same
rate, and thus each clock measures the amount of time that has elapsed since it was started. A
transition of a timed automaton can be taken only if the current clock values satisfy the constraint
that is associated with the transition. When taken, the transition changes the control location of
the automaton and restarts one of the clocks.
Formally, a timed automaton A is a triple (S, X, E) with the following components:
• S is a finite set of locations;
• X is a finite set of clocks;
• E is a finite set of transitions of the form (s, t, φ, x), for a source location s ∈ S, a target
location t ∈ S, a clock constraint φ, and a clock x ∈ X. Each clock constraint is a positive
boolean combination of atomic formulas of the form y ≤ k or y < k or k ≤ y or k < y, for a
clock y ∈ X and a nonnegative integer constant k ∈ N.
A configuration of the timed automaton A is fully described by specifying the location of the control
and the values of all clocks. A clock valuation c ∈ R^X is an assignment of nonnegative reals to the
clocks in X. A state σ of A is a pair (s, c) consisting of a location s ∈ S and a clock valuation c.
We write Σ for the (infinite) set of states of A. As time elapses, the values of all clocks increase
uniformly with time, thereby changing the state of A. Thus, if the state of A is (s, c), then after
time δ ≥ 0, assuming that no transition occurs, the state of A is (s, c + δ), where c + δ is the
clock valuation that assigns c(x) + δ to each clock x. The state of A may also change because of
a transition (s, t, φ, x) in E. Such a transition can be taken only in a state whose location is s
and whose clock valuation satisfies the constraint φ. The transition is instantaneous. After the
transition, the automaton is in a state with location t and the new clock valuation is c[x := 0]; that
is, the clock x associated with the transition is reset to the value 0, and all other clocks remain
unchanged.
The possible behaviors of the timed automaton A are defined through a successor relation on
the states of A:
Transition successor For all states (s, c) ∈ Σ and transitions (s, t, φ, x) ∈ E, if c satisfies φ, then
(s, c) →^0 (t, c[x := 0]).
Time successor For all states (s, c) ∈ Σ and time increments δ ≥ 0, (s, c) →^δ (s, c + δ).
A state (t, d) is a successor of the state (s, c), written (s, c) → (t, d), iff there exists a nonnegative
real δ such that (s, c) →^δ (s, c + δ) →^0 (t, d). The successor relation defines an infinite graph K(A)
on the state space Σ of A. The transitive closure →* of the successor relation → is called the
reachability relation of A.
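The two successor rules above can be rendered as a toy sketch in code; the class and function names below are ours, not the paper's, and the guard of the sample transition follows the edge label (y ≤ 2; y) of Figure 1.

```python
# A toy rendering of the timed-automaton semantics defined above.
# The class and function names are ours, not the paper's.
from dataclasses import dataclass

@dataclass(frozen=True)
class Transition:
    source: str      # s
    target: str      # t
    guard: object    # clock constraint phi, as a predicate on valuations
    reset: str       # the single clock x that is reset

def time_successor(state, delta):
    """(s, c) --delta--> (s, c + delta): all clocks advance uniformly."""
    s, c = state
    return (s, {x: v + delta for x, v in c.items()})

def transition_successor(state, tr):
    """(s, c) --0--> (t, c[x := 0]), enabled only if c satisfies the guard."""
    s, c = state
    if s != tr.source or not tr.guard(c):
        return None
    d = dict(c)
    d[tr.reset] = 0.0
    return (tr.target, d)

# A transition from s to t guarded by y <= 2 that resets y (cf. Figure 1).
tr = Transition("s", "t", lambda c: c["y"] <= 2, "y")
state = ("s", {"x": 0.5, "y": 0.0})
state = time_successor(state, 1.5)        # let 1.5 time units pass
state = transition_successor(state, tr)   # take the transition
```

A successor step in the sense of the definition is thus a time step followed by a transition step.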
Figure 1: Sample timed automaton
Example 1 A sample timed automaton is shown in Figure 1. The automaton has four locations
and two clocks. Each edge is labeled with a clock constraint and the clock to be reset. A state of
the automaton contains a location and real-numbered values for the clocks x and y.
Depending on the application, a timed automaton may be augmented with additional components
such as initial locations, accepting locations, transition labels for synchronization with other timed
automata, and atomic propositions as location labels. It is also useful to label each location with a
clock constraint that limits the amount of time spent in that location [HNSY94]. We have chosen a
very simple definition of timed automata to illustrate the essential computational aspects of solving
reachability problems. Also, the standard definition of a timed automaton allows a (possibly empty)
set of clocks to be reset with each transition. Our requirement that precisely one clock is reset with
each transition does not affect the expressiveness of timed automata.
Clock regions and the region graph
Let us review the standard method for analyzing timed automata. The key to solving many
verification problems for a timed automaton is the construction of the so-called region graph [AD94].
The region graph of a timed automaton is a finite quotient of the infinite state graph that retains
enough information for answering certain reachability questions.
Suppose that we are given a timed automaton A and an equivalence relation ≅ on the states
Σ of A. For σ ∈ Σ, we write [σ]≅ for the equivalence class of states that contains the state
σ. The successor relation → is extended to ≅-equivalence classes as follows: define [σ]≅ → [τ]≅
iff there is a state σ' ∈ [σ]≅ and a state τ' ∈ [τ]≅ such that σ' → τ'. The quotient graph of A with
respect to the equivalence relation ≅, written [A]≅, is a graph whose vertices are the ≅-equivalence
classes and whose edges are given by the successor relation on classes. The equivalence relation ≅ is
stable iff whenever σ → τ, then for all states σ' ∈ [σ]≅ there is a state τ' ∈ [τ]≅ with σ' → τ'; it
is back stable iff whenever σ → τ, then for all states τ' ∈ [τ]≅ there is a state σ' ∈ [σ]≅ with
σ' → τ'. The quotient graph with respect to a (back) stable equivalence relation can be used for
solving the reachability problem between equivalence classes: given two ≅-equivalence classes R_0
and R_f, is there a state σ ∈ R_0 and a state τ ∈ R_f such that σ →* τ? If the equivalence relation
is (back) stable, then the answer to the reachability problem is affirmative iff there is a path from
R_0 to R_f in the quotient graph [A]≅.
The region graph of the timed automaton A is a quotient graph of A with respect to the
equivalence relation ≅ defined below. For x ∈ X, let m_x be the largest constant that the
clock x is compared to in any clock constraint of A. For δ ∈ R, let ⌊δ⌋ denote the integral part
of δ, and let fr(δ) denote the fractional part of δ (thus, δ = ⌊δ⌋ + fr(δ)).
We freely use constraints like fr(x) = 0 or ⌊x⌋ ≤ k
for a clock x and a nonnegative integer constant k (e.g., a clock valuation c satisfies the
constraint ⌊x⌋ ≤ k iff ⌊c(x)⌋ ≤ k). Two states (s, c) and (t, d) of A are region-equivalent, written
(s, c) ≅ (t, d), iff the following four conditions hold:
1. s = t;
2. for each clock x ∈ X, either ⌊c(x)⌋ = ⌊d(x)⌋, or both c(x) > m_x and d(x) > m_x;
3. for all clocks x, y ∈ X, the valuation c satisfies fr(x) ≤ fr(y) iff
the valuation d satisfies fr(x) ≤ fr(y);
4. for each clock x ∈ X, the valuation c satisfies fr(x) = 0 iff the valuation d satisfies fr(x) = 0.
A (clock) region R ⊆ Σ is a ≅-equivalence class of states. Hence, a region is fully specified by
a location, the integral parts of all clock values (up to the maximal constants m_x), and the
ordering of the fractional parts of the clock values. For instance, if X contains three clocks, x, y,
and z, then the region [s; ⌊x⌋ = ⌊y⌋ = ⌊z⌋ = 0; 0 < fr(x) < fr(y) < fr(z)] contains all states (s, c)
such that c satisfies 0 < x < y < z < 1. For a region R, we write [s; ψ], where ψ is the conjunction
of the constraints that characterize R, and we say that R has the location s and satisfies the
constraints in ψ. There are only finitely many regions, because the exact value of the integral part
of a clock x is recorded only if it is smaller than m_x. The number of regions is bounded by
|S| · n! · 2^n · Π_{x∈X}(2m_x + 2), where n = |X| is the number
of clocks. The region graph R(A) of the timed automaton A is the (finite) quotient graph of A
with respect to the region equivalence relation ≅. The region equivalence relation ≅ is stable as
well as back-stable [AD94]. Hence the region graph can be used for solving reachability problems
between regions, and also for solving time-bounded reachability problems [ACD93].
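The four conditions of region equivalence can be transcribed directly; the dict-based encoding of valuations and the function names below are our assumptions, and we follow the standard region construction in letting conditions 3 and 4 range only over clocks within their bounds.

```python
# A direct transcription of region-equivalence conditions 1-4 above.
# Encoding assumptions (ours): states are (location, valuation-dict) pairs,
# and m maps each clock x to its maximal compared constant m_x.
import math

def frac(v):
    return v - math.floor(v)

def region_equivalent(state1, state2, m):
    (s, c), (t, d) = state1, state2
    if s != t:                                           # condition 1
        return False
    for x in c:                                          # condition 2
        if not (math.floor(c[x]) == math.floor(d[x])
                or (c[x] > m[x] and d[x] > m[x])):
            return False
    # Conditions 3 and 4 only matter for clocks within their bounds
    # (an assumption following the standard region construction).
    rel = [x for x in c if c[x] <= m[x]]
    for x in rel:                                        # condition 3
        for y in rel:
            if (frac(c[x]) <= frac(c[y])) != (frac(d[x]) <= frac(d[y])):
                return False
    for x in rel:                                        # condition 4
        if (frac(c[x]) == 0) != (frac(d[x]) == 0):
            return False
    return True

same = region_equivalent(("s", {"x": 0.25, "y": 1.75}),
                         ("s", {"x": 0.5, "y": 1.875}), {"x": 2, "y": 2})
diff = region_equivalent(("s", {"x": 0.25, "y": 0.125}),
                         ("s", {"x": 0.125, "y": 0.25}), {"x": 2, "y": 2})
```

The second pair fails condition 3: the ordering of the fractional parts of x and y is swapped between the two valuations.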
It is useful to define the edges of the region graph explicitly. A region R is a boundary region
iff there is some clock x such that R satisfies the constraint fr(x) = 0; a region that is not a boundary
region is called an open region. For a boundary region R, we define its predecessor region pred(R)
to be the open region Q such that for all states (s, c) ∈ Q, there is a time increment δ ≥ 0 such
that (s, c + δ) ∈ R and for all nonnegative reals ε < δ, we have (s, c + ε) ∈ Q. Similarly, we define
the successor region succ(R) of R to be the open region Q' such that for all states (s, c) ∈ R,
there is a time increment δ ≥ 0 such that (s, c + δ) ∈ Q' and for all positive reals ε ≤ δ,
we have (s, c + ε) ∈ Q'. The state of a timed automaton belongs to a boundary region R only
instantaneously. Just before that instant the state belongs to pred(R), and just after that instant
the state belongs to succ(R).
The edges of the region graph R(A) fall into two categories:
Transition edges If (s, c) →^0 (t, d), then there is an edge from the region [s, c]≅ to the region [t, d]≅.
Time edges For each boundary region R, there is an edge from pred(R) to R, and an edge from
R to succ(R).
In addition, each region has a self-loop, which can be ignored for solving reachability problems.
Duration measures and duration-bounded reachability
A duration measure for the timed automaton A is a function p from the locations of A to the
nonnegative integers. A duration constraint for A is of the form ∫p ∈ I, where p is a duration
measure for A and I is a bounded interval of the nonnegative real line whose endpoints are integers
(I may be open, half-open, or closed).
Let p be a duration measure for A. We extend the state space of A to evaluate the integral
∫p along the runs of A. An extended state of A is a pair (σ, ε) consisting of a state σ of A and a
nonnegative real number ε. The successor relation on states is extended as follows:
Transition successor For all extended states (s, c, ε) and all transitions (s, t, φ, x) such that
c satisfies φ, define (s, c, ε) →^0 (t, c[x := 0], ε).
Time successor For all extended states (s, c, ε) and all time increments δ ≥ 0, define
(s, c, ε) →^δ (s, c + δ, ε + p(s)·δ).
We consider the duration-bounded reachability problem between regions: given two regions R_0 and
R_f of a timed automaton A, and a duration constraint ∫p ∈ I for A, is there a state σ ∈ R_0, a state
τ ∈ R_f, and a nonnegative real ε ∈ I such that (σ, 0) →* (τ, ε)? We refer to this duration-bounded
reachability problem using the tuple (A, R_0, R_f, ∫p ∈ I).
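The extended successor relation only adds one component, the accumulated value of the integral of p; a minimal sketch (the dict encoding of p and the function names are our assumptions):

```python
# Extended successors: states carry the accumulated value of the integral of p.
# Encoding assumption (ours): p is a dict from locations to integer rates.
def ext_time_successor(ext_state, delta, p):
    """(s, c, eps) --delta--> (s, c + delta, eps + p(s) * delta)."""
    s, c, eps = ext_state
    return (s, {x: v + delta for x, v in c.items()}, eps + p[s] * delta)

def ext_transition_successor(ext_state, source, target, guard, reset):
    """(s, c, eps) --0--> (t, c[x := 0], eps): transitions take no time."""
    s, c, eps = ext_state
    if s != source or not guard(c):
        return None
    d = dict(c)
    d[reset] = 0.0
    return (target, d, eps)

p = {"s": 0, "t": 1}                  # time in t is accumulated, time in s is not
st = ("s", {"x": 0.0}, 0.0)
st = ext_time_successor(st, 1.0, p)   # integral unchanged (rate 0 in s)
st = ext_transition_successor(st, "s", "t", lambda c: True, "x")
st = ext_time_successor(st, 2.0, p)   # integral grows at rate 1 in t
```

Note how a rate-0 location lets time pass without contributing to the duration, which is exactly what distinguishes duration-bounded from time-bounded reachability.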
Example 2 Recall the sample timed automaton from Figure 1. Suppose that the duration measure
p assigns the rate 1 to one of the locations and the rate 0 to the others. Let the initial region R_0
and the final region R_f be singleton regions in which both clocks have the value 0. For a duration
constraint ∫p ∈ I with a sufficiently generous interval I, the answer to the duration-bounded
reachability problem is in the affirmative, and a sequence of successor pairs of extended states
serves as a justification (the last component of each extended state denotes the value of the
integral ∫p). On the other hand, for a sufficiently strict duration constraint, the answer to the
duration-bounded reachability problem is negative, as the reader can verify by tracing the runs of
the automaton from R_0 to R_f.
If the duration measure p is the constant function 1 (i.e., p(s) = 1 for all locations s), then
the integral ∫p measures the total elapsed time, and the duration-bounded reachability problem
between regions is called a time-bounded reachability problem. In this case, if (σ, 0) →* (τ, ε) for
some ε ∈ I, then for all states σ' ∈ [σ]≅ there is a state τ' ∈ [τ]≅ and a real number ε' ∈ I such that
(σ', 0) →* (τ', ε'). Hence, the region graph suffices to solve the time-bounded reachability problem.
This, however, is not true for general duration measures.
3 A Solution to the Duration-bounded Reachability Problem
Bound-labeled regions and the bounds graph
Consider a timed automaton A, two regions R_0 and R_f, and a duration measure p. We determine
the set of possible values δ such that (σ, 0) →* (τ, δ) for some σ ∈ R_0 and τ ∈ R_f. To compute
the lower and upper bounds on the integral ∫p along a path of the region graph, we refine the graph
by labeling all regions with expressions that specify the extremal values of the integral.
We define an infinite graph with vertices of the form (R; L; l; U; u), where R is a region, L and
U are linear expressions over the clock variables, and l and u are boolean values. The intended
meaning of the bound expressions L and U is that in moving from a state (s, c) ∈ R to a state
in the final region R_f, the set of possible values of the integral ∫p has the infimum L and the
supremum U, both of which are functions of the current clock values c. If the bit l is 0, then the
infimum L is included in the set of possible values of the integral, and if l is 1, then L is excluded.
Similarly, if the bit u is 0, then the supremum U is included in the set of possible values of ∫p,
and if u is 1, then U is excluded. For example, if l = 0 and u = 1, then the left-closed right-open
interval [L, U) gives the possible values of the integral ∫p.
The bound expressions L and U associated with the region R have a special form. Suppose
that X = {x_1, ..., x_n} is the set of clocks and that for all states (s, c) ∈ R, the clock valuation c
satisfies fr(x_1) ≤ fr(x_2) ≤ ... ≤ fr(x_n); that is, x_1 is the clock with the smallest fractional part in R, and
x_n is the clock with the largest fractional part. The fractional parts of all n clocks partition the
unit interval into n + 1 subintervals, represented by the expressions e_0 = fr(x_1),
e_i = fr(x_{i+1}) − fr(x_i) for 1 ≤ i ≤ n − 1, and e_n = 1 − fr(x_n).
A bound expression for R is a positive linear combination of the expressions e_0, ..., e_n; that is, a
bound expression for R has the form a_0·e_0 + ... + a_n·e_n, where a_0, ..., a_n are nonnegative integer
constants. We denote bound expressions by (n + 1)-tuples of coefficients and write (a_0, ..., a_n) for
the bound expression a_0·e_0 + ... + a_n·e_n. For a bound expression e and a clock valuation c,
we write [[e]]_c to denote the result of evaluating e using the clock values given by c. When time
advances, the value of a bound expression changes at the rate a_0 − a_n. If the region R satisfies
the constraint fr(x_1) = 0 (i.e., R is a boundary region), then the coefficient a_0 is irrelevant, and if R
satisfies fr(x_i) = fr(x_{i+1}), then the coefficient a_i is irrelevant. Henceforth, we assume that all irrelevant
coefficients are set to 0.
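Evaluating a bound expression at a valuation reduces to computing the n + 1 subinterval lengths; a minimal sketch (the function name, and the assumption that the fractional parts come pre-sorted, are ours):

```python
# Evaluating a bound expression (a_0, ..., a_n): the subinterval lengths are
# e_0 = fr(x_1), e_i = fr(x_{i+1}) - fr(x_i), and e_n = 1 - fr(x_n). We assume
# (our encoding) that the fractional parts arrive already sorted.
def eval_bound_expr(coeffs, fracs):
    n = len(fracs)
    assert len(coeffs) == n + 1
    lengths = [fracs[0]]
    lengths += [fracs[i] - fracs[i - 1] for i in range(1, n)]
    lengths.append(1 - fracs[-1])
    return sum(a * e for a, e in zip(coeffs, lengths))

# With fractional parts (0.25, 0.5): e_0 = 0.25, e_1 = 0.25, e_2 = 0.5.
val = eval_bound_expr((1, 2, 4), (0.25, 0.5))
```

Since the lengths always sum to 1, the all-ones expression (1, ..., 1) evaluates to 1 at every valuation, which is a handy sanity check.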
A bound-labeled region (R; L; l; U; u) of the timed automaton A consists of a clock region R of
A, two bound expressions L and U for R, and two bits l, u ∈ {0, 1}. We construct B_{p,R_f}(A), the
bounds graph of A for the duration measure p and the final region R_f. The vertices of B_{p,R_f}(A) are
the bound-labeled regions of A and the special vertex R_f, which has no outgoing edges. We first
define the edges with the target R_f, and then the edges between bound-labeled regions.
The edges with the target R_f correspond to runs of the automaton that reach a state in R_f
without passing through other regions. Suppose that R_f is an open region with the duration
measure a (i.e., p(s) = a for the location s of R_f). The final region R_f is reachable from a state
(s, c) ∈ R_f by remaining in R_f for at least 0 and at most [[e_n]]_c time units. Since the integral
∫p increases at the rate a, the lower bound on the integral value over all states (s, c) ∈ R_f is 0,
and the upper bound is a·[[e_n]]_c.

Figure 2

While the lower bound 0 is a possible value of the integral, if
a > 0, then the upper bound is only a supremum of all possible values. Hence, we add an edge in
the bounds graph to R_f from (R_f; L; 0; U; u) for L = (0, ..., 0) and U = (0, ..., 0, a), with u = 1
if a > 0 and u = 0 otherwise.
If R_f is a boundary region, no time can be spent in R_f, and both bounds are 0. In this case, we
add an edge to R_f from (R_f; L; 0; U; 0) for L = U = (0, ..., 0).
Now let us look at paths that reach the final region R_f by passing through other regions. For
each edge from R to R' in the region graph R(A), the bounds graph B_{p,R_f}(A) has exactly one edge
to each bound-labeled region of the form (R'; L'; l'; U'; u') from a bound-labeled region of the
form (R; L; l; U; u). First, let us consider an example to understand the determination of the lower
bound L and the corresponding bit l (the upper bound U and the bit u are determined similarly).
Suppose that n = 3 and that the boundary region R_1, which satisfies fr(x_1) = 0, is
labeled with the lower bound L_1 = (0, a_1, a_2, a_3) and the bit l_1. This means that starting from a
state (s, c) ∈ R_1, the lower bound on the integral ∫p for reaching some state in R_f is [[L_1]]_c.
Consider the open predecessor region R_2 = pred(R_1). Let a be the duration
measure of R_2. There is a time edge from R_2 to R_1 in the region graph. We want to compute the
lower-bound label L_2 for R_2 from the lower-bound label L_1 of R_1. Starting in a state (s, c) ∈ R_2,
the state (s, c + δ) ∈ R_1 is reached after time δ = [[e_3]]_c.
Furthermore, from the state (s, c) ∈ R_2 the integral ∫p grows by [[a·e_3]]_c before entering
the region R_1. Hence, the new lower bound is [[L_2]]_c = [[a·e_3]]_c + [[L_1]]_{c+δ},
and the label L_2 is (a_1, a_2, a_3, a_1 + a). See Figure 2. Whether the lower bound L_2 is a possible value
of the integral depends on whether the original lower bound L_1 is a possible value of the integral
starting in R_1. Thus, the bit l_2 labeling R_2 is the same as the bit l_1 labeling R_1.
Next, consider the boundary region R_3 such that R_2 is the successor region of R_3. The region
R_3 satisfies fr(x_1) = 0, and there is a time edge from R_3 to R_2 in the region graph. The reader
can verify that the updated lower-bound label L_3 of R_3 is the same as L_2, namely (a_1, a_2, a_3, a_1 + a),
which can be simplified to (0, a_2, a_3, a_1 + a), because R_3 is a boundary region. See Figure 3. The
updated bit l_3 of R_3 is the same as l_2.
Figure 3
Figure 4
The process repeats if we consider further time edges, so let us consider a transition edge from
region R_4 to region R_3, which resets the clock y. We assume that the region R_4 is open with
duration measure b, and that y is the clock with the second smallest fractional part in R_4. Consider
a state (t, d) ∈ R_4. Suppose that the transition happens after time δ. If the state after the
transition is (s, c) ∈ R_3, then c = (d + δ)[y := 0]. The lower bound L_4 corresponding
to this scenario is the value of the integral before the transition, which is b·δ, added to the value
of the lower bound L_3 at the state (s, c), which is [[L_3]]_c.
To obtain the value of the lower bound L_4 at the state (t, d), we need to compute the infimum over
all choices of δ, for 0 ≤ δ < [[e_3]]_d. After substituting c = (d + δ)[y := 0], the desired lower bound
simplifies to a linear, and hence monotonic, function of δ.
The infimum of the monotonic function in δ is reached at one of the two extreme points. When
δ = 0 (i.e., the transition occurs immediately), then the value of L_4 at (t, d) is
[[a_2·e_0 + a_3·e_1 + a_3·e_2 + (a_1 + a)·e_3]]_d. When δ approaches [[e_3]]_d (i.e., the transition occurs
as late as possible), then the value of L_4 at (t, d) approaches
[[a_2·e_0 + a_3·e_1 + a_3·e_2 + (a_2 + b)·e_3]]_d. Taking the minimum of the two, the lower-bound
label L_4 for R_4 is (a_2, a_3, a_3, a_4), where a_4 is the minimum of a_1 + a and a_2 + b. See Figure 4.
Finally, we need to deduce the bit l_4, which indicates whether the lower bound L_4 is a possible
value of the integral. If a_1 + a ≤ a_2 + b, then the lower bound is obtained with δ = 0, and L_4 is
possible for R_4 iff L_3 is possible for R_3; so l_4 is the same as l_3. Otherwise, if a_1 + a > a_2 + b,
then the lower bound is obtained with δ approaching [[e_3]]_d; this infimum is not attained within
the open region R_4, and so l_4 = 1.

Figure 5
We now formally define the edges between bound-labeled regions of the bounds graph B_{p,R_f}(A).
Suppose that the region graph R(A) has an edge from R to R', and let a be the duration measure
of R. Then the bounds graph has an edge from (R; L; l; U; u) to (R'; L'; l'; U'; u') iff the bound
expressions L = (a_0, ..., a_n), U = (b_0, ..., b_n), L' = (a'_0, ..., a'_n), and U' = (b'_0, ..., b'_n)
and the bits l, u, l', and u' are related as follows (the relations for the upper bounds U, u mirror
those for the lower bounds L, l). There are various cases to consider, depending
on whether the edge from R to R' is a time edge or a transition edge:
Time edge 1 R' is a boundary region and R = pred(R') is an open region: let 1 ≤ k ≤ n be the
largest index such that R' satisfies fr(x_k) = 0; then
for all 0 ≤ i ≤ n − k, we have a_i = a'_{i+k};
a_n = a'_k + a;
all remaining coefficients are irrelevant and set to 0; and l = l'.
Time edge 2 R is a boundary region and R' = succ(R) is an open region:
a_i = a'_i for all 1 ≤ i ≤ n, the irrelevant coefficients of R (including a_0) are set to 0, and l = l'.
Transition edge 1 R' is a boundary region, R is an open region, and the clock with the k-th
smallest fractional part in R is reset:
for all 0 ≤ i < k, we have a_i = a'_{i+1};
for all k ≤ i < n, we have a_i = a'_i;
if a'_n ≤ a'_1 + a, then a_n = a'_n and l = l';
if a'_1 + a < a'_n, then a_n = a'_1 + a and l = 1.
Transition edge 2 Both R and R' are boundary regions, and the clock with the k-th smallest
fractional part in R is reset:
for all 0 ≤ i < k, we have a_i = a'_{i+1};
for all k ≤ i ≤ n, we have a_i = a'_i; and l = l'.
This case is illustrated in Figure 5.
This completes the definition of the bounds graph B_{p,R_f}(A).
Reachability in the bounds graph
Given a state σ = (s, c), two bound expressions L and U, and two bits l and u, we define the
(bounded) interval I(σ; L; l; U; u) of the nonnegative real line as follows: the left endpoint is [[L]]_c;
the right endpoint is [[U]]_c; if l = 0, then the interval is left-closed, else it is left-open; if u = 0,
then the interval is right-closed, else it is right-open. The following lemma states the fundamental
property of the bounds graph B_{p,R_f}(A).
Lemma 1 Let A be a timed automaton, let p be a duration measure for A, and let R_f be a region
of A. For every state σ of A and every nonnegative real δ, there is a state τ ∈ R_f such that
(σ, 0) →* (τ, δ) iff in the bounds graph B_{p,R_f}(A), there is a path to R_f from a bound-labeled region
(R; L; l; U; u) with σ ∈ R and δ ∈ I(σ; L; l; U; u).
Proof. Consider a state σ of A and a nonnegative real δ. Suppose (σ, 0) →* (τ, δ) for some τ ∈ R_f.
Then, by the definition of the region graph R(A), we have a sequence σ_0, σ_1, ..., σ_m of states,
with σ_0 ∈ R_f and σ_m = σ, such that the region graph contains an edge from the region R_{i+1}
containing σ_{i+1} to the region R_i containing σ_i. We claim that there exist bound-labeled regions
B_0, ..., B_m such that (1) for all 0 ≤ i ≤ m, the region component of B_i is R_i, (2) the bounds
graph B_{p,R_f}(A) has an edge from B_0 to R_f and from B_{i+1} to B_i for all 0 ≤ i < m, and
(3) δ lies in the interval determined by the labels of B_m at σ.
This claim is proved by induction on i, using the definition of the edges in
the bounds graph.
Conversely, consider a sequence of bound-labeled regions B_0, ..., B_m such that the bounds graph
has an edge from B_0 to R_f and from B_{i+1} to B_i for all 0 ≤ i < m, with B_m = (R; L; l; U; u).
We claim that for all states σ ∈ R and all δ ∈ I(σ; L; l; U; u), there exists τ ∈ R_f with
(σ, 0) →* (τ, δ). This is again proved by induction on i, using the definition
of the edges in the bounds graph. 2
For a bound-labeled region B = (R; L; l; U; u), let I(B) denote the union ⋃_{σ∈R} I(σ; L; l; U; u) of
intervals. It is not difficult to check that the set I(B) is a bounded interval of the nonnegative real
line with integer endpoints. The left endpoint ℓ of I(B) is the infimum of [[L]]_c over all
clock valuations c that are consistent with R; that is, ℓ = inf{[[L]]_c | (s, c) ∈ R}. Since all irrelevant
coefficients in the bound expression L are 0, the infimum ℓ is equal to the smallest nonzero coefficient
in L (the left end-point is 0 if all coefficients are 0). Similarly, the right endpoint of I(B) is the
supremum of [[U]]_c over all choices of c that are consistent with R, and this supremum is equal
to the largest coefficient in U. The type of the interval I(B) can be determined as follows:
• If l = 0 and the infimum ℓ is attained by some valuation consistent with R, then I(B) is
left-closed, and otherwise I(B) is left-open.
• If u = 0 and the supremum is attained by some valuation consistent with R, then I(B) is
right-closed, and otherwise I(B) is right-open.
For instance, consider an open region R whose labels are such that the smallest nonzero coefficient
in L is 1 and the largest coefficient in U is 5, with L and U each containing at least two distinct
nonzero coefficients. Then I(B) is the open interval (1, 5), irrespective of the values of l and u.
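The endpoint rules above, smallest nonzero coefficient of L and largest coefficient of U, can be sketched directly (the openness bits are omitted here, and the function name and coefficient tuples are our illustrations):

```python
# Endpoint rules from the text: the left endpoint is the smallest nonzero
# coefficient of L (0 if all coefficients are 0); the right endpoint is the
# largest coefficient of U. The openness bits l, u are omitted in this sketch.
def interval_endpoints(L, U):
    nonzero = [a for a in L if a != 0]
    left = min(nonzero) if nonzero else 0
    right = max(U)
    return left, right

endpoints = interval_endpoints((0, 1, 2), (5, 3, 0))   # hypothetical labels
```

With these hypothetical labels the endpoints are 1 and 5, matching the shape of the (1, 5) example in the text.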
Lemma 2 Let A be a timed automaton, let ∫p ∈ I be a duration constraint for A, and let R_0 and R_f
be two regions of A. There are two states σ ∈ R_0 and τ ∈ R_f and a real number δ ∈ I such that
(σ, 0) →* (τ, δ) iff in the bounds graph B_{p,R_f}(A), there is a path to R_f from a bound-labeled
region B with region component R_0 and I(B) ∩ I ≠ ∅.
Hence, to solve the duration-bounded reachability problem (A, R_0, R_f, ∫p ∈ I), we construct the
portion of the bounds graph B_{p,R_f}(A) from which the special vertex R_f is reachable. This can be
portion of the bounds graph B p;R f (A) from which the special vertex R f is reachable. This can be
done in a backward breadth-first fashion starting from the final region R f . On a particular path
through the bounds graph, the same region may appear with different bound expressions. Although
there are infinitely many distinct bound expressions, the backward search can be terminated within
finitely many steps, because when the coefficients of the bound expressions become sufficiently large
relative to I , then their actual values become irrelevant. This is shown in the following section.
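The backward search can be sketched generically; the predecessor enumeration and the collapsing map (written γ in the next section) are supplied as functions, and the toy graph below is our stand-in for the bounds-graph construction:

```python
# Backward breadth-first search over an abstract graph interface.
# `predecessors(v)` and `truncate(v)` are assumed to be supplied by the
# bounds-graph construction; here we use a toy graph and the identity map.
from collections import deque

def backward_reachable(final, predecessors, truncate):
    """All (truncated) vertices from which `final` is reachable."""
    seen = {final}
    queue = deque([final])
    while queue:
        v = queue.popleft()
        for u in predecessors(v):
            u = truncate(u)          # collapse to the finite quotient
            if u not in seen:
                seen.add(u)
                queue.append(u)
    return seen

# Toy graph: edges 1 -> 0, 2 -> 0, 2 -> 1, 3 -> 2; search backward from 0.
preds = {0: [1, 2], 1: [2], 2: [3], 3: []}
found = backward_reachable(0, lambda v: preds[v], lambda v: v)
```

Termination hinges entirely on the truncation map producing only finitely many distinct vertices, which is exactly what the collapsing argument of the next section provides.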
Collapsing the bounds graph
Given a nonnegative integer constant m, we define an equivalence relation ≅_m over bound-labeled
regions as follows. For two nonnegative integers a and b, define a ≅_m b iff either a = b, or both
a > m and b > m. For two bound expressions e = (a_0, ..., a_n) and e' = (a'_0, ..., a'_n), define
e ≅_m e' iff for all 0 ≤ i ≤ n, a_i ≅_m a'_i. For two bound-labeled regions
B_1 = (R_1; L_1; l_1; U_1; u_1) and B_2 = (R_2; L_2; l_2; U_2; u_2), define B_1 ≅_m B_2
iff the following four conditions hold:
1. R_1 = R_2;
2. L_1 ≅_m L_2 and U_1 ≅_m U_2;
3. either l_1 = l_2, or some coefficient in L_1 is greater than m;
4. either u_1 = u_2, or some coefficient in U_1 is greater than m.
The following lemma states that the equivalence relation ≅_m on bound-labeled regions is back
stable.
Lemma 3 If the bounds graph B_{p,R_f}(A) contains an edge from a bound-labeled region B_1 to a
bound-labeled region B'_1, then for every bound-labeled region B'_2 with B'_1 ≅_m B'_2, there exists
a bound-labeled region B_2 such that B_1 ≅_m B_2 and the bounds graph contains an edge from B_2
to B'_2.
Proof. Consider two bound-labeled regions B'_1 and B'_2 such that B'_1 ≅_m B'_2. Let R' be the clock
region of B'_1 and of B'_2 (by the definition of ≅_m, they have the same clock region), and let R be a
clock region such that the region graph R(A) has an edge from R to R'. Then there is a unique
bound-labeled region B_1 = (R; L_1; l_1; U_1; u_1) such that the bounds graph B_{p,R_f}(A) has an edge
from B_1 to B'_1, and there is a unique bound-labeled region B_2 = (R; L_2; l_2; U_2; u_2) such that the
bounds graph has an edge from B_2 to B'_2. It remains to be shown that B_1 ≅_m B_2.
There are 4 cases to consider according to the rules for edges of the bounds graph. We consider
only the case when R' is a boundary region, R is an open region, and the clock with the k-th
smallest fractional part in R is reset. Let the duration measure be a in R. We will establish that
L_1 ≅_m L_2, and that l_1 = l_2 unless some coefficient in L_1 is greater than m; the case of the upper
bounds is similar.
According to the rule, each coefficient of L_1 with index i < n equals a coefficient of L'_1, and the
coefficient of L_2 with the same index i equals the coefficient of L'_2 at the same position. Since
L'_1 ≅_m L'_2, it follows that the coefficients of L_1 and L_2 with index i < n are ≅_m-related. For
the last coefficients we have a^1_n = min(a'^1_n, a'^1_1 + a) and a^2_n = min(a'^2_n, a'^2_1 + a), and
we have 4 cases to consider. (i) a^1_n = a'^1_n and a^2_n = a'^2_n. Since a'^1_n ≅_m a'^2_n, we have
a^1_n ≅_m a^2_n; moreover, l_1 = l'_1 and l_2 = l'_2, and if l'_1 ≠ l'_2, then some coefficient of L'_1
exceeds m, and since each coefficient of L_1 either equals a coefficient of L'_1 or exceeds one by a,
some coefficient of L_1 also exceeds m. (ii) a^1_n = a'^1_1 + a and a^2_n = a'^2_n. In this case the
two minima can differ only if all the compared values exceed m; hence a^1_n > m and a^2_n > m, so
a^1_n ≅_m a^2_n, and since at least one coefficient of L_1 exceeds m, there is no requirement that
l_1 = l_2 (indeed, they can be different). The cases (iii) a^1_n = a'^1_n, a^2_n = a'^2_1 + a, and
(iv) a^1_n = a'^1_1 + a, a^2_n = a'^2_1 + a have similar analysis. 2
Since the equivalence relation ≅_m is back stable, for checking reachability between bound-labeled
regions in the bounds graph B_{p,R_f}(A), it suffices to look at the quotient graph [B_{p,R_f}(A)]_{≅_m}.
The following lemma indicates a suitable choice for the constant m for solving a duration-bounded
reachability problem.
Lemma 4 Consider two bound-labeled regions B_1 and B_2 and a bounded interval I ⊆ R with integer
endpoints. If B_1 ≅_m B_2 for the right endpoint m of I, then I ∩ I(B_1) ≠ ∅ iff I ∩ I(B_2) ≠ ∅.
Proof. Consider bound-labeled regions B_1 = (R; L_1; l_1; U_1; u_1) and B_2 = (R; L_2; l_2; U_2; u_2) such
that B_1 ≅_m B_2. It is easy to check that the left end-points of I(B_1) and I(B_2) are either equal or
both exceed m (see the rules for determining the left end-point). We need to show that when the
left end-points are at most m, either both I(B_1) and I(B_2) are left-open or both are left-closed. If
l_1 = l_2, this is trivially true. Suppose l_1 ≠ l_2; then we know that some coefficient of L_1 and of L_2
exceeds m. Since the left end-point is m or smaller, we know that both L_1 and L_2 have at least
two nonzero coefficients. In this case, both the intervals are left-open irrespective of the bits l_1 and
l_2. A similar analysis of right end-points shows that either both the right end-points exceed m, or
both are at most m, are equal, and both the intervals are either right-open or right-closed. 2
A bound expression e is m-constrained, for a nonnegative integer m, iff all coefficients in e are
at most m + 1. Clearly, for every bound expression e, there exists a unique m-constrained bound
expression γ(e) such that e ≅_m γ(e). A bound-labeled region (R; L; l; U; u) is m-constrained
iff (1) both L and U are m-constrained, (2) if some coefficient of L is m + 1, then l = 0, and (3) if
some coefficient of U is m + 1, then u = 0. Clearly, for every bound-labeled region B, there exists
a unique m-constrained bound-labeled region γ(B) such that B ≅_m γ(B). Since no two distinct
m-constrained bound-labeled regions are ≅_m-equivalent, it follows that every ≅_m-equivalence class
contains precisely one m-constrained bound-labeled region. We use the m-constrained bound-labeled
regions to represent the ≅_m-equivalence classes.
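On coefficients, the representative map γ is a simple truncation to m + 1, following the definition above (the bit normalization of conditions (2) and (3) is omitted in this sketch):

```python
# The map gamma(.) onto m-constrained bound expressions: any coefficient
# larger than m is replaced by m + 1, so that e ~=_m gamma(e) and all
# coefficients stay at most m + 1. Bit normalization is omitted here.
def gamma(coeffs, m):
    return tuple(a if a <= m else m + 1 for a in coeffs)

# With m = 4, the coefficients 7 and 9 both collapse to 5 = m + 1,
# which preserves ~=_m-equivalence (both sides exceed m).
truncated = gamma((0, 3, 7, 9), 4)
```

Note that γ is idempotent on coefficients, as a canonical-representative map must be.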
The number of m-constrained expressions over n clocks is (m+2) n+1 . Hence, for a given region
R, the number of m-constrained bound-labeled regions of the form (R; L; l; U; u) is 4 \Delta (m+2) 2(n+1) .
From the bound on the number of clock regions, we obtain a bound on the number of m-constrained
bound-labeled regions of A, or equivalently, on the number of ≅m-equivalence classes of bound-labeled
regions.
Lemma 5 Let A be a timed automaton with location set S and clock set X such that n is the number
of clocks, and no clock x is compared to a constant larger than m x . For every nonnegative integer
m, the number of m-constrained bound-labeled regions of A is at most
Consider the duration-bounded reachability problem of deciding whether R f is reachable from R 0
under the duration constraint ∫ p ∈ I, and let m be the
right endpoint of the interval I. By Lemma 5, the number of m-constrained bound-labeled regions
is exponential in the length of the problem description. By combining Lemmas 2, 3, and 4, we
obtain the following exponential-time decision procedure for solving the given duration-bounded
reachability problem.
Theorem 1 Let m be the right endpoint of the interval I ⊆ R. The answer to the duration-bounded
reachability problem for R 0 , R f , and the duration constraint ∫ p ∈ I is
affirmative iff there exists a finite sequence B 1 , B 2 , . . . , B k
of m-constrained bound-labeled regions of A such that
1. the bounds graph B p;R f (A) contains an edge to R f from some bound-labeled region B with
fl(B) = B 1 ;
2. for all 1 ≤ i < k, the bounds graph B p;R f (A) contains an edge to B i from some bound-labeled
region B with fl(B) = B i+1 ;
3. and the clock region of B k is R 0 and I(B k ) ∩ I ≠ ∅.
Hence, when constructing, in a backward breadth-first fashion, the portion of the bounds graph
(A) from which the special vertex R f is reachable, we need to explore only m-constrained
bound-labeled regions. For each m-constrained bound-labeled region B i , we first construct all
predecessors of B i . The number of predecessors of B i is finite, and corresponds to the number
of predecessors of the clock region of B i in the region graph R(A). Each predecessor B of B i
that is not an m-constrained bound-labeled region is replaced by the - =m -equivalent m-constrained
region fl(B). The duration-bounded reachability property holds if a bound-labeled region B with
clock region R 0 and I(B) ∩ I ≠ ∅ is found. If the search terminates otherwise, by generating no new m-constrained
bound-labeled regions, then the answer to the duration-bounded reachability problem is negative.
The time complexity of the search is proportional to the number of m-constrained bound-labeled
regions, which is given in Lemma 5. The space complexity of the search is Pspace, because the
representation of an m-constrained bound-labeled region and its predecessor computation requires
only space polynomial in the length of the problem description.
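The backward breadth-first search just described can be outlined schematically. In the sketch below (our rendering, not the paper's), `predecessors`, `gamma`, and `accept` are placeholder callbacks: the real predecessor computation works on the region graph, `gamma` maps a bound-labeled region to its m-constrained representative, and `accept` tests for clock region R 0 with I(B) meeting the interval I.

```python
from collections import deque

def backward_search(final_vertices, predecessors, gamma, accept):
    """Explore only m-constrained bound-labeled regions, replacing each
    predecessor by its equivalent m-constrained representative."""
    frontier = deque(final_vertices)
    seen = set(frontier)
    while frontier:
        B = frontier.popleft()
        if accept(B):          # clock region R0 and I(B) meets the interval I
            return True
        for P in predecessors(B):
            P = gamma(P)       # normalize before enqueueing
            if P not in seen:
                seen.add(P)
                frontier.append(P)
    return False               # no new m-constrained regions: answer negative
```

Termination follows because the set of m-constrained bound-labeled regions is finite (Lemma 5), and only polynomial space is needed if the frontier is enumerated rather than stored, as the text notes.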
Corollary 1 The duration-bounded reachability problem for timed automata can be decided in
Pspace.
The duration-bounded reachability problem for timed automata is Pspace-hard, because already
the (unbounded) reachability problem between clock regions is Pspace-hard [AD94].
We solved the duration-bounded reachability problem between two specified clock regions. Our
construction can be used for solving many related problems. First, it should be clear that the
initial and/or final region can be replaced either by a specific state with rational clock values, or
by a specific location (i.e., a set of clock regions). For instance, suppose that we are given an
initial state σ, a final state τ, and a duration constraint ∫ p ∈ I,
and we are asked to decide whether τ is reachable from σ under this
duration constraint. Assuming σ and τ assign rational values to all clocks,
we can choose an appropriate time unit so that the regions [σ] and [τ] are singletons. It follows
that the duration-bounded reachability problem between rational states is also solvable in Pspace.
A second example of a duration property we can decide is the following. Given a real-time
system modeled as a timed automaton, and nonnegative integers m, a, and b, we sometimes want
to verify that in every time interval of length m, the system spends at least a and at most b
accumulated time units in a given set of locations. For instance, for a railroad crossing similar to
the one that appears in various papers on real-time verification [AHH96], our algorithm can be
used to check that "in every interval of 1 hour, the gate is closed for at most 5 minutes." The
verification of this duration property, which depends on various gate delays and on the minimum
separation time between consecutive trains, requires the accumulation of the time during which the
gate is closed.
As a third, and final, example, recall the duration-bounded causality property ( ) from the
introduction. Assume that each location of the timed automaton is labeled with atomic propositions
such as q, denoting that the ringer is pressed, and r, denoting the response. The duration measure
is defined so that p(s) = 1 if q is a label of s, and p(s) = 0 otherwise. The labeling of the locations
with atomic propositions is extended to regions and bound-labeled regions. The desired duration-
bounded causality property, then, does not hold iff there is an initial region R 0 , a final region R f
labeled with r, and a bound-labeled region B = (R 0 ; L; l; U; u) with I(B) ∩ I ≠ ∅, and in
the bounds graph B p;R f , there is a path from B to R f that passes only through regions that are
not labeled with r.
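To make the role of a duration measure concrete, here is a small illustration of our own (not code from the paper): a run is abstracted as a list of (location, dwell-time) pairs, p is 1 exactly on the locations labeled with q, and the accumulated duration is compared against an interval.

```python
def accumulated_duration(run, q_locations):
    """Sum p(s)*dt over the run, where p(s)=1 iff location s is labeled q."""
    return sum(dt for loc, dt in run if loc in q_locations)

def meets_duration_constraint(run, q_locations, lo, hi):
    """Check that the accumulated duration lies in the interval [lo, hi]."""
    d = accumulated_duration(run, q_locations)
    return lo <= d <= hi
```

For the railroad example above, q_locations would be the locations in which the gate is closed, and the constraint "at most 5 minutes in every hour-long window" would be checked with lo = 0 and hi = 5 over each window of the run.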
The duration-bounded reachability problem has been studied, independently, in [KPSY93] also.
The approach taken there is quite different from ours. First, the problem is solved in the case of
discrete time, where all transitions of a timed automaton occur at integer time values. Next, it is
shown that the cases of discrete (integer-valued) time and dense (real-valued) time have the same
answer, provided the following two conditions are met: (1) the clock constraints of timed automata
use only positive boolean combinations of non-strict inequalities (i.e., inequalities involving ≤ and ≥),
and (2) the duration constraint is one-sided (i.e., it has the form ∫ p ≤ N or ∫ p ≥ N for a
nonnegative integer N). The first requirement ensures that the runs of a timed automaton are closed under
digitization (i.e., rounding of real-numbered transition times relative to an arbitrary, but fixed
fractional part ε ∈ [0, 1) [HMP92]). The second requirement rules out two-sided duration constraints such
as 2 ≤ ∫ p ≤ 3. The approach of proving that the discrete-time and the dense-time
answers to the duration-bounded reachability problem coincide gives a simpler solution than ours,
and it also admits duration measures that assign negative integers to some locations. However, both
requirements (1) and (2) are essential for this approach. We also note that for timed automata
with a single clock, [KPSY93] gives an algorithm for checking more complex duration constraints,
such as ∫ p ≤ ∫ p 0 for two
different duration measures p and p 0 .
Instead of equipping timed automata with duration measures, a more general approach extends
timed automata with variables that measure accumulated durations. Such variables, which are
called integrators or stop watches, may advance in any given location either with time derivative
1 (like a clock) or with time derivative 0 (not changing in value). Like clocks, integrators can be
reset with transitions of the automaton, and the constraints guarding the automaton transitions
can test integrator values. The reachability problem between the locations of a timed automaton
with integrators, however, is undecidable [ACH+95]; already a single integrator can
cause undecidability [HKPV95]. Still, in many cases of practical interest, the reachability problem
for timed automata with integrators can be answered by a symbolic execution of the automaton.
In contrast to the use of integrators, whose real-numbered values are part of the automaton state,
we achieved decidability by separating duration constraints from the system and treating them as
properties. This distinction between strengthening the model and strengthening the specification
language with the duration constraints is essential for the decidability of the resulting verification
problem. The expressiveness of specification languages can be increased further. For example,
it is possible to define temporal logics with duration constraints or integrators. The decidability
of the model-checking problem for such logics remains an open problem. For model checking a
given formula, we need to compute the characteristic set, which contains the states that satisfy the
formula. In particular, given an initial region R 0 , a final state τ, and a duration constraint
∫ p ∈ I,
we need to compute the set Q 0 ⊆ R 0 of states σ ∈ R 0 from which τ is reachable under
the duration constraint. Each bound-labeled region (R 0 ; L; l; U; u) from which R f is reachable in
the bounds graph B p;R f contributes the subset {σ ∈ R 0 | I(σ; L; l; U; u) ∩ I ≠ ∅} to Q 0 . In general,
there are infinitely many such contributions, possibly all singletons, and we know of no description
of Q 0 that can be used to decide the model-checking problem. By contrast, over discrete time, the
characteristic sets for formulas with integrators can be computed [BES93]. Also, over dense time,
the characteristic sets can be approximated symbolically [AHH96].
Acknowledgements. We thank Sergio Yovine for a careful reading of the manuscript.
--R
Model checking in dense real time.
A theory of timed automata.
The benefits of relaxing punctuality.
Logics and models of real time: a survey.
Automatic symbolic verification of embedded systems.
On model checking for real-time properties with durations
Design and synthesis of synchronization skeletons using branching-time temporal logic
Decidability of bisimulation equivalence for parallel timer processes.
A calculus of durations.
Minimum and maximum delay problems in real-time systems
Timing assumptions and verification of finite-state concurrent systems
What's decidable about hybrid automata?
What good are digital clocks?
Symbolic model checking for real-time systems
Integration graphs: a class of decidable hybrid systems.
Specification and verification of concurrent systems in CESAR.
--TR
--CTR
Nicolas Markey, Jean-François Raskin, Model checking restricted sets of timed paths, Theoretical Computer Science, v.358 n.2, p.273-292, 7 August 2006
Yasmina Abdeddaïm, Eugene Asarin, Oded Maler, Scheduling with timed automata, Theoretical Computer Science, v.354 n.2, p.272-300, 28 March 2006 | real-time systems;duration properties;formal verification;model checking |
262521 | Compile-Time Scheduling of Dynamic Constructs in Dataflow Program Graphs. | AbstractScheduling dataflow graphs onto processors consists of assigning actors to processors, ordering their execution within the processors, and specifying their firing time. While all scheduling decisions can be made at runtime, the overhead is excessive for most real systems. To reduce this overhead, compile-time decisions can be made for assigning and/or ordering actors on processors. Compile-time decisions are based on known profiles available for each actor at compile time. The profile of an actor is the information necessary for scheduling, such as the execution time and the communication patterns. However, a dynamic construct within a macro actor, such as a conditional and a data-dependent iteration, makes the profile of the actor unpredictable at compile time. For those constructs, we propose to assume some profile at compile-time and define a cost to be minimized when deciding on the profile under the assumption that the runtime statistics are available at compile-time. Our decisions on the profiles of dynamic constructs are shown to be optimal under some bold assumptions, and expected to be near-optimal in most cases. The proposed scheduling technique has been implemented as one of the rapid prototyping facilities in Ptolemy. This paper presents the preliminary results on the performance with synthetic examples. | Introduction
A D ataflow graph representation, either as a programming
language or as an intermediate representation
during compilation, is suitable for programming multiprocessors
because parallelism can be extracted automatically
from the representation [1], [2]. Each node, or actor, in a
dataflow graph represents either an individual program instruction
or a group thereof to be executed according to
the precedence constraints represented by arcs, which also
represent the flow of data. A dataflow graph is usually
made hierarchical. In a hierarchical graph, an actor itself
may represent another dataflow graph: it is called a macro
actor.
Particularly, we define a data-dependent macro actor, or
data-dependent actor, as a macro actor of which the execution
sequence of the internal dataflow graph is data dependent
(cannot be predicted at compile time). Some examples
are macro actors that contain dynamic constructs such as
conditionals, data-dependent iteration, and recursion. Actors
are said to be data-independent if not data-dependent.
The scheduling task consists of assigning actors to pro-
cessors, specifying the order in which actors are executed on
each processor, and specifying the time at which they are
S. Ha is with the Department of Computer Engineering, Seoul National
University, Seoul, 151-742, Korea. e-mail: sha@comp.snu.ac.kr
E. Lee is with the Department of Electrical Engineering and Computer
Science, University of California at Berkeley, Berkeley, CA
94720, USA. e-mail: eal@ohm.eecs.berkeley.edu
executed. These tasks can be performed either at compile
time or at run time [3]. In the fully-dynamic scheduling,
all scheduling decisions are made at run time. It has the
flexibility to balance the computational load of processors
in response to changing conditions in the program. In case
a program has a large amount of non-deterministic behav-
ior, any static assignment of actors may result in very poor
load balancing or poor scheduling performance. Then, the
fully dynamic scheduling would be desirable. However, the
run-time overhead may be excessive; for example it may
be necessary to monitor the computational loads of processors
and ship the program code between processors via
networks at run time. Furthermore, it is not usually practical
to make globally optimal scheduling decision at run
time.
In this paper, we focus on the applications with a moderate
amount of non-deterministic behavior such as DSP
applications and graphics applications. Then, the more
scheduling decisions are made at compile time the better
in order to reduce the implementation costs and to make
it possible to reliably meet any timing constraints.
While compile-time processor scheduling has a very rich
and distinguished history [4], [5], most efforts have been
focused on deterministic models: the execution time of
each actor T i on a processor P k is fixed and there are no
data-dependent actors in the program graph. Even in this
restricted domain of applications, algorithms that accomplish
an optimal scheduling have combinatorial complexity,
except in certain trivial cases. Therefore, good heuristic
methods have been developed over the years [4], [6], [7],
[8]. Also, most of the scheduling techniques are applied to
a completely expanded dataflow graph and assume that an
actor is assigned to a processor as an indivisible unit. It
is simpler, however, to treat a data-dependent actor as a
schedulable indivisible unit. Regarding a macro actor as
a schedulable unit greatly simplifies the scheduling task.
Prasanna et al [9] schedule the macro dataflow graphs hierarchically
to treat macro actors of matrix operations as
schedulable units. Then, a macro actor may be assigned to
more than one processor. Therefore, new scheduling techniques
to treat a macro actor as a schedulable unit were
devised.
Compile-time scheduling assumes that static information
about each actor is known. We define the profile of
an actor as the static information about the actor necessary
for a given scheduling technique. For example, if we
use a list scheduling technique, the profile of an actor is
simply the computation time of the actor on a processor.
The communication requirements of an actor with other
actors are included in the profile if the scheduling tech-
HA AND LEE: COMPILE-TIME SCHEDULING OF DYNAMIC CONSTRUCTS IN DATAFLOW PROGRAM GRAPHS 769
nique requires that information. The profile of a macro
actor would be the number of the assigned processors and
the local schedule of the actor on the assigned processors.
For a data-independent macro actor such as a matrix op-
eration, the profile is deterministic. However, the profile
of a data-dependent actor of dynamic construct cannot be
determined at compile time since the execution sequence
of the internal dataflow subgraph varies at run time. For
those constructs, we have to assume the profiles somehow
at compile-time.
The main purpose of this paper is to show how we can
define the profiles of dynamic constructs at compile-time.
A crucial assumption we rely on is that we can approximate
the runtime statistics of the dynamic behavior at compile-
time. Simulation may be a proper method to gather these
statistics if the program is to be run on an embedded DSP
system. Sometimes, the runtime statistics could be given
by the programmer for graphics applications or scientific
applications.
By optimally choosing the profile of the dynamic con-
structs, we will minimize the expected schedule length of a
program assuming the quasi-static scheduling. In figure 1,
actor A is a data-dependent actor. The scheduling result is
shown with a gantt chart, in which the horizontal axis indicates
the scheduling time and the vertical axis indicates
the processors. At compile time, the profile of actor A is
assumed. At run time, the schedule length of the program
varies depending on the actual behavior of actor A. Note
that the pattern of processor availability before actor B
starts execution is preserved at run time by inserting idle
time. Then, after actor A is executed, the remaining static
schedule can be followed. This scheduling strategy is called
quasi-static scheduling that was first proposed by Lee [10]
for DSP applications. The strict application of the quasi-static
scheduling requires that the synchronization between
actors is guaranteed at compile time so that no run-time
synchronization is necessary as long as the pattern of processor
availability is consistent with the scheduled one. It
is generally impractical to assume that the exact run-time
behaviors of actors are known at compile time. Therefore,
synchronization between actors is usually performed at run
time. In this case, it is not necessary to enforce the pattern
of processor availability by inserting idle time. Instead, idle
time will be inserted when synchronization is required to
execute actors. When the execution order of the actors is
not changed from the scheduled order, the actual schedule
length obtained from run-time synchronization is proven to
be not much different from what the quasi-static scheduling
would produce [3]. Hence, our optimality criterion for the
profile of dynamic constructs is based on the quasi-static
scheduling strategy, which makes analysis simpler.
II. Previous Work
All of the deterministic scheduling heuristics assume that
static information about the actors is known. But almost
none have addressed how to define the static information
of data-dependent actors. The pioneering work on this issue
was done by Martin and Estrin [11]. They calculated
A
A
A
(b)
(c) (d)
(a)
Fig. 1. (a) A dataflow graph consists of five actors among which actor
A is a data-dependent actor. (b) Gantt chart for compile-time
scheduling assuming a certain execution time for actor A. (c) At
run time, if actor A takes longer, the second processor is padded
with no-ops and (d) if actor A takes less, the first processor is
idled to make the pattern of processor availability same as the
scheduled one (dark line) in the quasi-static scheduling.
the mean path length from each actor to a dummy terminal
actor as the level of the actor for list scheduling. For exam-
ple, if there are two possible paths divided by a conditional
construct from an actor to the dummy terminal actor, the
level of the actor is a sum of the path lengths weighted by
the probability with which the path is taken. Thus, the
levels of actors are based on statistical distribution of dynamic
behavior of data-dependent actors. Since this is expensive
to compute, the mean execution times instead are
usually used as the static information of data-dependent
actors [12]. Even though the mean execution time seems a
reasonable choice, it is by no means optimal. In addition,
both approaches have the common drawback that a data-dependent
actor is assigned to a single processor, which is
a severe limitation for a multiprocessor system.
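The level computation of Martin and Estrin reduces to a probability-weighted mean; this one-line sketch is our formulation of the idea described above:

```python
def expected_level(alternatives):
    """alternatives: (probability, path_length) pairs for the paths from an
    actor to the dummy terminal actor. Returns the probability-weighted
    mean path length used as the actor's level in list scheduling."""
    return sum(p * length for p, length in alternatives)
```

For a conditional whose true path has length 10 with probability 0.7 and whose false path has length 4, the level is 0.7 * 10 + 0.3 * 4 = 8.2; using a single mean execution time for the whole construct instead discards this path structure.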
Two groups of researchers have proposed quasi-static
scheduling techniques independently: Lee [10] and Loeffler
et al [13]. They developed methods to schedule conditional
and data-dependent iteration constructs respectively. Both
approaches allow more than one processor to be assigned to
dynamic constructs. Figure 2 shows a conditional and compares
three scheduling methods. In figure 2 (b), the local
schedules of both branches are shown, where two branches
are scheduled on three processors while the total
number of processors is 4.
In Lee's method, we overlap the local schedules of both
branches and choose the maximum termination for each
processor. For hard real-time systems, it is the proper
choice. Otherwise, it may be inefficient if either one branch
is more likely to be taken and the size of the likely branch
is much smaller. On the other hand, Loeffler takes the
local schedule of more likely branch as the profile of the
conditional. This strategy is inefficient if both branches
are equally likely to be taken and the size of the assumed
branch is much larger. Finally, a conditional evaluation can
be replaced with a conditional assignment to make the construct
static; the graph is modified as illustrated in figure 2 (c).
In this scheme, both true and false branches are sched-
770 IEEE TRANSACTIONS ON COMPUTERS, VOL. 46, NO. 7, JULY 1997
Fig. 2. Three different schedules of a conditional construct. (a) An
example of a conditional construct that forms a data-dependent
actor as a whole. (b) Local deterministic schedules of the two
branches. (c) A static schedule by modifying the graph to use
conditional assignment. (d) Lee's method to overlap the local
schedules of both branches and to choose the maximum for each
processor. (e) Loeffler's method to take the local schedule of the
branch which is more likely to be executed.
uled and the result from one branch is selected depending
on the control boolean. An immediate drawback is inefficiency
which becomes severe when one of the two branches
is not a small actor. Another problem occurs when the
unselected branch generates an exception condition such
as a divide-by-zero error. All these methods on conditionals
are ad-hoc and not appropriate as a general solution.
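The three strategies of figure 2 differ only in how the two branch schedules are merged into one profile. A hedged sketch, under our simplifying assumption that a local schedule is the list of busy times on the N processors assigned to the construct:

```python
def lee_profile(true_sched, false_sched):
    """Overlap the branches; take the per-processor maximum (worst case)."""
    return [max(t, f) for t, f in zip(true_sched, false_sched)]

def loeffler_profile(true_sched, false_sched, p_true):
    """Assume the more likely branch; mispredictions absorbed at run time."""
    return true_sched if p_true >= 0.5 else false_sched

def conditional_assignment_profile(true_sched, false_sched):
    """Execute both branches and select the result: per-processor times add."""
    return [t + f for t, f in zip(true_sched, false_sched)]
```

The comparison makes the trade-offs in the text visible: Lee's profile is safe but pessimistic when one branch is small and likely, Loeffler's is cheap but poor when the branches are equally likely, and conditional assignment always pays for both branches.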
Quasi-static scheduling is very effective for a data-dependent
iteration construct if the construct can make
effective use of all processors in each cycle of the iteration.
It schedules one iteration and pads with no-ops to make
the pattern of processor availability at the termination the
same as the pattern of the start (figure 3). (Equivalently,
all processors are occupied for the same amount of time
in each iteration). Then, the pattern of processor availability
after the iteration construct is independent of the
number of iteration cycles. This scheme breaks down if the
construct cannot utilize all processors effectively.
Fig. 3. A quasi-static scheduling of a data-dependent iteration con-
struct. The pattern of processor availability is independent of the
number of iteration cycles.
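The key property in figure 3 can be stated with a one-line model of our own: once no-ops make every processor busy for the same time per cycle, the availability pattern only shifts, so its shape is the same for any number of cycles.

```python
def pattern_after_iteration(start_pattern, cycle_time, num_cycles):
    """Every processor advances by num_cycles * cycle_time, so the relative
    shape of the availability pattern is unchanged."""
    return [t + num_cycles * cycle_time for t in start_pattern]
```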
The recursion construct has not yet been treated successfully
in any statically scheduled data flow paradigm.
Recently, a proper representation of the recursion construct
has been proposed [14]. But, it is not explained
how to schedule the recursion construct onto multiproces-
sors. With finite resources, careless exploitation of the parallelism
of the recursion construct may cause the system to
deadlock.
In summary, dynamic constructs such as conditionals,
data-dependent iterations, and recursions, have not been
treated properly in past scheduling efforts, either for static
scheduling or dynamic scheduling. Some ad-hoc methods
have been introduced but proven unsuitable as general so-
lutions. Our earlier result with data-dependent iteration [3]
demonstrated that a systematic approach to determine the
profile of data-dependent iteration actor could minimize
the expected schedule length. In this paper, we extend our
analysis to general dynamic constructs.
In the next section, we will show how dynamic constructs
are assigned their profiles at compile-time. We also prove
the given profiles are optimal under some unrealistic as-
sumptions. Our experiments enable us to expect that our
decisions are near-optimal in most cases. Sections 4, 5, and 6
contain examples with data-dependent iteration, recursion,
and conditionals, respectively, to show how the profiles
of dynamic constructs can be determined with known
runtime statistics. We implement our technique in the
Ptolemy framework [15]. The preliminary simulation results
will be discussed in section 7. Finally, we discuss the
limits of our method and mention the future work.
III. Compile-Time Profile of Dynamic
Constructs
Each actor should be assigned its compile-time profile
for static scheduling. Assuming a quasi-static scheduling
strategy, the proposed scheme is to decide the profile of a
construct so that the average schedule length is minimized
assuming that all actors except the dynamic construct are
data-independent. This objective is not suitable for a hard
real-time system as it does not bound the worst case be-
havior. We also assume that all dynamic constructs are
uncorrelated. With this assumption, we may isolate the effect
of each dynamic construct on the schedule length sep-
arately. In case there are inter-dependent actors, we may
group those actors as another macro actor, and decide the
optimal profile of the large actor. Even though the decision
of the profile of the new macro actor would be complicated
in this case, the approach is still valid. For nested dynamic
constructs, we apply the proposed scheme from the inner
dynamic construct first. For simplicity, all examples in this
paper will have only one dynamic construct in the dataflow
graph.
The run-time cost of an actor i, C i , is the sum of the total
computation time devoted to the actor and the idle time
due to the quasi-static scheduling strategy over all proces-
sors. In figure 1, the run-time cost of a data-dependent
actor A is the sum of the lightly (computation time) and
darkly shaded areas after actor A or C (immediate idle
time after the dynamic construct). The schedule length of
a certain iteration can be written as
schedule
where T is the total number of processors in the system,
and R is the rest of the computation including all idle time
that may result both within the schedule and at the end.
Therefore, we can minimize the expected schedule length
by minimizing the expected cost of the data-dependent actor
or dynamic construct if we assume that R is independent
of our decisions for the profile of actor i. This assumption
is unreasonable when precedence constraints make R
dependent on our choice of profile. Consider, for example,
a situation where the dynamic construct is always on the
critical path and there are more processors than we can
effectively use. Then, our decision on the profile of the
construct will directly affect the idle time at the end of
the schedule, which is included in R. On the other hand,
if there is enough parallelism to make effective use of the
unassigned processors and the execution times of all actors
are small relative to the schedule length, the assumption is
valid. Realistic situations are likely to fall between these
two extremes.
To select the optimal compile-time profile of actor i, we
assume that the statistics of the runtime behavior is known
at compile-time. The validity of this assumption varies to
large extent depending on the application. In digital signal
processing applications where a given program is repeatedly
executed with data stream, simulation can be useful
to obtain the necessary information. In general, however,
we may use a well-known distribution, for example uniform
or geometric distribution, which makes the analysis simple.
Using the statistical information, we choose the profile to
give the least expected cost at runtime as the compile-time
profile.
The profile of a data-dependent actor is a local schedule
which determines the number of assigned processors and
computation times taken on the assigned processors. The
overall algorithm of profile decision is as follows. We assume
that the dynamic behavior of actor i is expressed with
parameter k and its distribution p(k).
// T is the total number of processors.
// N is the number of processors assigned to the actor.
// A(N,k) is the actor cost with parameters N and k.
// p(k) is the probability of parameter k.
for (N = 1; N <= T; N++)
cost(N) = sum over all k of p(k) * A(N,k);
select as the profile the local schedule whose N minimizes cost(N);
In the next section, we will illustrate the proposed
scheme with data-dependent iteration, recursion, and conditionals
respectively to show how profiles are decided with
runtime statistics.
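A runnable rendering of the selection loop above follows; the names are hypothetical, with `cost_fn(N, k)` standing for the actor cost A(N, k) and `p` for the distribution of the parameter k.

```python
def choose_profile(T, cost_fn, p):
    """Return the number of assigned processors N in 1..T that minimizes
    the expected runtime cost sum_k p[k] * cost_fn(N, k)."""
    def expected_cost(N):
        return sum(prob * cost_fn(N, k) for k, prob in p.items())
    return min(range(1, T + 1), key=expected_cost)
```

For instance, with the illustrative cost cost_fn(N, k) = N + k/N (a linear cost for reserving processors plus a perfectly parallelizable workload of size k) and k uniform on {2, 4, 6}, the expected cost on T = 4 processors is minimized at N = 2.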
IV. Data Dependent Iteration
In a data-dependent iteration, the number of iteration
cycles is determined at runtime and cannot be known at
compile-time. Two possible dataflow representations for
data-dependent iteration are shown in figure 4 [10].
The numbers adjacent to the arcs indicate the number
of tokens produced or consumed when an actor fires [2]. In
figure 4 (a), since the upsample actor produces M tokens
each time it fires, and the iteration body consumes only one
token when it fires, the iteration body must fire M times
for each firing of the upsample actor. In figure 4 (b), the
Fig. 4. Data-dependent iteration can be represented using the either
of the dataflow graphs shown. The graph in (a) is used when the
number of iterations is known prior to the commencement of the
iteration, and (b) is used otherwise.
number of iterations need not be known prior to the commencement
of the iteration. Here, a token coming in from
above is routed through a "select" actor into the iteration
body. The "D" on the arc connected to the control input
of the "select" actor indicates an initial token on that arc
with value "false". This ensures that the data coming into
the "F" input will be consumed the first time the "select"
actor fires. After this first input token is consumed, the
control input to the "select" actor will have value "true"
until function t() indicates that the iteration is finished by
producing a token with value "false". During the itera-
tion, the output of the iteration function f() will be routed
around by the "switch" actor, again until the test function
t() produces a token with value "false". There are many
variations on these two basic models for data-dependent
iteration.
The previous work [3] considered a subset of data-dependent
iterations, in which simultaneous execution of
successive cycles is prohibited as in figure 4 (b). In figure 4
(a), there is no such restriction, unless the iteration body
itself contains a recurrence. Therefore, we generalize the
previous method to permit overlapped cycles when successive
iteration cycles are invokable before the completion of
an iteration cycle. Detection of the intercycle dependency
from a sequential language is the main task of the parallel
compiler to maximize the parallelism. A dataflow represen-
tation, however, reveals the dependency rather easily with
the presence of delay on a feedback arc.
We assume that the probability distribution of the number
of iteration cycles is known or can be approximated
at compile time. Let the number of iteration cycles be a
random variable I with known probability mass function
p(i). For simplicity, we set the minimum possible value of
I to be 0. We let the number of assigned processors be
N and the total number of processors be T . We assume a
blocked schedule as the local schedule of the iteration body
to remove the unnecessary complexity in all illustrations,
although the proposed technique can be applicable to the
overlap execution schedule [16]. In a blocked schedule, all
assigned processors are assumed to be available, or synchronized
at the beginning. Thus, the execution time of
one iteration cycle with N assigned processors is t N as displayed
in figure 5 (a). We denote by s N the time that must
772 IEEE TRANSACTIONS ON COMPUTERS, VOL. 46, NO. 7, JULY 1997
elapse in one iteration before the next iteration is enabled.
This time could be zero, if there is no data dependency
between iterations. Given the local schedule of one iteration
cycle, we decide on the assumed number of iteration
cycles, xN , and the number of overlapped cycles kN . Once
the two parameters, xN and kN , are chosen, the profile of
the data-dependent iteration actor is determined as shown
in figure 5 (b). The subscript N of t N , s N , xN and kN
represents that they are functions of N , the number of the
assigned processors. For brevity, we will omit the subscript
N for the variables without confusion. Using this profile of
the data-dependent macro actor, global scheduling is performed
to make a hierarchical compilation. Note that the
pattern of processor availability after execution of the construct
is different from that before execution. We do not
address how to schedule the iteration body in this paper
since it is the standard problem of static scheduling.
Fig. 5. (a) A blocked schedule of one iteration cycle of a data-dependent iteration actor. A quasi-static schedule is constructed using a fixed assumed number x of cycles in the iteration. The cost of the actor is the sum of the dotted area (execution time) and the dark area (idle time due to the iteration). Parts (b)-(d) display the three possible cases depending on the actual number of cycles i.
According to the quasi-static scheduling policy, three
cases can happen at runtime. If the actual number of cycles
coincides with the assumed number of iteration cycles,
the iteration actor causes no idle time and the cost of the
actor consists only of the execution time of the actor. Oth-
erwise, some of the assigned processors will be idled if the
iteration takes fewer than x cycles (figure 5 (c)), or else the
other processors as well will be idled (figure 5 (d)). The
expected cost of the iteration actor, C(N; k; x), is a sum
of the individual costs weighted by the probability mass
function of the number of iteration cycles. The expected
cost becomes

C(N, k, x) = Σ_{i=0}^{x} p(i) N t x + Σ_{i=x+1}^{∞} p(i) [ N t x + T t ⌈(i − x)/k⌉ ].   (2)

By combining the first term with the first element of the second term, this reduces to

C(N, k, x) = N t x + T t Σ_{i=x+1}^{∞} p(i) ⌈(i − x)/k⌉.   (3)
Our method is to choose three parameters (N , k, and
x) in order to minimize the expected cost in equation (3).
First, we assume that N is fixed. Since C(N; k; x) is a decreasing
function of k with fixed N , we select the maximum
possible number for k. The number k is bounded by two ratios: T/N and t/s. The latter constraint is necessary to avoid any idle time between iteration cycles on a processor. As a result, k is set to be

k = min(⌊T/N⌋, ⌈t/s⌉).   (4)

The next step is to determine the optimal x. If a value x is optimal, the expected cost is not decreased if we vary x by +1 or −1. Therefore, we obtain the following inequalities:

C(N, k, x + 1) ≥ C(N, k, x)  and  C(N, k, x − 1) ≥ C(N, k, x).   (5)
Since t is positive, from inequality (5),

Σ_{j=0}^{∞} p(x + 1 + jk) ≤ N/T ≤ Σ_{j=0}^{∞} p(x + jk).   (6)

If k is equal to 1, the above inequality becomes the same as inequality (5) in [3], which shows that the previous work is a special case of this more general method.
So far, we have determined the optimal values of x and k for a given number N. How to choose the optimal N is the next question we have to answer. Since t is not a simple function of N, no closed form for the N minimizing C(N, k, x) exists, unfortunately. However, we may search exhaustively through all possible values of N and select the value minimizing the cost in polynomial time. Moreover, our experiments show that the search space for N can often be reduced significantly using some criteria.
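To make the search concrete, here is a sketch that is entirely our own: it uses the cost model C(N, k, x) = N t x + T t Σ_{i>x} p(i) ⌈(i − x)/k⌉ as reconstructed above, with invented t_N and s_N figures, enumerating N, fixing k = min(⌊T/N⌋, ⌈t/s⌉), and scanning x:

```python
import math

# Hypothetical per-N cycle times t_N and enabling offsets s_N; these
# numbers are ours, not from the paper.
T = 4                                          # total processors
t_of = {1: 40.0, 2: 22.0, 3: 16.0, 4: 13.0}    # t_N
s_of = {1: 10.0, 2: 6.0, 3: 5.0, 4: 4.0}       # s_N
p = [0.0] + [1.0 / 7.0] * 7                    # uniform 1..7 cycles

def expected_cost(N, k, x, p, t, T):
    # C(N,k,x) = N*t*x + T*t * sum_{i>x} p(i) * ceil((i-x)/k)
    tail = sum(p[i] * math.ceil((i - x) / k) for i in range(x + 1, len(p)))
    return N * t * x + T * t * tail

def best_profile(p, t_of, s_of, T):
    best = None
    for N in t_of:
        t, s = t_of[N], s_of[N]
        k = max(1, min(T // N, math.ceil(t / s)))   # bound on overlap
        for x in range(len(p)):
            c = expected_cost(N, k, x, p, t, T)
            if best is None or c < best[0]:
                best = (c, N, k, x)
    return best

cost, N, k, x = best_profile(p, t_of, s_of, T)
```

With these invented figures the search elects N = 2 assigned processors, k = 2 overlapped cycles, and an assumed number x = 1 of explicitly scheduled cycles.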
Our experiments show that the method is relatively insensitive to the approximated probability mass function for I. Using some well-known distributions which have nice mathematical properties for the approximation, we can reduce the summation terms in (3) and (6) to closed forms.
Let us consider a geometric probability mass function with
parameter q as the approximated distribution of the number
of iteration cycles. This models a class of asymmetric
bell-shaped distributions. The geometric probability mass
function means that, for any non-negative integer r, p(r) = (1 − q) q^r.
To use inequality (6), we find

Σ_{j=0}^{∞} p(x + 1 + jk) = (1 − q) q^{x+1} / (1 − q^k).
HA AND LEE: COMPILE-TIME SCHEDULING OF DYNAMIC CONSTRUCTS IN DATAFLOW PROGRAM GRAPHS 773
Therefore, from inequality (6), the optimal value of x satisfies

q^{x+1} ≤ N (1 − q^k) / (T (1 − q)).

Using floor notation, we can obtain the closed form for the optimal value as follows:

x = max(0, ⌊ log_q( N (1 − q^k) / (T (1 − q)) ) ⌋).

Furthermore, equation (3) is simplified by using the fact

Σ_{i=x+1}^{∞} p(i) ⌈(i − x)/k⌉ = q^{x+1} / (1 − q^k),

getting

C(N, k, x) = N t x + T t q^{x+1} / (1 − q^k).
Now, we have all the simplified formulas for the optimal profile of the iteration actor. Similar simplification is also possible with uniform distributions [17]. If k equals 1, our results coincide with the previous result reported in [3].
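The geometric closed forms can be cross-checked numerically. The sketch below uses our reconstruction of the lost displays (treat both formulas as assumptions) and compares the closed-form x against brute-force minimization:

```python
import math

def cost_geometric(N, k, x, q, t, T):
    # Reconstructed simplification of equation (3) for p(i) = (1-q) q^i.
    return N * t * x + T * t * q ** (x + 1) / (1.0 - q ** k)

def x_closed_form(N, k, q, T):
    # Reconstructed closed form: smallest x with q^(x+1) <= N(1-q^k)/(T(1-q)).
    A = N * (1.0 - q ** k) / (T * (1.0 - q))
    return max(0, math.floor(math.log(A) / math.log(q)))

N, T, q, t = 1, 4, 0.6, 1.0
for k in (1, 2):
    brute = min(range(50), key=lambda x: cost_geometric(N, k, x, q, t, T))
    assert x_closed_form(N, k, q, T) == brute
```

For N = 1, T = 4, q = 0.6 the closed form and the brute-force scan agree (x = 2 for k = 1 and x = 1 for k = 2 under this reconstructed model).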
V. Recursion
Recursion is a construct which instantiates itself as a part
of the computation if some termination condition is not sat-
isfied. Most high level programming languages support this
construct since it makes a program compact and easy to
understand. However, the number of self-instantiations,
called the depth of recursion, is usually not known at
compile-time since the termination condition is calculated
at run-time. In the dataflow paradigm, recursion can be
represented as a macro actor that contains a SELF actor
(figure 6). A SELF actor simply represents an instance of
a subgraph within which it sits.
If the recursion actor has only one SELF actor, the function
of the actor can be identically represented by a data-dependent
iteration actor as shown in figure 4 (b) in the
previous section. This includes as a special case all tail recursive
constructs. Accordingly, the scheduling decision for
the recursion actor will be the same as that of the translated data-dependent iteration actor. In a generalized recursion
construct, we may have more than one SELF actor. The
number of SELF actors in a recursion construct is called
the width of the recursion. In most real applications, the
width of the recursion is no more than two. A recursion
construct with width 2 and depth 2 is illustrated in figure 6
(b) and (c). We assume that all nodes of the same depth in
the computation tree have the same termination condition.
We will discuss the limitation of this assumption later. We
also assume that the run-time probability mass function of
the depth of the recursion is known or can be approximated
at compile-time.
The potential parallelism of the computation tree of
a generalized recursion construct may be huge, since all
nodes at the same depth can be executed concurrently.
The maximum degree of parallelism, however, is usually
not known at compile-time. When we exploit the parallelism
of the construct, we should consider the resource
limitations. We may have to restrict the parallelism in order
not to deadlock the system. Restricting the parallelism
in case the maximum degree of parallelism is too large has
been recognized as a difficult problem to be solved in a dynamic
dataflow system. Our approach proposes an efficient
solution by taking the degree of parallelism as an additional
component of the profile of the recursion construct.
Suppose that the width of the recursion construct is k.
Let the depth of the recursion be a random variable I with
known probability mass function p(i). We denote the degree of parallelism by d, which means that the descendants at depth d in the computation graph are assigned to different processor groups. A descendant recursion construct at depth d is called a ground construct (figure 7 (a)). If we denote the size of each processor group by N, the total number of processors devoted to the recursion becomes N k^d. Then, the profile of a recursion construct is defined
by three parameters: the assumed depth of recursion x, the
degree of parallelism d, and the size of a processor group
N . Our approach optimizes the parameters to minimize
the expected cost of the recursion construct. An example
of the profile of a recursion construct is displayed in figure 7
(b).
Let τ be the sum of the execution times of actors a, c, and h in figure 6, and let τ_o be the sum of the execution times of actors a and b. Then, the schedule length l_x of a ground construct becomes

l_x = τ (k^{x−d} − 1)/(k − 1) + τ_o k^{x−d},

when x ≥ d. At run time, some processors will be idled if
the actual depth of recursion is different from the assumed
depth of recursion, which is illustrated in figure 7 (c) and
(d). When the actual depth of recursion i is smaller than
the assumed depth x, the assigned processors are idled.
Otherwise, the other processors as well are idled. Let R be
the sum of the execution times of the recursion besides the ground constructs. This basic cost R is equal to N τ (k^d − 1)/(k − 1). For i ≤ x, the runtime cost C_1 becomes

C_1 = R + N k^d l_x,

assuming that x is not less than d. For i > x, the cost C_2 becomes

C_2 = R + N k^d l_x + T (l_i − l_x),

where l_i is the ground-construct length that the actual depth i requires.
Therefore, the expected cost of the recursion construct, C(N, x, d), is the sum of the run-time costs weighted by the probability mass function:

C(N, x, d) = Σ_{i=0}^{x} p(i) C_1 + Σ_{i=x+1}^{∞} p(i) C_2.
function f(x)
  if test(x) is TRUE
    return h(f(c1(x)), f(c2(x)));
  else
    return;
Fig. 6. (a) An example of a recursion construct and (b) its dataflow representation. The SELF actor represents the recursive call. (c) The
computation tree of a recursion construct with two SELF actors when the depth of the recursion is two.
Fig. 7. (a) The reduced computation graph of a recursion construct of width 2 when the degree of parallelism is 2. (b) The profile of the
recursion construct. The schedule length of the ground construct is a function of the assumed depth of recursion x and the degree of
parallelism d. A quasi-static schedule is constructed depending on the actual depth i of the recursion in (c) for i < x and in (d) for i > x.
After a few manipulations, the expected cost can be written in closed form.
First, we assume that N is fixed. Since the expected
cost is a decreasing function of d, we select the maximum
possible number for d. The number d is bounded by the processor constraint N k^d ≤ T. Since we assume that the assumed depth of recursion x is greater than the degree of parallelism d, the optimal value for d is

d = min(x, ⌊ log_k (T/N) ⌋).   (18)
Next, we decide the optimal value for x from the observation that if x is optimal, the expected cost is not decreased when x is varied by +1 or −1.
Rearranging the inequalities, we get the following:

Σ_{i=x+1}^{∞} p(i) k^{i−x−1} ≤ N k^d / T.   (20)
Note the similarity of inequality (20) with that for data-dependent
iterations (6). In particular, if k is 1, the two
formulas are equivalent as expected. The optimal values
d and x depend on each other as shown in (18) and (20).
We may need to use iterative computations to obtain the optimal values of d and x, starting from d = ⌊ log_k (T/N) ⌋.
Let us consider an example in which the probability mass function for the depth of the recursion is geometric with parameter q: at each execution of depth i of the recursion, we proceed to depth i + 1 with probability q and terminate with probability 1 − q. From inequality (20), the optimal x satisfies

q^{x+1} ≤ N k^d (1 − qk) / (T (1 − q)).

As a result, x becomes

x = max(d, ⌊ log_q( N k^d (1 − qk) / (T (1 − q)) ) ⌋).
Up to now, we assumed that N is fixed. Since τ is a transcendental function of N, the dependency of the expected cost upon the size of a processor group N is not clear. Instead, we examine all possible values for N, calculate the expected cost from equation (3), and choose the optimal N giving the minimum cost. The complexity of this procedure is still polynomial, and in practice the search space of N can often be reduced significantly by some criteria. In the case of a geometric distribution for the depth of the recursion, the expected cost is simplified to a closed form.
In the case where the number of child functions is one (k = 1), our simplified formulas with a geometric distribution coincide with those for data-dependent iterations, except for an overhead term to detect the loop termination.
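The recursion-profile parameters can be exercised in a small sketch; the formulas are our reconstructions from this section, and τ, τ_o, and all numbers are illustrative assumptions:

```python
# Hedged sketch of the recursion profile parameters, using the formulas as
# reconstructed in this section; tau, tau_o and all numbers are our own.

def degree_of_parallelism(N, k, T, x):
    # Largest d with N * k**d <= T, also bounded by the assumed depth x.
    d = 0
    while d + 1 <= x and N * k ** (d + 1) <= T:
        d += 1
    return d

def basic_cost(N, k, d, tau):
    # R = N * tau * (k**d - 1) / (k - 1): work above the ground constructs.
    return N * tau * (k ** d - 1) / (k - 1)

def ground_length(tau, tau_o, k, x, d):
    # l_x = tau*(k**(x-d) - 1)/(k-1) + tau_o * k**(x-d), for x >= d.
    leaves = k ** (x - d)
    return tau * (leaves - 1) / (k - 1) + tau_o * leaves

k, T, N, x = 2, 8, 2, 3
d = degree_of_parallelism(N, k, T, x)
assert d == 2 and N * k ** d <= T < N * k ** (d + 1)
assert basic_cost(N, k, d, 3.0) == 18.0
assert ground_length(3.0, 1.0, k, x, d) == 5.0
```

For width k = 2, group size N = 2 and T = 8 total processors, the processor constraint yields d = 2, i.e., the four ground constructs at depth 2 run on distinct groups.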
Recall that our analysis is based on the assumption that
all nodes of the same depth in the computation tree have
the same termination condition. This assumption roughly
approximates a more realistic assumption, which we call
the independence assumption, that all nodes of the same
depth have equal probability of terminating the recursion, and that they are independent of each other. In our assumption, this probability is treated as the probability that all nodes of the same depth terminate the recursion together.
The expected number of nodes at a certain depth is the
same in both assumptions even though they describe different
behaviors. Under the independence assumption, the
shape of the profile would be the same as shown in figure 7:
the degree of parallelism d is maximized. Moreover, all recursion
processors have the same schedule length for the
ground constructs. However, the optimal schedule length
l x of the ground construct would be different. The length
l x is proportional to the number of executions of the recursion
constructs inside a ground construct. This number can
be any integer under the independence assumption, while under our assumption it belongs to the restricted subset {0, 1, 1 + k, 1 + k + k^2, ...}.
Since the probability mass function for this number is likely
to be too complicated under the independence assumption,
we sacrifice performance by choosing a sub-optimal schedule
length under a simpler assumption.
VI. Conditionals
Decision making capability is an indispensable requirement
of a programming language for general applications,
and even for signal processing applications. A dataflow
representation for an if-then-else and the local schedules of
both branches are shown in figure 2 (a) and (b).
We assume that the probability p_1 with which the "TRUE" branch (branch 1) is selected is known. The "FALSE" branch (branch 2) is selected with probability p_2 = 1 − p_1. Let t_ij be the finishing time of the local schedule of the i-th branch on the j-th processor, and let t̂_j be the finishing time on the j-th processor in the optimal profile of the conditional construct. We determine the optimal values {t̂_j} to minimize the expected runtime cost of the construct. When the i-th branch is selected, the cost becomes the execution time plus the idle time induced by the mismatch between the profile {t̂_j} and the actual finishing times {t_ij}. Therefore, the expected cost C(N) is the p_i-weighted sum of these branch costs.
It is not feasible to obtain the closed form solutions for - t j
because the max function is non-linear and discontinuous.
Instead, a numerical algorithm is developed.
1. Initially, take the maximum finish time of both branch schedules for each processor, t̂_j = max_i t_ij, according to Lee's method [10].
2. Define α_i for each branch i; initially, all α_i = 0. The variable α_i represents the excessive cost per processor over the expected cost when branch i is selected at run time. We define the bottleneck processors of branch i as the processors {j} whose reserved times t̂_j exceed t_ij by exactly α_i. For all branches {i}, repeat the next step.
3. Choose the set of bottleneck processors, Θ_i, of branch i only. If decreasing t̂_j by δ for all j ∈ Θ_i makes the variation ΔC of the expected cost negative, keep decreasing t̂_j until the set Θ_i needs to be updated. Update Θ_i and repeat step 3.
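The descent can be mimicked with a brute-force sketch. The cost model here is ours, not the paper's exact formula: pay Σ_j t̂_j up front, and stall all N_total processors by the worst per-processor overrun when the selected branch exceeds its reservation. Since the optimum of such a piecewise-linear objective lies at breakpoints of the max terms, it suffices to search the finite grid of branch finishing times:

```python
from itertools import product

# Hedged sketch with an assumed cost model (not verbatim from the paper).

def expected_cost(hat_t, t, p, n_total):
    c = sum(hat_t)                         # reserved time on all processors
    for ti, pi in zip(t, p):
        # If branch i overruns its reservation anywhere, everyone stalls.
        overrun = max(max(tij - hj, 0.0) for tij, hj in zip(ti, hat_t))
        c += pi * n_total * overrun
    return c

t = [[4.0, 2.0],      # finishing times of branch 1 on processors 0, 1
     [1.0, 3.0]]      # finishing times of branch 2
p = [0.7, 0.3]
n_total = 4

# The optimum lies at breakpoints, so search the product of branch times.
candidates = list(product(*[{t[0][j], t[1][j]} for j in range(2)]))
best = min(candidates, key=lambda h: expected_cost(h, t, p, n_total))

# The max-per-processor profile is always feasible but may over-reserve.
maximal = tuple(max(t[0][j], t[1][j]) for j in range(2))
assert expected_cost(best, t, p, n_total) <= expected_cost(maximal, t, p, n_total)
```

In this tiny instance the maximal profile happens to be optimal; with more skewed branch probabilities or more processors, trimming a bottleneck processor below the maximum reduces the expected cost.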
Now, we consider the example shown in figure 2. Suppose p_1 = 0.7. The initial profile in our algorithm is the same as Lee's profile, as shown in figure 8 (a), which
happens to be same as Loeffler's profile in this specific ex-
ample. The optimal profile determined by our algorithm is
displayed in figure 8 (b).
Fig. 8. Generation of the optimal profile for the conditional construct
in figure 2. (a) initial profile (b) optimal profile.
We generalized the proposed algorithm to the M-way branch, or case, construct. To realize an M-way branch, we prefer using a case construct to nesting if-then-else constructs. Generalization of the proposed algorithm
and proof of optimality is beyond the scope of this
paper. For the detailed discussion, refer to [17]. For a given
number of assigned processors, the proposed algorithm determines
the optimal profile. To obtain the optimal number
of assigned processors, we compute the total expected cost
for each number and choose the minimum.
VII. An Example
The proposed technique to schedule data-dependent actors
has been implemented in Ptolemy, a heterogeneous simulation and prototyping environment developed at U.C. Berkeley [15]. One of the key
objectives of Ptolemy is to allow many different computational
models to coexist in the same system. A domain is
a set of blocks that obey a common computational model.
An example of a mixed-domain system is shown in figure 9.
The synchronous dataflow (SDF) domain contains all data-independent
actors and performs compile-time scheduling.
Two branches of the conditional constructs are represented
as SDF subsystems, so their local schedules are generated
by a static scheduler. Using the local schedules of both
branches, the dynamic dataflow(DDF) domain executes the
proposed algorithm to obtain the optimal profile of the conditional
construct. The topmost SDF domain system regards
the DDF domain as a macro actor with the assumed
profile when it performs the global static scheduling.
Fig. 9. An example of a mixed-domain system. The topmost level of the system is an SDF domain. A dynamic construct (if-then-else) is in the DDF domain, which in turn contains two subsystems in the SDF domain for its branches.
We apply the proposed scheduling technique to several
synthetic examples as preliminary experiments. These experiments
do not serve as a full test or proof of generality
of the technique. However, they verify that the proposed
technique can make better scheduling decisions than other
simple but ad-hoc decisions on dynamic constructs in many
applications. The target architecture is assumed to be a
shared bus architecture with 4 processors, in which communication
can be overlapped with computation.
To test the effectiveness of the proposed technique, we
compare it with the following scheduling alternatives for
the dynamic constructs.
Method 1. Assign all processors to each dynamic construct.
Method 2. Assign only one processor to each dynamic construct.
Method 3. Apply fully dynamic scheduling, ignoring all overhead.
Method 4. Apply fully static scheduling.
Method 1 corresponds to the previous research on quasi-static scheduling by Lee [10] and by Loeffler et al. [13] for data-dependent iterations. Method 2 approximately models the situation when we implement each
dynamic construct as a single big atomic actor. To simulate
the third model, we list all possible outcomes, each of which
can be represented as a data-independent macro actor.
With each possible outcome, we replace the dynamic construct
with a data-independent macro actor and perform
fully-static scheduling. The scheduling result from Method
3 is non-realistic since it ignores all the overhead of the
fully dynamic scheduling strategy. Nonetheless, it will give
a yardstick to measure the relative performance of other
scheduling decisions. By modifying the dataflow graphs,
we may use fully static scheduling in Method 4. For a conditional
construct, we may evaluate both branches and select
one by a multiplexor actor. For a data-dependent iteration
construct, we always perform the worst case number
of iterations. For comparison, we use the average schedule
length of the program as the performance measure.
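The average schedule length can be computed directly from the profile and the distribution of cycle counts; the sketch below is illustrative only (the per-cycle overrun and all numbers are our assumptions, not the paper's example):

```python
# Illustrative sketch (numbers are ours): the average schedule length of a
# quasi-static schedule, assuming each cycle beyond the assumed number x
# lengthens the schedule by a fixed overrun per extra cycle.

def average_schedule_length(p, base, x, overrun):
    return sum(pi * (base + max(i - x, 0) * overrun)
               for i, pi in enumerate(p))

# Uniform distribution over 1..7 cycles, assumed x = 3, schedule period 66,
# hypothetical overrun of 7 time units per extra cycle.
p = [0.0] + [1.0 / 7.0] * 7
avg = average_schedule_length(p, 66.0, 3, 7.0)
```

With these assumed figures the extra cycles contribute (1+2+3+4)/7 · 7 = 10 units on average, giving an average length of 76.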
As an example, consider a For construct of data-dependent
iteration as shown in figure 10. The number
inside each actor represents the execution length. To increase
the parallelism, we pipelined the graph at the beginning
of the For construct.
The scheduling decisions to be made for the For construct are how many processors to assign to the iteration body and how many iteration cycles to schedule explicitly. We assume that the number of iteration cycles
is uniformly distributed between 1 and 7. To determine the
optimal number of assigned processors, we compare the expected
total cost as shown in table I. Since the iteration
body can utilize two processors effectively, the expected total costs of the first two columns are very close. However, the scheduler determines that assigning one processor
is slightly better. Rather than parallelizing the iteration
body, the scheduler automatically parallelizes the iteration
cycles. If we change the parameters, we may want to parallelize
the iteration body first and the iteration cycles next.
Fig. 10. An example with a For construct at the top level. The
subsystems associated with the For construct are also displayed.
The number inside an actor represents the execution length of
the actor.
The proposed technique considers the tradeoffs of parallelizing
inner loops or parallelizing outer loops in a nested
loop construct, which has been the main problem of parallelizing
compilers for sequential programs. The resulting
Gantt chart for this example is shown in figure 11.
Fig. 11. A Gantt chart display of the scheduling result over 4 processors from the proposed scheduling technique for the example in figure 10. The profile of the For construct is identified.
If the number of iteration cycles at run time is less than
or equal to 3, the schedule length of the example is the same as the schedule period, 66. If it is greater than 3, the schedule
length will increase. Therefore, the average schedule
length of the example becomes 79.9. The average schedule lengths from the other scheduling decisions are compared in table II. The proposed technique outperforms the other realistic
methods and achieves 85% of the ideal schedule length by
Method 3. In this example, assigning 4 processors to the
iteration body (Method 1) worsens the performance since
it fails to exploit the intercycle parallelism. Confining the
dynamic construct in a single actor (Method 2) gives the
worst performance as expected since it fails to exploit both
intercycle parallelism and the parallelism of the iteration
body. Since the range of the number of iteration cycles is
not big, assuming the worst case iteration (Method 4) is
not bad.
This example, however, reveals a shortcoming of the proposed
technique. If we assign 2 processors to the iteration
body to exploit the parallelism of the iteration body as well
as the intercycle parallelism, the average schedule length
becomes 77.7, which is slightly better than the scheduling
result by the proposed technique. When we calculate the
expected total cost to decide the optimal number of processors
to assign to the iteration body, we do not account
for the global effect of the decision. Since the difference
of the expected total costs between the proposed technique
and the best scheduling was not significant, as shown in table
I, this non-optimality of the proposed technique could
be anticipated. To improve the performance of the proposed
technique, we can add a heuristic that if the expected
total cost is not significantly greater than the optimal
one, we perform scheduling with that assigned number
and compare the performance with the proposed technique
to choose the best scheduling result.
The search for the assumed number of iteration cycles
for the optimal profile is not faultless either, since the proposed
technique finds a local optimum. The proposed technique
selects 3 as the assumed number of iteration cycles.
It is proved, however, that the best assumed number is
2 in this example even though the performance difference
is negligible. Although the proposed technique is not always
optimal, it is certainly better than any of the other
scheduling methods demonstrated in table II.
Experiments with other dynamic constructs as well as nested constructs have been performed, with similar results: the proposed technique outperforms the other ad hoc decisions. The resulting quasi-static schedule can be at least 10% faster than the other scheduling decisions currently in use, while being as little as 15% slower than an ideal (and highly unrealistic) fully dynamic schedule.
In a nested dynamic construct, the compile-time profile of
the inner dynamic construct affects that of the outer dynamic
construct. In general, there is a trade-off between
exploiting parallelism of the inner dynamic construct first
and that of the outer construct first. The proposed technique
resolves this conflict automatically. Refer to [17] for
detailed discussion.
Let us assess the complexity of the proposed scheme. If
the number of dynamic constructs including all nested ones
is D and the number of processors is N, the total number of profile decision steps is of order ND, i.e., O(ND). Determining the optimal profile also consumes O(ND) time. Therefore, the overall complexity is O(ND). The memory requirement is of the same order of magnitude as the number of profiles to be maintained, which is also O(ND).
VIII. Conclusion
As long as the data-dependent behavior is not dominating
in a dataflow program, the more scheduling decisions
are made at compile time the better, since we can reduce
the hardware and software overhead for scheduling at run
time. For compile-time decision of task assignment and/or
ordering, we need the static information, called profiles,
of all actors. Most heuristics for compile-time decisions
assume the static information of all tasks, or use ad-hoc
approximations. In this paper, we propose a systematic
method to decide on profiles for each dynamic construct.
We define the compile-time profile of a dynamic construct
as an assumed local schedule of the body of the dynamic
TABLE I
The expected total cost of the For construct as a function of the number of assigned processors

Number of assigned processors    1      2      3     4
Expected total cost            129.9  135.9  177.9  N/A
TABLE II
Performance comparison among several scheduling decisions

                          Proposed  Method 1  Method 2  Method 3  Method 4
Average schedule length     79.7      90.9     104.3     68.1      90
% of ideal                  0.85      0.75      0.65     1.0       0.76
construct. We define the cost of a dynamic construct and
choose its compile-time profile in order to minimize the expected
cost. The cost of a dynamic construct is the sum
of execution length of the construct and the idle time on
all processors at run-time due to the difference between
the compile-time profile and the actual run-time profile.
We discussed in detail how to compute the profile of three
kinds of common dynamic constructs: conditionals, data-dependent
iterations, and recursion.
To compute the expected cost, we require that the statistical
distribution of the dynamic behavior, for example the
distribution of the number of iteration cycles for a data-dependent
iteration, must be known or approximated at
compile-time. For the particular examples we used for ex-
periments, the performance does not degrade rapidly as
the stochastic model deviates from the actual program be-
havior, suggesting that a compiler can use fairly simple
techniques to estimate the model.
We implemented the technique in Ptolemy as a part of
a rapid prototyping environment. We illustrated the effectiveness
of the proposed technique with a synthetic example
in this paper and with many other examples in [17]. The
results are only a preliminary indication of the potential in
practical applications, but they are very promising. While
the proposed technique makes locally optimal decisions for
each dynamic construct, it is shown that the proposed technique
is effective when the amount of data dependency from
a dynamic construct is small. But, we admittedly cannot
quantify at what level the technique breaks down.
Acknowledgments
The authors would like to gratefully thank the anonymous
reviewers for their helpful suggestions. This research
is part of the Ptolemy project, which is supported by the
Advanced Research Projects Agency and the U.S. Air Force
(under the RASSP program, contract F33615-93-C-1317),
the State of California MICRO program, and the following
companies: Bell Northern Research, Cadence, Dolby, Hi-
tachi, Lucky-Goldstar, Mentor Graphics, Mitsubishi, Mo-
torola, NEC, Philips, and, Rockwell.
References
"Data Flow Languages"
"Synchronous Data Flow"
"Compile-Time Scheduling and Assignment of Dataflow Program Graphs with Data-Dependent Iteration"
"Deterministic Processor Scheduling"
"Multiprocessor Scheduling to Account for Interprocessor Communications"
"A General Approach to Mapping of Parallel Computations Upon Multiprocessor Architecture"
"Task Allocation and Scheduling Models for Multiprocessor Digital Signal Processing"
"Hierarchical Compilation of Macro Dataflow Graphs for Multiprocessors with Local Memory"
"Recurrences, Iteration, and Conditionals in Statically Scheduled Block Diagram Languages"
"Path Length Computation on Graph Models of Computations"
"The Effect of Operation Scheduling on the Performance of a Data Flow Computer"
"Hierar- chical Scheduling Systems for Parallel Architectures"
"TDFL: A Task-Level Dataflow Language"
"Ptolemy: A Framework for Simulating and Prototyping Heterogeneous Sys- tems"
"Program Partitioning for a Reconfigurable Multiprocessor System"
"Compile-Time Scheduling of Dataflow Program Graphs with Dynamic Constructs,"
Index Terms—macro actor; dynamic constructs; dataflow program graphs; profile; multiprocessor scheduling
262549 | Singular and Plural Nondeterministic Parameters. | The article defines algebraic semantics of singular (call-time-choice) and plural (run-time-choice) nondeterministic parameter passing and presents a specification language in which operations with both kinds of parameters can be defined simultaneously. Sound and complete calculi for both semantics are introduced. We study the relations between the two semantics and point out that axioms for operations with plural arguments may be considered as axiom schemata for operations with singular arguments. | Introduction
The notion of nondeterminism arises naturally in describing concurrent systems. Various
approaches to the theory and specification of such systems, for instance, CCS [16], CSP [9],
process algebras [1], event structures [26], include the phenomenon of nondeterminism.
But nondeterminism is also a natural concept in describing sequential programs, either as a
means of indicating a "don't care'' attitude as to which among a number of computational
paths will actually be utilized in a particular computation (e.g., [3]) or as a means of
increasing the level of abstraction [14,25]. The present work proceeds from the theory of
algebraic specifications [4, 27] and generalizes it so that it can be applied to describing
nondeterministic operations.
In deterministic programming the distinction between call-by-value and call-by-name
semantics of parameter passing is well known. The former corresponds to the situation
where the actual parameters to function calls are evaluated and passed as values. The latter
allows parameters which are function expressions, passed by a kind of Algol copy rule [21],
and which are evaluated whenever a need for their value arises. Thus call-by-name will
terminate in many cases when the value of a function may be determined without looking at
(some of) the actual parameters, i.e., even if these parameters are undefined. Call-by-value
will, in such cases, always lead to undefined result of the call. Nevertheless, the call-by-value
semantics is usually preferred in the actual programming languages since it leads to clearer
and more tractable programs.
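The termination contrast between the two parameter-passing disciplines can be replayed with thunks in any eager language. A minimal Python sketch of our own (divergence is approximated by a raised exception so the example stays observable):

```python
def diverges():
    # stands in for a nonterminating computation; raising an exception
    # makes the example observable without an actual infinite loop
    raise RuntimeError("no value")

def const0_by_value(x):
    # call-by-value: the argument was evaluated before the body is entered
    return 0

def const0_by_name(x_thunk):
    # call-by-name: the argument is a thunk, evaluated only on demand,
    # and this body never demands it
    return 0

# call-by-name yields a result even though the argument has no value
result_by_name = const0_by_name(lambda: diverges())

# call-by-value must evaluate the argument first, so the call never starts
try:
    result_by_value = const0_by_value(diverges())
except RuntimeError:
    result_by_value = None
```

The constant function terminates under call-by-name precisely because the undefined argument is never inspected, as described above.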
*) This work has been partially supported by the Architectural Abstraction project under NFR
(Norway), by CEC under ESPRIT-II Basic Reearch Working Group No. 6112 COMPASS, by the
US DARPA under ONR contract N00014-92-J-1928, N00014-93-1-1335 and by the US Air Force
Office of Scientific Research under Grant AFOSR-91-0354.
Following [20], we call the nondeterministic counterparts of these two notions singular
(call-by-value) and plural (call-by-name) parameter passing. Other names applied to this, or
closely related distinction, are call-time-choice vs. run-time-choice [2, 8], or inside-out (IO) vs.
outside-in (OI), which reflect the substitution order corresponding to the respective
semantics [5, 6]. In the context where one allows nondeterministic parameters the difference
between the two semantics becomes quite apparent even without looking at their
termination properties. Let us suppose that we have defined operation g(x) as "if x=0 then a
else (if x=0 then b else c)", and that we have a nondeterministic choice operation #.
returning an arbitrary element from the argument set. The singular interpretation will
satisfy the formula f: g(x) = (if x=0 then a else c), while the plural interpretation need not
satisfy this formula. For instance, under the singular interpretation g(#.{0,1}) will yield
either a or c, while the set of possible results of g(#.{0,1}) under the plural interpretation will
be {a,b,c}. (Notice that in a deterministic environment both semantics would yield the same
results.) The fact that the difference between the two semantics occurs already in very trivial
examples of terminating nondeterministic operations motivates our investigation.
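The behaviour of g can be checked by brute-force enumeration of the draws from the choice set. A small Python sketch (the function names are ours, not from the paper):

```python
from itertools import product

def g(draw1, draw2):
    # g(x) = if x=0 then a else (if x=0 then b else c),
    # with each occurrence of x replaced by its own draw
    return 'a' if draw1 == 0 else ('b' if draw2 == 0 else 'c')

def possible_results(choices, plural):
    """Set of possible results of g applied to a choice over `choices`.

    Singular (call-time-choice): one draw is shared by both occurrences.
    Plural (run-time-choice): each occurrence draws independently.
    """
    if plural:
        return {g(d1, d2) for d1, d2 in product(choices, repeat=2)}
    return {g(d, d) for d in choices}

singular = possible_results({0, 1}, plural=False)  # {'a', 'c'}
plural = possible_results({0, 1}, plural=True)     # {'a', 'b', 'c'}
```

The extra result 'b' appears only in the plural reading, where the first occurrence of x can draw 1 while the second draws 0.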
We discuss the distinction between the singular and plural passing of nondeterministic
parameters in the context of algebraic semantics focusing on the associated reasoning
systems. The singular semantics is given by multialgebras, that is, algebras where functions
are set-valued and where these values correspond to the sets of possible results returned by
nondeterministic operations. Thus, if f is a nondeterministic operation, f(t) will denote the set
of possible results returned by f when applied to t. We introduce the calculus NEQ which is
sound and complete with respect to this semantics.
Although terms may denote sets the variables in the language range only over
individuals. This is motivated by the interest in describing unique results returned by each
particular application of an operation (execution of the program). It gives us the possibility
of writing, instead of a formula F(f(t)) which expresses something about the whole set of
possible results of f(t), the formula corresponding to x ∈ f(t) ⇒ F(x), which expresses something
about each particular result x returned by f(t). Unfortunately, this poses the main problem
of reasoning in the context of nondeterminism - the lack of general substitutivity. From the
fact that h(x) is deterministic (for each x, it has a unique value) we cannot conclude that so is
h(t) for an arbitrary term t. If t is nondeterministic, h(t) may have several possible results. The
calculus NEQ is designed so that it appropriately restricts the substitution of terms for
singular variables.
Although operations in multialgebras are set-valued their carriers are usual sets. Thus
operations map individuals to sets. This is not sufficient to model plural arguments. Such
arguments can be understood as sets being passed to the operation. The fact that, under
plural interpretation, g(x) as defined above need not satisfy f results from the two
occurrences of x in the body of g. Each of these occurrences corresponds to a repeated
application of choice from the argument set x, that is, potentially, to a different value. In
order to model such operations we take as the carrier of the algebra a (subset of the) power
set - operations map sets to sets. In this way we obtain power algebra semantics. The
extension of the semantics is reflected at the syntactic level by introduction of plural
variables ranging over sets rather than over individuals. The sound and complete extension
of NEQ is obtained by adding one new rule which allows for usual substitution of arbitrary
terms for plural variables.
The structure of the paper is as follows. In sections 2-3 we introduce the language for
specifying nondeterministic operations and explain the intuition behind its main features. In
section 4 we define multialgebraic semantics for singular specifications and introduce a sound
and complete calculus for such specifications. In section 5 the semantics is generalized to
power algebras capable of modeling plural parameters, and the sound and complete extension
of the calculus is obtained by introducing one additional rule. A comparison of both
semantics in section 6 is guided by the similarity of the respective calculi. We identify the
subclasses of multimodels and power models which may serve as equivalent semantics of one
specification. We also highlight the increased complexity of the power algebra semantics
reflecting the problems with intuitive understanding of plural arguments.
Proofs of the theorems are merely indicated in this presentation. It reports some of the
results from [24] where the full proofs and other details can be found.
2. The specification language
A specification is a pair (Σ,P) where the signature Σ is a pair (S,F) of sorts S and operation
symbols F (with argument and result sorts in S). The set of terms over a signature Σ and a
variable set X is denoted by W_Σ,X. We always assume that, for every sort S, the set of ground
words of sort S, W_Σ,S, is not empty. 1
P is a set of sequents of atomic formulae written as a_1,…,a_n ⊢ e_1,…,e_m. The left hand side
(LHS) of ⊢ is called the antecedent and the right hand side (RHS) the consequent, and both are
to be understood as sets of atomic formulae (i.e., the ordering and multiplicity of the atomic
formulae do not matter). In general, we allow either antecedent or consequent to be empty,
though an empty antecedent is usually dropped in the notation. A sequent with exactly one
formula in the consequent (m=1) is called a Horn formula, and a Horn formula with empty
antecedent (n=0) is a simple formula (or a simple sequent).
1 This restriction is motivated by the fact (pointed out in [7]) that admitting empty carriers requires
additional mechanisms (explicit quantification) in order to obtain a sound logic. We conjecture that a similar
solution can be applied in our case.
All variables occurring in a sequent are implicitly universally quantified over the whole
sequent. A sequent is satisfied if, for every assignment to the variables, one of the
antecedent formulae is false or one of the consequent formulae is true (it is valid iff the
formula a_1 ∧…∧ a_n ⇒ e_1 ∨…∨ e_m is valid).
For any term (formula, set of formulae) ϕ, V[ϕ] will denote the set of variables in ϕ. If
the variable set is not mentioned explicitly, we may also write x∈V to indicate that x is a
variable.
An atomic formula in the consequent is either an equation, t=s, or an inclusion, t≺s, of
terms t,s ∈ W_Σ,X. An atomic formula in the antecedent, written t≬s, will be interpreted as non-empty
intersection of the (result) sets corresponding to t and s. For a given specification
SP=(Σ,P), L(SP) will denote the above language over the signature Σ.
The above conventions will be used throughout the paper. The distinction between the
singular and plural parameters (introduced in the section 5) will be reflected in the notation
by the superscript * : a plural variable will be denoted by x * , the set of plural variables in a
term t by V * [t], a specification with plural arguments SP * , the corresponding extension of
the language L by L * etc.
3. A note on the intuitive interpretation
Multialgebraic semantics [10, 13] interprets specifications in some form of power structures
where the (nondeterministic) operations correspond to set-valued functions. This means that
a (ground) term is interpreted as a set of possibilities - it denotes the set of possible results of
the corresponding operation. We, on the other hand, want our formulae to express necessary
i.e., facts which have to hold in every evaluation of a program (specification). This is
achieved by interpreting terms as applications of the respective operations. Every two
syntactic occurrences of a term t will refer to possibly distinct applications of t. For
nondeterministic terms this means that they may denote two distinct values.
Typically, equality is interpreted in a multialgebra as set equality [13, 23, 12]. For
instance, the formula ⊢ t=s means that the sets corresponding to all possible results of the
operations t and s are equal. This gives a model which is mathematically plausible, but which
does not correspond to our operational intuition. The (set) equality ⊢ t=s does not guarantee
that the result returned by some particular application of t will actually be equal to the result
returned by an application of s. It merely tells us that in principle (in all possible executions)
any result produced by t can also be produced by s and vice versa.
Equality in our view should be a necessary equality which must hold in every evaluation
of a program (specification). It does not correspond to set equality, but to identity of 1-element
sets. Thus the simple formula ⊢ t=s will hold in a multistructure M iff both t and s are
interpreted in M as one and the same set which, in addition, has only one element. Equality is
then a partial equivalence relation, and the terms t for which ⊢ t=t holds are exactly the
deterministic terms, denoted by D_SP,X. This last equality indicates that arbitrary two
applications of t have to return the same result.
If it is possible to produce a computation where t and s return different results - and this
is possible when they are nondeterministic - then the terms are not equal but, at best,
equivalent. They are equivalent if they are capable of returning the same results, i.e., if they
are interpreted as the same set. This may be expressed using the inclusion relation: s≺t holds
iff the set of possible results of s is included in the set of possible results of t, and s≈t if each
is included in the other.
Having introduced inclusion one might expect that a nondeterministic operation can be
specified by a series of inclusions - each defining one of its possible results. However, such a
specification gives only a "lower bound" on the admitted nondeterminism. Consider the
following example:
Example 3.1
S: { Nat }
F: 0: → Nat (zero)
   s: Nat → Nat (successor)
   _#_: Nat × Nat → Nat (binary nondeterministic choice)
P: 1. ⊢ 0=0
   2. ⊢ s(x)=s(x)
   3. 1≬0 ⊢    (As usual, we abbreviate s^n(0) as n.)
   4. ⊢ 0 ≺ 0#1    ⊢ 1 ≺ 0#1
The first two axioms make zero and successor deterministic. A limited form of negation is
present in L in the form of sequents with empty consequent. Axiom 3. makes 0 distinct from
1. Axioms 4. then make # a nondeterministic choice with 0 and 1 among its possible results.
This, however, ensures only that in every model both 0 and 1 can be returned by 0#1. In
most models all other kinds of elements may be among its possible results as well, since no
extension of the result set of 0#1 will violate the inclusions of 4. If we are satisfied with this
degree of precision, we may stop here and use only Horn formulae. All the results in the rest
of the paper apply to this special case. But to specify an "upper bound" of nondeterministic
operations we need disjunction - the multiple formulae in the consequents. Now, if we write
the axiom:
5. ⊢ 0#1=0, 0#1=1
the two occurrences of 0#1 refer to two arbitrary applications and, consequently, we obtain
that either any application of 0#1 equals 0 or else it equals 1, i.e., that # is not really
nondeterministic, but merely underspecified. Since axioms 4. require that both 0 and 1 be
among the results of 0#1, the addition of 5. will actually make the specification inconsistent.
What we are trying to say with the disjunction of 5. is that every application of 0#1
returns either 0 or 1, i.e., we need a means of identifying two occurrences of a
nondeterministic term as referring to one and the same application. This can be done by
binding both occurrences to a variable. The appropriate axiom will be:
5'. x≬0#1 ⊢ x=0, x=1
The axiom says: whenever 0#1 returns x, then x equals 0 or x equals 1. Notice that such an
interpretation presupposes that the variable x refers to a unique, individual value. Thus
bindings have the intended function only if they involve singular variables. (Plural variables,
on the other hand, will refer to sets and not individuals, and so the axiom
5''. x*≬0#1 ⊢ x*=0, x*=1
would have a completely different meaning.) The singular semantics is the most common in
the literature on algebraic semantics of nondeterministic specification languages [2, 8, 11], in
spite of the fact that it prohibits unrestricted substitution of terms for variables. Any
substitution must now be guarded by the check that the substituted term yields a unique
value, i.e., is deterministic. We return to this point in the subsection on reasoning where we
introduce a calculus which does not allow one, for instance, to conclude 0#1≬0#1 ⊢ 0#1=0,
0#1=1 from the axiom 5' (though it could be obtained from 5'').
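The contrast between axiom 5 and axiom 5' can be checked mechanically against the intended reading. A Python sketch of our own, which encodes '=' in a consequent as identity of one-element sets (as explained above) and assumes 0#1 denotes {0,1}, as forced by axioms 4:

```python
results = {0, 1}  # the result set of 0#1 forced by axioms 4

def eq(lhs, rhs):
    # '=' in a consequent: both sides denote the same 1-element set
    return lhs == rhs and len(lhs) == 1

# axiom 5:  |- 0#1=0, 0#1=1  -- unbound occurrences denote whole result sets
axiom5 = eq(results, {0}) or eq(results, {1})

# axiom 5': x bound to one returned value of 0#1  |-  x=0, x=1
axiom5prime = all(eq({x}, {0}) or eq({x}, {1}) for x in results)
```

Here axiom5 evaluates to False (so adding it yields inconsistency), while axiom5prime evaluates to True: the binding makes the disjunction range over each individual result.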
4. The singular case: semantics and calculus
This section defines the multialgebraic semantics of specifications with singular arguments
and introduces a sound and complete calculus.
4.1. Multistructures and multimodels.
Definition 4.2 (Multistructures). Let Σ be a signature. M is a Σ-multistructure if
(1) its carrier _M_ is an S-sorted set, and
(2) for every f: S_1×…×S_n → S in F, there is a corresponding function
f_M: S_1_M×…×S_n_M → P+(S_M).
A function F: A→B (i.e., a family of functions F_S: S_A → P+(S_B) for every S∈S) is a
multihomomorphism from a Σ-multistructure A to B if
(H1) for each constant symbol c∈F, F(c_A) ⊆ c_B, and
(H2) for every f: S_1×…×S_n → S in F and a_1…a_n ∈ S_1_A×…×S_n_A:
F(f_A(a_1…a_n)) ⊆ f_B(F(a_1)…F(a_n)).
If all inclusions in H1 and H2 are (set) equalities the homomorphism is tight,
otherwise it is strictly loose (or just loose).
P+(S) denotes the set of non-empty subsets of the set S. Operations applied to sets refer to
their unique pointwise extensions. Notice that for a constant c: →S, (2) indicates that c_M can
be a set of several elements of sort S.
Since multihomomorphisms are defined on individuals and not sets, they preserve
singletons and are ⊆-monotonic. We denote the class of Σ-multistructures by MStr(Σ). It has
the distinguished word structure MW_Σ defined in the obvious way, where each ground term
is interpreted as a singleton set. We will treat such singleton sets as terms rather than 1-
element sets (i.e., we do not take special pains to distinguish MW_Σ and W_Σ). MW_Σ is not an
initial Σ-structure since it is deterministic and there can exist several homomorphisms from
it to a given multistructure. We do not focus on the aspect of initiality and merely register
the useful fact from [11]:
the useful fact from [11]:
4.3. M is a (-multistructure iff, for every set of variables X and assignment b:
X#_M_, there exists a unique function b[_]: W (,X #P + (_M_) such that:
1.
2.
3. b[f(t
In particular, for X=-, there is a unique interpretation function (not a multihomomorphism)
satisfying the last two points of this definition.
As a consequence of the definition of multistructures, all operations are ⊆-monotonic,
i.e., b[s]⊆b[t] implies b[f(s)]⊆b[f(t)]. Notice also that assignment in the lemma (and in general,
whenever it is an assignment of elements from a multistructure) means assignment of
individuals, not sets.
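The unique interpretation function of the lemma is easy to prototype. A Python sketch with a toy term encoding of our own (pairs of operation name and argument list):

```python
from itertools import product

def eval_term(term, ops):
    """Interpret a term in a multistructure: each operation maps
    individuals to non-empty sets, and composite terms are evaluated
    by the pointwise (union) extension of point 3 of the lemma."""
    op, args = term
    if not args:
        return ops[op]()          # constants already denote sets
    arg_sets = [eval_term(a, ops) for a in args]
    out = set()
    for tup in product(*arg_sets):
        out |= ops[op](*tup)      # union over all tuples of individuals
    return out

ops = {
    "0":  lambda: {0},
    "1":  lambda: {1},
    "s":  lambda n: {n + 1},      # deterministic successor
    "or": lambda m, n: {m, n},    # binary nondeterministic choice
}
t = ("s", [("or", [("0", []), ("1", [])])])   # the term s(0#1)
interpretation = eval_term(t, ops)            # {1, 2}
```

The evaluation is ⊆-monotonic by construction: enlarging any argument set only enlarges the union.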
Next we define the class of multimodels of a specification.
Definition 4.4 (Satisfiability). A Σ-multistructure M satisfies an L(Σ) sequent p:
t_1≬s_1,…,t_n≬s_n ⊢ e_1,…,e_m iff for every b: X→_M_ we have: if b[t_i]∩b[s_i]≠∅ for
all i, then some e_j holds under b, where an equation t=s holds iff b[t] ≐ b[s], an inclusion
t≺s holds iff b[t]⊆b[s], and A ≐ B iff A and B are the same 1-element set.
An SP-multimodel is a Σ-multistructure which satisfies all the axioms of SP. We
denote the class of multimodels of SP by MMod(SP).
The reason for using non-empty intersection (and not set equality) as the interpretation of ≬
in the antecedents is the same as for using "elementwise" equality in the consequents. Since we
avoid set equality in the positive sense (in the consequents), the most natural negative form
seems to be the one we have chosen. For deterministic terms this is the same as equality, i.e.,
deterministic antecedents correspond exactly to the usual (deterministic) conditions. For
nondeterministic terms this reflects our interest in binding such terms: the sequent
"…, s≬t, … ⊢ …" is equivalent to "…, x≬s, x≬t, … ⊢ …". A binding "…, x≬t, … ⊢ …" is also equivalent to
the more familiar "…, x≺t, … ⊢ …", so the notation s≬t may be read as an abbreviation for the
more elaborate formula with two ≺ and a new variable x not occurring in the rest of the
sequent.
For a justification of this, as well as other choices we have made here, the reader is
referred to [24].
4.2. The calculus for singular semantics
In [24] we have introduced the calculus NEQ which is sound and complete with respect to
the class MMod(SP). Its rules are:
R1:  ⊢ x=x     (x∈V)

R2:  Γ ⊢ Δ, t_1=t_2      Γ' ⊢ Δ'
     ---------------------------------     (PARAMODULATION)
     Γ, Γ'_x^{t_1} ⊢ Δ, Δ'_x^{t_2}

R3:  Γ ⊢ Δ, t_1≺t_2      Γ' ⊢ Δ'
     ---------------------------------     (the substituted occurrences of t_2 not in a RHS of ≺)
     Γ, Γ'_{t_2}^{t_1} ⊢ Δ, Δ'_{t_2}^{t_1}

R4:  a) x≬y ⊢ x=y     b) x≬t ⊢ x≺t     (x,y∈V)

R5:  Γ ⊢ Δ, t_1 ρ t_2      Γ', t_1≬t_2 ⊢ Δ'
     ----------------------------------------     (CUT; ρ stands for either = or ≺)
     Γ, Γ' ⊢ Δ, Δ'

R6:  a)  Γ ⊢ Δ           b)  Γ ⊢ Δ
         -----------         -----------
         Γ ⊢ Δ, e            Γ, e ⊢ Δ

R7:  Γ, x≬t ⊢ Δ
     ----------------     (ELIM; x∈V−V[t], at most one occurrence of x in Γ ⊢ Δ)
     Γ_x^t ⊢ Δ_x^t

Γ_a^b denotes Γ with b substituted for a. Short comments on each of the rules may be in order.
The fact that '=' is a partial equivalence relation is expressed in R1. It applies only to
variables and is sound because all assignments assign individual values to the (singular)
variables.
R2 is a paramodulation rule allowing replacement of terms which may be deterministic
(in the case when t 1 =t 2 holds in the second assumption). In particular, it allows derivation of
the standard substitution rule when the substituted terms are deterministic, and prevents
substitution of nondeterministic terms for variables.
R3 allows "specialization" of a sequent by substituting for a term t_2 another term t_1 which
is included in t_2. The restriction that the substituted occurrences of t_2 do not
occur in the RHS of ≺ is needed to prevent, for instance, the unsound conclusion ⊢ t_3≺t_1
from the premises ⊢ t_3≺t_2 and ⊢ t_1≺t_2.
R4 and R5 express the relation between ≬ in the antecedent and the equality and
inclusion in the consequent. The axiom of standard sequent calculus, e ⊢ e (i.e., s≬t ⊢ s≺t),
does not hold in general here because the antecedent corresponds to non-empty intersection
of the result sets while the consequent corresponds to the inclusion (≺) or identity of
1-element (=) result sets. Only for deterministic terms s, t, does s≬t ⊢ s=t hold.
R5 allows us to cut both ⊢ s=t and ⊢ s≺t with s≬t ⊢ Δ.
R7 eliminates redundant bindings, namely those that bind an application of a term
occurring at most once in the rest of the sequent.
We will write P ⊢_CAL p to indicate that p is provable from P with the calculus CAL.
When we need to write the sequent p explicitly this notation sometimes becomes awkward,
and so we will then optionally write the sequent out in displayed form.
The counterpart of the soundness/completeness of the equational calculus is [24]:
Theorem 4.5. NEQ is sound and complete wrt. MMod(SP): P ⊢_NEQ p iff MMod(SP) ⊨ p.
Proof idea:
Soundness is proved by induction on the length of the proof P # NEQ p. The proof of
the completeness part is a standard, albeit rather involved, Henkin-style argument.
The axiom set P of SP is extended by adding all L(SP) formulae p which are
consistent with P (and the previously added formulae). If the addition of p leads to
inconsistency, one adds the negation of p. Since empty consequents provide only a
restricted form of negation, the general negation operation is defined as a set of
formulae over the original signature extended with new constants. One shows then
that the construction yields a consistent specification with a deterministic basis from
which a model can be constructed.
We also register an easy lemma stating that set-equivalent terms, t≈s, satisfy the same formulae:
Lemma 4.6. t≈s iff, for any sequent p and variable z: P ⊢_NEQ p_z^t iff P ⊢_NEQ p_z^s.
5. The plural case: semantics and calculus
The singular semantics for passing nondeterminate arguments is the most common notion to
be found in the literature. Nevertheless, the plural semantics has also received some
attention. In the denotational tradition most approaches considered both possibilities [18,
19, 20, 22]. Engelfriet and Schmidt gave a detailed study of both - in their language, IO and
OI - semantics based on tree languages [5], and continuous algebras of relations and power
sets [6]. The unified algebras of Mosses and the rewriting logic of Meseguer [15] represent
other algebraic approaches distinguishing these aspects.
We will define the semantics for specifications where operations may have both singular
and plural arguments. The next subsection gives the necessary extension of the calculus
NEQ to handle this generalized situation.
5.1. Power structures and power models
Singular arguments (such as the variables in L) have the usual algebraic property that they
refer to a unique value. This reflects the fact that they are evaluated at the moment of
substitution and the result is passed to the following computation. Plural arguments, on the
other hand, are best understood as textual parameters. They are not passed as a single value,
but every occurrence of the formal parameter denotes a distinct application of the operation.
We will allow both singular and plural parameter passing in one specification. The
corresponding semantic distinction is between power set functions which are merely
⊆-monotonic and those which are also ∪-additive.
In the language we merely introduce a notational device for distinguishing the singular
and plural arguments. We allow annotating the sorts in the profiles of the operation by a
superscript, like S * , to indicate that an argument is plural.
Furthermore, we partition the set of variables into two disjoint subsets of singular, X,
and plural, X * , variables. x and x * are to be understood as distinct symbols. We will say that
an operation f is singular in the i-th argument iff the i-th argument (in its signature) is
singular. The specification language extended with such annotations of the signatures will be
referred to as L * .
These are the only extensions of the language we need. We may, optionally, use
superscripts t * at any (sub)term to indicate that it is passed as a plural argument. The
outermost applications, e.g. f in f(.), are always to be understood plurally, and no
superscripting will be used at such places.
Definition 5.7. Let Σ be an L*-signature. A is a Σ-power structure, A∈PStr(Σ), iff A is a
(deterministic) structure such that:
1. for every sort S, the carrier S_A is a (subset of the) power set P+(S⁻) of some basis
set S⁻, and
2. for every f: S_1×…×S_n → S in Σ, f_A is a ⊆-monotonic function S_1_A×…×S_n_A → S_A such
that, if the i-th argument is S_i (singular) then f_A is singular in the i-th argument.
The singularity in the i-th argument in this definition refers not to the syntactic notion but
to its semantic counterpart:
Definition 5.8. A function f_A: S_1_A×…×S_n_A → S_A in a power structure A is singular in the
i-th argument iff it is ∪-additive in the i-th argument, i.e., iff for all X_i ⊆ S_i_A and all
x_k ∈ S_k_A (for k≠i):
f_A(x_1,…,∪X_i,…,x_n) = ∪ { f_A(x_1,…,y,…,x_n) : y ∈ X_i }.
Thus, the definition of power structures requires that syntactic singularity be modeled by
the semantic one.
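On finite carriers, the ∪-additivity required of singular argument positions can be tested exhaustively. A Python sketch (g_singular and g_plural are our encodings of the two readings of the operation g from the introduction):

```python
from itertools import combinations

def nonempty_subsets(base):
    return [set(c) for r in range(1, len(base) + 1)
            for c in combinations(base, r)]

def is_additive(f, base):
    """Check union-additivity in the (only) argument:
    f(A) == union of f({a}) for a in A, for every non-empty A."""
    for A in nonempty_subsets(base):
        union = set().union(*(f({a}) for a in A))
        if f(A) != union:
            return False
    return True

# pointwise (singular) reading of g: one element drawn, used everywhere
def g_singular(A):
    return {('a' if x == 0 else 'c') for x in A}

# plural reading of g: the two occurrences of the argument may draw
# different elements of A, so 'b' becomes reachable
def g_plural(A):
    return {('a' if x1 == 0 else ('b' if x2 == 0 else 'c'))
            for x1 in A for x2 in A}

base = {0, 1}
```

Here g_singular is additive (hence a legitimate interpretation of a singular argument), while g_plural fails additivity exactly on the set {0,1}, where it adds the result 'b'.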
Note the unorthodox point in the definition - we do not require the carrier to be the
whole power set, but allow it to be a subset of some power set. Usually one assumes a
primitive nondeterministic operation with the predefined semantics as set union. Then all
finite subsets are needed for the interpretation of this primitive operator. Also, the join
operation (under the set inclusion as partial order) corresponds exactly to set union only if
all sets are present (see example 6.18). None of these assumptions seem necessary.
Consequently, we do not assume any predefined (choice) operation but, instead, give the
user means of specifying any nondeterministic operation (including choice) directly.
Let Σ be a signature, A a Σ-power structure, X a set of singular and X* a set of plural
variables, and b an assignment X∪X* → _A_ such that for all x∈X, b(x) is a singleton. (Saying
"assignment" we will from now on mean only assignments satisfying this last condition.)
Then, every term t(x,x*) ∈ W_Σ,X,X* has a unique set interpretation b[t(x,x*)] in A defined as
t_A(b(x),b(x*)).
Definition 5.9 (Satisfiability). Let A be a Σ-power structure and p: t_1≬s_1,…,t_n≬s_n ⊢ e_1,…,e_m
be a sequent over L*(Σ,X,X*). A satisfies p, A⊨p, iff for every assignment b:
X∪X* → _A_, we have that: if b[t_i]∩b[s_i]≠∅ for all i, then some e_j holds under b.
A is a power model of the specification SP=(Σ,P), A∈PMod(SP), iff A∈PStr(Σ) and A
satisfies all axioms from P.
Except for the change in the notion of an assignment, this is identical to definition 4.4,
which is the reason for retaining the same notation for the satisfiability relation.
5.2. The calculus for plural parameters
The calculus NEQ is extended with one additional rule:
R8:  Γ ⊢ Δ
     --------------------
     Γ_{x*}^t ⊢ Δ_{x*}^t
Rules R1-R7 remain unchanged, but now all terms t_i belong to W_Σ,X,X*. In particular, any t_i
may be a plural variable. We let NEQ* denote the calculus NEQ+R8.
The new rule R8 expresses the semantics of plural variables. It allows us to substitute an
arbitrary term t for a plural variable x*. Taking t to be a singular variable x, we can thus
exchange plural variables in a provable sequent p with singular ones. The opposite is, in
general, not possible because the corresponding rule applies only to singular variables. For
instance, a plural variable x* will satisfy ⊢ x*≺x*, but this is not sufficient for performing a
general substitution for a singular variable. The main result concerning PMod and NEQ* is:
Theorem 5.10. For any L*-specification SP and L*(SP) sequent p: P ⊢_NEQ* p iff PMod(SP) ⊨ p.
Proof idea:
The proof is a straightforward extension of the proof of theorem 4.5.
6. Comparison
Since plural and singular semantics are certainly not one and the same thing, it may seem
surprising that essentially the same calculus can be used for reasoning about both. One
would perhaps expect that PMod, being a richer class than MMod, will satisfy fewer formulae
than the latter, and that some additional restrictions of the calculus would be needed to
reflect the increased generality of the model class. In this section we describe precisely the
relation between the L and L * specifications (6.1) and emphasize some points of difference
(6.2).
6.1. The "equivalence" of both semantics
The following example illustrates a strong sense of equivalence of L and L * .
Example 6.11
Consider the following plural definition:
⊢ f(x*) ≺ if x*=x* then 0 else 1
It is "equivalent" to the collection of definitions
⊢ f(t) ≺ if t=t then 0 else 1
for all terms t.
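The instantiation illustrated here can be sketched by enumerating ground terms and substituting them for the plural variable. A Python sketch of our own ('<' stands in for the inclusion sign, and the term syntax is ours):

```python
from itertools import count, islice

def ground_terms():
    """Enumerate ground terms of sort Nat: 0, s(0), s(s(0)), ..."""
    for n in count():
        t = "0"
        for _ in range(n):
            t = "s({})".format(t)
        yield t

# the plural axiom, with a placeholder marking the plural variable x*
plural_axiom = "f({x}) < if {x}={x} then 0 else 1"

# the singular axiom set replaces x* by every term; first three instances
instances = [plural_axiom.format(x=t) for t in islice(ground_terms(), 3)]
```

Each instance keeps all occurrences of the plural variable textually identical, which is exactly what makes a plural axiom behave as an axiom schema for singular operations.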
In the rest of this section we will clarify the meaning of this "equivalence".
Since the partial order of functions from a set A to the power set of a set B is isomorphic
to the partial order of additive (and strict, if we take P (all subsets) instead of P+) functions
from the power set of A to the power set of B, [A→P(B)] ≅ [P(A)→P(B)], we may consider
every multistructure A to be a power structure A* by taking _A*_ = P+(_A_) and extending all
operations in A pointwise. We then have the obvious
Lemma 6.12. Let SP be a singular specification (i.e., all operations are singular in all
arguments), let A∈MStr(SP), and p be a sequent in L(SP). Then A⊨p iff A*⊨p, and so
A∈MMod(SP) iff A*∈PMod(SP).
Call an L * sequent p p-ground (for plurally ground) if it does not contain any plural
variables.
Theorem 6.13. Let SP* = (Σ*,P*) be an L* specification. There exists a (usually infinite)
specification SP = (Σ,P) such that
1. every p-ground sequent of L*(SP*) is a sequent of L(SP), and
2. for any p-ground p∈L*(SP*): PMod(SP*)⊨p iff MMod(SP)⊨p.
Proof:
Let Σ be Σ* with all "*" symbols removed. This makes 1. true: any p-ground p as in 2.
is then a sequent over the language L(Σ,X).
The axioms P are obtained from P* as in the example 6.11: for every p*∈P* with
plural variables x_1*,…,x_n*, P contains all sequents obtained from p* by substituting
arbitrary terms of the appropriate sorts for these plural variables.
Obviously, for any p∈L(SP), if P ⊢_NEQ p then P* ⊢_NEQ* p. Conversely, if P* ⊢_NEQ* p
then the proof can be simulated in NEQ. Let p'(x*) be the last sequent used in the
NEQ*-proof which contains plural variables x*, and let the sequent p' be the next one,
obtained by R8. Build the analogous NEQ-proof tree with all plural variables replaced
by the terms which occupy their place in p'. The leaves of this tree will be instances
of the P* axioms with plural variables replaced by the appropriate terms, and all such
axioms are in P.
Then soundness and completeness of NEQ and NEQ* imply the conclusion of the
theorem.
We now ask whether, or under which conditions, the classes PMod and MMod are
interchangeable as the models of a specification. Let SP*, SP be as in the theorem. The one
way transition is trivial. Axioms of SP are p-ground, so PMod(SP*) will satisfy all these axioms
by the theorem. The subclass of PMod(SP*) where, for every P∈PMod(SP*), all
operations are singular, will yield a subclass of MMod(SP).
For the other direction, we have to observe that the restriction to p-ground sequents in
the theorem is crucial, because plural variables range over arbitrary - also undenotable -
sets. Let MMod*(SP) denote the class of power structures obtained as in lemma 6.12. It is not
necessarily the case that MMod*(SP) ⊨ P*, as the following argument illustrates.
Example 6.14
Let M*∈MMod*(SP) have an infinite carrier, let p*∈P* be t_1≬s_1,…,t_n≬s_n ⊢ e_1,…,e_m,
and let b: X∪X* → _M*_ be an assignment such that b(x*) = {m_1,…,m_l,…} is a set which is
not denoted by any term in W_Σ,X. Let b_l be an assignment equal to b except that
b_l(x*) = {m_l}, i.e., b = ∪_l b_l. Then
(a) M* satisfies p* under b, where b[t] = ∪_l b_l[t],
(b) M* satisfies p* under every b_l,
since operations in M* are defined by pointwise extension. M*∈MMod*(SP) implies
that, for all l, (b) holds.
But (b) does not necessarily imply (a). In particular, even if for all l, all intersections
in the antecedent of (b) are empty, those in (a) may be non-empty. So we are not
guaranteed that M*∈PMod(SP*).
Thus, the intuition that the multimodels are contained in the power models is not quite
correct. To ensure that no undenotable sets from M* can be assigned to the plural variables,
we redefine the lifting operator *: MMod(SP) → PMod(SP) from 6.12.
Definition 6.15. Given a singular specification SP, and M∈MMod(SP), we denote by
-M the following power structure:
1) the carrier _-M_ is such that
a) for every n∈_M_: {n}∈_-M_,
b) for every m∈_-M_ there exist a t∈W_Σ,X and an assignment a: X→_M_ such
that m = a[t];
2) the operations in -M can be then defined by: f(m_1,…,m_n) = ∪{ f_M(k_1,…,k_n) : k_i∈m_i }.
Then, for any assignment b: X* → _-M_ there exists an assignment u: X* → W_Σ,X (1b), and an
assignment a: X→_M_ (1a) such that b(x*) = a[u(x*)], i.e., such that the following diagram
commutes:

    x*  --b-->  _-M_
      \          ^
       u \      / a[_]
          v    /
          W_Σ,X
Since M#MMod(SP) it satisfies all the axioms P obtained from P * and the commutativity of
the diagram gives us the second part of:
Corollary 6.16. Let SP * and SP be as in theorem 6.13. Then:
The corollary makes precise the claim that the class of power models of a plural specification
SP * may be seen as a class of multimodels of some singular specification SP, and vice versa.
The reasoning about both semantics is essentially the same because the only difference
concerns the (arbitrary) undenotable sets which can be referred to by plural variables.
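The operational difference between the two semantics can be made concrete. The following sketch is an illustration only (the term, operation and function names are assumptions, not taken from the specification language of the paper): it evaluates a term with two occurrences of a nondeterministic parameter under call-time (singular) and run-time (plural) choice.

```python
# Sketch: call-time (singular) vs run-time (plural) parameter passing for a
# nondeterministic argument, read as a set of possible values.

def eval_plural(f, arg_set):
    """Run-time choice: each occurrence of the parameter may pick a
    different element, so f is extended pointwise over all combinations."""
    return {f(x, y) for x in arg_set for y in arg_set}

def eval_singular(f, arg_set):
    """Call-time choice: the argument is fixed to one element before the
    call, so both occurrences see the same value."""
    return {f(x, x) for x in arg_set}

diff = lambda x, y: x - y      # a term with two occurrences of its parameter
choice = {1, 2, 3}             # a nondeterministic argument

assert eval_singular(diff, choice) == {0}
assert eval_plural(diff, choice) == {-2, -1, 0, 1, 2}
```

Under the singular reading the term "x − x" is deterministically 0, while the plural reading yields the whole set of pairwise differences.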
6.2. Plural specification of choice
Plural variables provide access to arbitrary sets. In the following example we attempt to
utilize this fact to give a more concise form to the specification of choice.
Example 6.17
The specification
S: { S }
F: { ∇._ : S * → S }
P: { ⊢ ∇.x * ≺ x * }
defines ∇. as the choice operator: for any argument t, ∇.t is capable of returning
any element belonging to the set interpreting t.
The specification may seem plausible but there are several difficulties. Obviously, such a
choice operation would be redundant in any specification, since the axiom makes ∇.t
observationally equivalent to t, and lemma 4.6 allows us to remove any occurrences of ∇.
from the (derivable) formulae. Furthermore, observe how such a specification confuses the
issue of nondeterministic choice. Choice is supposed to take a set as an argument and return
one element from the set, or, perhaps, to convert an argument of type "set" to a result of type
"individual". This is the intention of writing the specification above. But power algebras
model all operations as functions between power sets and such a "conversion" simply does
not make sense. The only points where conversion of a set to an individual takes place is
when a term is passed as a singular argument to another operation. If we have an operation
with a singular argument f: S → S, then f(t) will make (implicitly) the choice from t.
This might be particularly confusing because one tends to think of plural arguments as
sets and mix up the semantic sets (i.e., the elements of the carrier of a power algebra) and the
syntactic ones (as expressed by the profiles of the operations in the signature). As a matter
of fact, the above specification does not at all express the intention of choosing an element
from the set. In order to do that it would have to give choice the signature Set(S) → S.
Semantically, this would be then a function from P + (Set(S)) to P + (S). Assuming that
semantics of Set(S) will somehow correspond to the power set construction, this makes
things rather complicated, forcing us to work with a power set of a power set. Furthermore,
since Set(S) and S are different sorts we cannot let the same variable range over both as was
done in the example above.
The above example and remarks illustrate some of the problems with the intuitive
understanding of plural parameters. Power algebras - needed for modelling such parameters
significantly complicate the model of nondeterminism as compared to multialgebras.
On the other hand, plural variables allow us to specify the "upper bound" of
nondeterministic choice without using disjunction. The choice operation can be specified as
the join which, under the partial ordering ≺ interpreted as set inclusion, will correspond to
set union (cf. [17]).
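Read over sets, this idea can be checked mechanically. The following sketch is an assumption-laden illustration (the ordering ≺ is read as set inclusion and binary choice as union; identifier names are not from the source): it verifies, on all nonempty subsets of a small carrier, the two join properties that the specification below axiomatizes.

```python
# Sketch: binary choice as the join (least upper bound) with respect to
# set inclusion, checked exhaustively on a three-element carrier.
from itertools import combinations

carrier = (1, 2, 3)
subsets = [frozenset(c) for r in range(1, len(carrier) + 1)
           for c in combinations(carrier, r)]

def join(a, b):          # binary choice read as set union
    return a | b

for a in subsets:
    for b in subsets:
        j = join(a, b)
        assert a <= j and b <= j          # both arguments lie below the join
        for z in subsets:
            if a <= z and b <= z:
                assert j <= z             # minimality: the join is the lub
```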
Example 6.18
The following specification makes binary choice the join operation wrt. ≺ :
S: { S }
F: { _∪_ : S×S → S }
P: { 1. ⊢ x * ≺ x * ∪y * , ⊢ y * ≺ x * ∪y *
2. x≺z * , y≺z * ⊢ x∪y ≺ z * }
Axiom 2, although using singular variables x, y, does specify the minimality of ∪ with
respect to all terms. (Notice that the axiom x * ≺z * , y * ≺z * ⊢ x * ∪y * ≺ z * would have a different,
and in this context unintended, meaning.) We can show that whenever ⊢ t≺p and ⊢ s≺p
hold (for arbitrary terms) then so does ⊢ t∪s≺p.
(Proof tree omitted: instantiate axiom 2 with z * := p and combine it, by cut, with the
premises ⊢ t≺p and ⊢ s≺p.)
Violating our formalism a bit, we may say that the above proof shows the validity of the
formula stating the expected minimality of join: t≺p, s≺p ⊢ t∪s≺p.
Thus, in any model of the specification from 6.18, ∪ will be a join. It is then natural to
consider ∪ as the basic (primitive) operation used for defining other nondeterministic
operations. Observe also that in order to ensure that join is the same as set union, we have
to require the presence of all (finite) subsets in the carrier of the model. For instance, the
power structure A with the carrier
{ {1},{2},{3},{1,2,3} }
and ∪ A defined by x ∪ A x = x and x ∪ A y = {1,2,3} for x ≠ y
will be a model of the specification although ∪ A is not the same as set union.
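This closing example can also be verified exhaustively. The sketch below assumes the intended definition of ∪ A is x ∪ A y = {1,2,3} whenever x ≠ y (and x ∪ A x = x), which is forced on this carrier since {1,2,3} is the only element containing two distinct singletons; under that assumption both join axioms hold, yet ∪ A differs from set union.

```python
# Sketch: the carrier lacking the two-element subsets admits a join that
# satisfies the axioms of 6.18 but is not set union (assumed definition).

carrier = [frozenset(s) for s in ({1}, {2}, {3}, {1, 2, 3})]
top = frozenset({1, 2, 3})

def join_A(x, y):
    return x if x == y else top

for x in carrier:
    for y in carrier:
        j = join_A(x, y)
        assert x <= j and y <= j          # upper bound axiom
        for z in carrier:
            if x <= z and y <= z:
                assert j <= z             # minimality within the carrier
# ... but it is not set union: {1} join_A {2} = {1,2,3}, not {1,2}
assert join_A(carrier[0], carrier[1]) != carrier[0] | carrier[1]
```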
7. Conclusion
We have defined the algebraic semantics for singular (call-time-choice) and plural (run-time-
choice) passing of nondeterministic parameters. One of the central results reported in the
paper is soundness and completeness of two new reasoning systems, NEQ and NEQ * ,
respectively, for singular and plural semantics. The plural calculus NEQ * is a minimal
extension of NEQ which merely allows unrestricted substitution for plural variables. This
indicates a close relationship between the two semantics. We have shown that plural
specifications have equivalent (modulo undenotable sets) singular formulations if one
considers the plural axioms as singular axiom schemata.
Acknowledgments
We are grateful to Manfred Broy for pointing out the inadequacy of our original notation and
to Peter D. Mosses for the observation that in the presence of plural variables choice may be
specified as join with Horn formulae.
--R
"Algebra of communicating processes"
"Nondeterministic call by need is neither lazy nor by name"
A Discipline of Programming
Fundamentals of Algebraic Specification
"IO and OI. 1"
"IO and OI. 2"
"Completeness of Many-Sorted Equational Logic"
"The semantics of call-by-value and call-by-name in a nondeterministic environment"
Nondeterminism in Algebraic Specifications and Algebraic Programs
"Rewriting with a Nondeterministic Choice Operator"
Towards a theory of abstract data types
"An Abstract Axiomatization of Pointer Types"
"Conditional rewriting logic as a unified model of concurrency"
Calculi for Communicating Systems
"Unified Algebras and Institutions"
Introducing Girard's quantitative domains
"Domains"
"An axiomatic treatment of ALGOL 68 routines"
"Power domains"
"Nondeterminism in Abstract Data Types"
Algebraic Specifications of Nondeterminism
"An introduction to event structures"
"Algebraic Specification"
--TR | many-sorted algebra;sequent calculus;nondeterminism;algebraic specification |
262588 | Adaptive Multilevel Techniques for Mixed Finite Element Discretizations of Elliptic Boundary Value Problems. | We consider mixed finite element discretizations of linear second-order elliptic boundary value problems with respect to an adaptively generated hierarchy of possibly highly nonuniform simplicial triangulations. By a well-known postprocessing technique the discrete problem is equivalent to a modified nonconforming discretization which is solved by preconditioned CG iterations using a multilevel preconditioner in the spirit of Bramble, Pasciak, and Xu designed for standard nonconforming approximations. Local refinement of the triangulations is based on an a posteriori error estimator which can be easily derived from superconvergence results. The performance of the preconditioner and the error estimator is illustrated by several numerical examples. | Introduction
.
In this work, we are concerned with adaptive multilevel techniques for the efficient
solution of mixed finite element discretizations of linear second order
elliptic boundary value problems. In recent years, mixed finite element methods
have been increasingly used in applications, in particular for such problems
where instead of the primal variable its gradient is of major interest. As examples
we mention the flux in stationary flow problems or neutron diffusion
and the current in semiconductor device simulation (cf. e.g. [4], [13], [14], [22],
[27], [36], [42] and [44]). An excellent treatment of mixed methods and further
references can be found in the monograph of Brezzi and Fortin [12].
Mixed discretization give rise to linear systems associated with saddle point
problems whose characteristic feature is a symmetric but indefinite coefficient
matrix. Since the systems typically become large for discretized partial differential
equations, there is a need for fast iterative solvers. We note that preconditioned
iterative methods for saddle point problems have been considered by
Bank, Welfert and Yserentant [8] based on a modification on Uzawa's method
leading to an outer/inner iterative scheme and by Rusten and Winther [43] relying
on the minimum residual method. Moreover, there are several approaches
using domain decomposition techniques and related multilevel Schwarz iterations
(cf. e.g. Cowsar [15], Ewing and Wang [23, 24, 25], Mathew [32, 33]
and Vassilevski and Wang [46]). A further important aspect is to increase efficiency
by using adaptively generated triangulations. In contrast to the existing
concepts for standard conforming finite element discretizations as realized for
example in the finite element codes PLTMG [5] and KASKADE [19, 20], not
much work has been done concerning local refinement of the triangulations
in mixed discretizations. There is some work by Ewing et al. [21] in case
of quadrilateral mixed elements but the emphasis is more on the appropriate
treatment of the slave nodes then on efficient and reliable indicators for local
refinement. It is the purpose of this paper to develop a fully adaptive algorithm
for mixed discretizations based on the lowest order Raviart-Thomas elements
featuring a multilevel iterative solver and an a posteriori error estimator as
indicator for local refinement. The paper is organized as follows:
In section 2 we will present the mixed discretization and a postprocessing
technique due to Fraeijs de Veubeke [26] and Arnold and Brezzi [1]. This technique
is based on the elimination of the continuity constraints for the normal
components of the flux on the interelement boundaries from the conforming
Raviart-Thomas ansatz space. Instead, the continuity constraints are taken
care of by appropriate Lagrangian multipliers resulting in an extended saddle
point problem. Static condensation of the flux leads to a linear system which
is equivalent to a modified nonconforming approach involving the lowest order
Crouzeix-Raviart elements augmented by cubic bubble functions. Section 3 is
devoted to the numerical solution of that nonconforming discretization by a
multilevel preconditioned cg-iteration using a BPX-type preconditioner. This
preconditioner has been designed by the authors [30, 49] for standard non-conforming
approaches and is closely related to that of Oswald [39]. By an
application of Nepomnyaschikh's fictitious domain lemma [34, 35] it can be
verified that the spectral condition number of the preconditioned stiffness matrix
behaves like O(1). In section 4 we present an a posteriori error estimator
in terms of the L 2 -norm which can be easily derived from a superconvergence
result for mixed discretizations due to Arnold and Brezzi [1]. It will be shown
that the error estimator is equivalent to a weighted sum of the squares of
the jumps of the approximation of the primal variable across the interelement
boundaries. Finally, in section 5 some numerical results are given illustrating
both the performance of the preconditioner and the error estimator.
2 Mixed discretization and postprocessing.
We consider linear, second order elliptic boundary value problems of the form

−div( a∇u ) + b u = f in Ω , u = 0 on Γ , (2.1)

where Ω stands for a bounded, polygonal domain in the Euclidean space IR 2
with boundary Γ = ∂Ω and f is a given function in L 2 (Ω). We further assume that
a = (a ij ) 2 i,j=1 is a symmetric 2×2 matrix-valued function with a ij ∈ L ∞ (Ω)
and b is a function in L ∞ (Ω) satisfying

α 0 |ξ| 2 ≤ ξ T a(x) ξ ≤ α 1 |ξ| 2 , ξ ∈ IR 2 , 0 ≤ b(x) ≤ β 1 , (2.2)

for almost all x ∈ Ω. We note that only for simplicity we have chosen homogeneous
Dirichlet boundary conditions in (2.1). Other boundary conditions of
Neumann type or mixed boundary conditions can be treated as well.
the Hilbert space
ae
oe
and the flux
as an additional unknown, the standard mixed formulation of (2.1) is given as
follows:
Find (j; u) 2 H(div; \Omega\Gamma \Theta L
2(\Omega\Gamma such that
where the bilinear forms a : H(div; \Omega\Gamma \Theta
are given by
R
\Omega
R
\Omega
divq
R
\Omega
bu
and (\Delta; \Delta) 0 stands for the usual L 2 -inner product. Note that under the above
assumption on the data of the problem the existence and uniqueness of a
solution to (2.3) is well established (cf. e.g. [12]). For the mixed discretization
of (2.3) we suppose that a regular simplicial triangulation T h of Ω is given. In
particular, for an element K ∈ T h we refer to e i , 1 ≤ i ≤ 3, as its edges, and
we denote by E h the set of edges of T h and by E 0 h and E Γ h the
subsets of interior and boundary edges, respectively. Further, for D ⊆ Ω we
refer to |D| as the measure of D and we denote by P k (D), k ≥ 0, the linear
space of polynomials of degree ≤ k on D. Then, a conforming approximation
of the flux space H(div;Ω) is given by V h := RT 0 (Ω;T h ) :=
{ q h ∈ H(div;Ω) : q h | K ∈ RT 0 (K), K ∈ T h },
where RT 0 (K) stands for the lowest order Raviart-Thomas element

RT 0 (K) := { q : q(x) = α + β x , α ∈ IR 2 , β ∈ IR }.

Note that any q h ∈ RT 0 (K) is uniquely determined by its normal components
n e i · q h on the edges e i , 1 ≤ i ≤ 3, where n e i denotes the outer normal
vector of K. In particular, the conformity of the approximation is guaranteed
by specifying the basis in such a way that continuity of the normal components
is satisfied across interelement boundaries:

n e · q h | K in = n e · q h | K out , e ∈ E 0 h . (2.4)

Consequently, we have dim V h = card E h . With W h := { v h ∈ L 2 (Ω) : v h | K ∈ P 0 (K),
K ∈ T h }, the standard mixed discretization of (2.3) is given by:
Find (j h , u h ) ∈ V h × W h such that

a(j h , q h ) + b(q h , u h ) = 0 , q h ∈ V h ,
b(j h , v h ) + c(u h , v h ) = (f, v h ) 0 , v h ∈ W h . (2.5)
For D ⊆ Ω we denote by (·,·) k,D , k ≥ 0, the standard inner products and
by ‖·‖ k,D the associated norms on the Sobolev spaces H k (D) and H(div;D),
respectively. For simplicity, the lower index D will be omitted if D = Ω.
It is well known that, assuming u ∈ H 2 (Ω), the following a
priori error estimates hold true

‖u − u h ‖ 0 + ‖j − j h ‖ 0 ≤ C h ( ‖u‖ 1 + ‖j‖ 1 ),

where h stands as usual for the maximum diameter of the elements of T h and
C is a positive constant independent of h, u and j (cf. e.g. [1]; Thm. 1.1).
We further observe that the algebraic formulation of (2.5) gives rise to a linear
system with a coefficient matrix of saddle point structure
which is symmetric but indefinite. There exist several efficient iterative solvers
for such systems, for example those proposed by Bank et al. [8], Cowsar [15],
Ewing and Wang [23, 24, 25], Mathew [32], Rusten and Winther [43] and
Vassilevski and Wang [46]. However, we will follow an idea suggested by
Fraeijs de Veubeke [26] and further analyzed by Arnold and Brezzi in [1] (cf.
also [12]). Eliminating the continuity constraints (2.4) from V h results in the
nonconforming Raviart-Thomas space -
ae
oe
Since there are now two basic vector fields associated with each e
h , we
have -
h . Instead, the continuity constraints are taken
care of by Lagrangian multipliers living in M h := M 0
and
Then the nonconforming mixed discretization of (2.3) is to find (j h ; u
h \Theta W h \Theta M h such that
are given
by
R
R
R
As shown in [1] the above multiplier technique has two significant advantages.
The first one is some sort of a superconvergence result concerning the approximation
of the solution u in (2.1) in the L 2 -norm while the second one is
related to the specific structure of (2.7) and has an important impact on the
efficiency of the solution process. To begin with the first one we denote by Π h
the L 2 -projection onto M h . Then it is easy to see that there exists a unique
function -u h , piecewise linear and augmented by a cubic bubble on each K ∈ T h ,
whose averages over the interior edges coincide with λ h and whose averages over
the elements coincide with u h (cf. [1] Lemma 2.1). The function -u h represents
a nonconforming interpolation of λ h which can be shown to provide a more accurate
approximation of u in the L 2 -norm. In particular, if u ∈ H 2 (Ω) ∩ H 1 0 (Ω), then
there exists a constant c > 0 independent of h, u and j such that

‖u − -u h ‖ 0 ≤ c h 2 ‖u‖ 2 (2.9)
(cf. [12] Theorem 3.1, Chap. 5). The preceding result will be used for the
construction of a local a posteriori error estimator to be developed in Section
4.
As far as the efficient solution of (2.7) is concerned we note that the algebraic
formulation leads to a linear system whose coefficient matrix has a leading block
-A coupling the flux unknowns. In particular, -A stands for a block-diagonal matrix,
each block being a 3×3 matrix corresponding to an element K ∈ T h . Hence, -A is
easily invertible, which suggests block elimination of the unknown flux (also known
as static condensation), resulting in a 2×2 block system with a symmetric, positive
definite coefficient matrix. This linear system is equivalent to a modified nonconforming
approximation involving the lowest order Crouzeix-Raviart elements
augmented by cubic bubble functions. Denoting by m e the midpoint of an
edge e ∈ E h , we set

CR h := { v h ∈ L 2 (Ω) : v h | K ∈ P 1 (K), K ∈ T h , v h continuous at m e , e ∈ E 0 h ,
v h (m e ) = 0, e ∈ E Γ h },
B h := span { b K : K ∈ T h } , b K := λ K 1 λ K 2 λ K 3 (the cubic bubble on K),
N h := CR h ⊕ B h .

Note that dim CR h = card E 0 h and dim N h = card E 0 h + card T h .
Further, we denote by P h and -P c the L 2 -projections onto W h and -V h ,
respectively, the latter with respect to the weighted L 2 -inner product (·,·) 0,c
with weight c := a −1 . As shown in [1] (Lemma 2.3 and Lemma 2.4), there exists a
unique Ψ h ∈ N h such that

Π h Ψ h = λ h , P h Ψ h = u h . (2.10)

Originally, Lemma 2.4 is only proved for b ≡ 0 but the result can be easily
generalized to functions b ≥ 0. Moreover, Ψ h is the unique solution of the
variational problem

a N h (Ψ h , v h ) = (f, P h v h ) 0 , v h ∈ N h , (2.11)

where the bilinear form a N h : N h × N h → IR arises from elementwise integration
of the diffusion and Helmholtz terms with suitably averaged coefficients.
We will solve (2.11) numerically by preconditioned cg-iterations using a multilevel
preconditioner of BPX-type. The construction of that preconditioner
will be dealt with in the following section.
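The effect of static condensation described above can be sketched in plain linear algebra. The following illustration uses random data and illustrative block sizes (not the actual finite element matrices): it eliminates the flux unknowns from a saddle point system whose leading block is block-diagonal with 3×3 blocks, and solves the resulting symmetric positive definite Schur complement system.

```python
# Sketch: static condensation of a saddle point system
#   [[A, B^T], [B, -C]] [x; u] = [f; g]
# with A block-diagonal (one cheap 3x3 block per "element").
import numpy as np

rng = np.random.default_rng(0)
blocks = [np.eye(3) + rng.random((3, 3)) * 0.1 for _ in range(4)]
blocks = [0.5 * (b + b.T) + 3 * np.eye(3) for b in blocks]   # SPD 3x3 blocks
A = np.block([[blocks[i] if i == j else np.zeros((3, 3))
               for j in range(4)] for i in range(4)])
B = rng.random((5, 12))
C = np.eye(5)                                                # SPD coupling block
f = rng.random(12)
g = rng.random(5)

# Invert A element by element, then form the Schur complement S = C + B A^-1 B^T
Ainv = np.block([[np.linalg.inv(blocks[i]) if i == j else np.zeros((3, 3))
                  for j in range(4)] for i in range(4)])
S = C + B @ Ainv @ B.T                  # symmetric positive definite
u = np.linalg.solve(S, B @ Ainv @ f - g)
flux = Ainv @ (f - B.T @ u)             # recover the eliminated unknowns

# The condensed solution solves the original symmetric indefinite system.
K = np.block([[A, B.T], [B, -C]])
assert np.allclose(K @ np.concatenate([flux, u]), np.concatenate([f, g]))
```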
3 Iterative solution by multilevel preconditioned
cg-iterations.
We assume a hierarchy (T k ) j k=0 of possibly highly nonuniform triangulations
of Ω obtained by the refinement process due to Bank et al. [6] based on
regular refinements (partition into four congruent subtriangles) and irregular
refinements (bisection). For a detailed description including the refinement
rules we refer to [5] and [17]. We remark that the refinement rules are such
that each K ∈ T k , 0 ≤ k ≤ j, is geometrically similar either to an element of T 0
or to an irregular refinement of a triangle in T 0 . Consequently, there exist
constants 0 < c ≤ C depending only on the local geometry of T 0 such that
for all K ∈ T k and its edges e ⊂ ∂K

c h K 2 ≤ |K| ≤ C h K 2 , c h K ≤ |e| ≤ C h K . (3.1)

Moreover, the refinement rules imply the property of local quasiuniformity,
i.e., there exists a constant c > 0 depending only on the local geometry of T 0
such that for all K, K ' ∈ T k with K ∩ K ' ≠ ∅

c ≤ h K / h K ' ≤ c −1 , (3.2)

where h K := diam K.
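A hierarchy of this kind is produced by the usual solve-estimate-mark-refine cycle. The following sketch is a one-dimensional stand-in for the two-dimensional regular/irregular refinements (bisection replaces the partition into four congruent subtriangles), and its error indicator is a dummy placeholder; it only illustrates how fixed-fraction marking yields a locally refined, nonuniform hierarchy.

```python
# Sketch: adaptive refinement loop producing a nonuniform mesh hierarchy.

def refine(mesh, marked):
    out = []
    for (a, b) in mesh:
        if (a, b) in marked:
            m = 0.5 * (a + b)
            out += [(a, m), (m, b)]      # refine the marked element
        else:
            out.append((a, b))
    return out

def estimate(mesh):
    # Dummy indicator: pretend the error concentrates near x = 0.
    return {e: (e[1] - e[0]) * (1.0 if e[0] < 0.25 else 0.1) for e in mesh}

mesh = [(0.0, 0.5), (0.5, 1.0)]
for _ in range(3):
    eta = estimate(mesh)
    threshold = 0.5 * max(eta.values())          # fixed-fraction marking
    mesh = refine(mesh, {e for e in eta if eta[e] >= threshold})

lengths = {round(b - a, 6) for (a, b) in mesh}
assert min(lengths) < max(lengths)               # hierarchy is nonuniform
assert abs(sum(b - a for (a, b) in mesh) - 1.0) < 1e-12
```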
We consider the modified nonconforming approximation (2.11) on the highest
level j,

a N j (Ψ j , v) = (f, P h j v) 0 , v ∈ N j , (3.3)

and we attempt to solve (3.3) by preconditioned cg-iterations. The preconditioner
will be constructed by means of the natural splitting of N j into the
standard nonconforming part CR j := CR h j and the "bubble" part B j := B h j ,
and a further multilevel preconditioning of BPX-type for the nonconforming
part. For that purpose we introduce the bilinear form a CR j : CR j × CR j → IR,

a CR j (u CR , v CR ) := Σ K∈T j a| K (u CR , v CR ), (3.4)

where a| K denotes the restriction to K of the bilinear form a associated with
the primal variational formulation of (2.1),

a(u, v) := ∫ Ω ( a∇u · ∇v + b u v ) dx , u, v ∈ H 1 0 (Ω). (3.5)

In the sequel we will refer to A : H 1 0 (Ω) → H −1 (Ω) as the operator associated
with the bilinear form a.
Further, we define the bilinear form a B j : B j × B j → IR by

a B j (w B , v B ) := Σ K∈T j ∫ K -a ∇w B · ∇v B dx , (3.6)

for all w B , v B ∈ B j , where -a denotes the elementwise averaged diffusion
coefficient. Denoting by A D j , D ∈ {CR, B, N}, the operators
associated with a D j , we will prove the spectral equivalence of A N j and
A CR j ⊕ A B j . To this end we need the following technical lemmas:
Lemma 3.1 For all u CR
there holds
Proof. For the reference triangle -
K with vertices (0; 0), (1; 0) and (0; 1) it is
easy to establish
(3.7) can be deduced by the affine equivalence of the Crouzeix-Raviart elements
Lemma 3.2 For all w B
there holds
Proof. Since
are the barycentric
coordinates of K, we have
Denoting by - i
the local basis of -
h and by ( -
the
matrix representation of -aj K in case
stands for the vertex opposite to e i , by Green's
3: (3.11)
If we consider the reference triangle -
K where the vertices are given by (0; 0),
refers to the usual partial order on the set of symmetric, positive
definite matrices. Moreover, taking advantage of the affine equivalence of the
Raviart-Thomas elements it is easy to show
Using (3.1), (3.11) and (3.12) in (3.10) and observing (3.9) it follows that
We assume a and b to be locally constant, i.e., a ij | K ∈ P 0 (K) and
b| K ∈ P 0 (K), K ∈ T j , and we denote by α 0,K and α 1,K the lower and
upper bounds arising in (2.2) when restricting a to K. We further suppose
that a and b are such that

min K∈T j ( α 0,K − h K 2 b| K ) > 0. (3.13)

Note that only for simplicity we have chosen the strong inequality (3.13).
All results can be extended to the more general case that a constant c > 0,
independent of K, exists such that c h K 2 b| K ≤ α 0,K holds for all K ∈ T j .
Under the assumption (3.13) there holds:
Theorem 3.3 Under the assumption (3.13) there exist constants 0 < γ ≤ Γ,
depending on the local bounds α l,K , l ∈ {0,1}, K ∈ T j , such that for all
u N = u CR + w B ∈ N j with u CR ∈ CR j , w B ∈ B j

γ ( a CR j (u CR , u CR ) + a B j (w B , w B ) ) ≤ a N j (u N , u N )
≤ Γ ( a CR j (u CR , u CR ) + a B j (w B , w B ) ). (3.14)
Proof. For the proof of the preceding result we use the following lemma which
can easily established.
Lemma 3.4 For all there hold
ff 0;K
(aru CR
(3.15 a)
Proof. Using the Cauchy-Schwarz inequality we obtain
j as well as the orthogonality (ru CR
we obtain
ff 0;K
(aru CR
The following inequality deduces (3.15 b)
On the other hand, in view of kP h j u CR
0;k we have
(bu CR
Combining (3.15 a) and (3.16 a) gives the upper bound in (3.14) with c
ff 1;K
ff 0;K
. Further, by Young's inequality,
(bu CR
(bu CR
Consequently, using (3.15 b), (3.16 b), (3.13) and
ff 1;K
(aru CR
(bu CR
which yields the lower bound in (3.14) with c
ff 0;K
ff 1;K
We note that the bilinear form a B j gives rise to a diagonal matrix which thus
can be easily used in the preconditioning process. On the other hand, the
bilinear form a CR j corresponds to the standard nonconforming approximation of
(2.1) by the lowest order Crouzeix-Raviart elements. Multilevel preconditioners
for such nonconforming finite element discretizations have been developed by
Oswald [39, 40], Zhang [53] and the authors [30, 49]. Here we will use a BPX-
type preconditioner based on the use of a pseudo-interpolant which allows to
identify CR j with a closed subspace of the standard conforming ansatz space
with respect to the next higher level. More precisely, we denote by T j+1 the
triangulation obtained from T j by regular refinement of all elements K ∈ T j ,
and we refer to S k ⊂ H 1 0 (Ω), 0 ≤ k ≤ j+1, as the standard conforming ansatz
space generated by continuous, piecewise linear finite elements with respect to
the triangulation T k . Denoting by N 0 j+1 the set of interior vertices of T j+1 and
recalling that the midpoints m e of interior edges e ∈ E 0 j correspond to vertices
p ∈ N 0 j+1 , we define a mapping P CR j : CR j → S j+1 by

(P CR j v)(p) := (1/n p ) Σ n p i=1 v(m e i ) , p ∈ N 0 j+1 , (3.17)

where m e i , 1 ≤ i ≤ n p , are the midpoints of those interior edges e i ∈ E 0 j
which contain p. We note that this pseudo-interpolant has been
originally proposed by Cowsar [15] in the framework of related domain decomposition
techniques. The following result will lay the basis for the construction
of the multilevel preconditioner:
Lemma 3.5 Let P CR j be the pseudo-interpolant given by (3.17). Then there
exist constants 0 < c ≤ C depending only on the local geometry of T 0 such
that for all u ∈ CR j

c a CR j (u, u) ≤ a(P CR j u, P CR j u) ≤ C a CR j (u, u). (3.18)
Proof. The assertion follows by arguing literally in the same way as in [15]
(Theorem 2) and taking advantage of the local quasiuniformity of the triangulations
It follows from (3.18) that ~S j+1 := P CR j (CR j )
represents a closed subspace of S j+1 ,
being isomorphic to CR j . Based on this observation we may now use the
well known BPX-preconditioner for conforming discretizations with respect to
the hierarchy (S k ) j+1
k=0 of finite element spaces (cf. e.g. [10], [11], [16], [41], [50],
[52], and [53]). We remark that for a nonvanishing Helmholtz term in (2.1) the
initial triangulation T 0 should be chosen in such a way that the magnitude of
the coefficients of the principal part of the elliptic operator is not dominated
by the magnitude of the Helmholtz coefficient times the square of the maximal
diameter of the elements in T 0 (cf. e.g. [37], [51]).
Denoting by Γ k := { φ i (k) : 1 ≤ i ≤ n k }, 0 ≤ k ≤ j+1, the set of nodal basis
functions of S k , the construction of the BPX-preconditioner is based on the
multilevel collection of the nodal bases of varying index k.
We introduce the Hilbert space

V := S 0 × S 1 × … × S j+1 (3.19)

equipped with the inner product

(u, v) V := Σ j+1 k=0 (u k , v k ) 0 , (3.20)

where u = (u 0 , …, u j+1 ), v = (v 0 , …, v j+1 ), u k , v k ∈ S k , and we consider the
bilinear form ~b(u, v) := Σ j+1 k=0 h k −2 (u k , v k ) 0 , denoting by ~B : V → V
the operator associated with ~b. We further define a mapping

R : V → S j+1 , R v := Σ j+1 k=0 v k , (3.21)

and refer to R V as its adjoint in the sense that (R V u, v) V = (u, R v) 0 ,
u ∈ S j+1 , v ∈ V. Then the BPX-preconditioner is given by

C := R ~B −1 R V (3.22)

satisfying

γ (C −1 u, u) 0 ≤ a(u, u) ≤ Γ (C −1 u, u) 0 , u ∈ S j+1 , (3.23)

with constants 0 < γ ≤ Γ depending only on the local geometry of T 0 and
on the bounds for the data a, b in (2.2).
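In matrix terms, applying a preconditioner of this additive type amounts to summing diagonally scaled contributions interpolated from all levels. The following one-dimensional sketch is an illustration under simplifying assumptions (uniform grid hierarchy, Laplacian only, the standard diagonal scaling h k /2 per level; this is not the paper's exact operator): it assembles such a multilevel preconditioner and checks that the preconditioned spectrum is far better conditioned than the stiffness matrix itself.

```python
# Sketch: BPX-type additive multilevel preconditioner for the 1D Laplacian.
import numpy as np

def laplacian(n, h):
    """Stiffness matrix of -u'' on n interior nodes, mesh size h."""
    return (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h

def prolongation(nc, nf):
    """Linear interpolation from nc coarse to nf = 2*nc + 1 fine nodes."""
    P = np.zeros((nf, nc))
    for j in range(nc):
        P[2 * j, j] += 0.5
        P[2 * j + 1, j] = 1.0
        P[2 * j + 2, j] += 0.5
    return P

J = 5
ns = [2 ** (k + 1) - 1 for k in range(J + 1)]   # interior nodes per level
hs = [2.0 ** -(k + 1) for k in range(J + 1)]    # mesh size per level
Ik = [np.eye(ns[J])]                            # interpolation: level k -> J
for k in range(J - 1, -1, -1):
    Ik.insert(0, Ik[0] @ prolongation(ns[k], ns[k + 1]))

A = laplacian(ns[J], hs[J])
# C = sum_k I_k D_k^{-1} I_k^T with D_k = diag of level-k stiffness = (2/h_k) I
C = sum((hs[k] / 2.0) * Ik[k] @ Ik[k].T for k in range(J + 1))

ev = np.linalg.eigvals(C @ A).real              # spectrum of preconditioned op
assert ev.min() > 0 and ev.max() / ev.min() < np.linalg.cond(A) / 10
```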
The condition number estimates (3.23) have been established by various authors
(cf. [10], [16], [38]). They can be derived using the powerful Dryja-Widlund
theory [18] of additive Schwarz iterations. Another approach due to
Oswald [41] is based on Nepomnyaschikh's fictitious domain lemma:
Lemma 3.6 Let S and V be two Hilbert spaces with inner products (·,·) S and
(·,·) V , and let a S and ~b be bilinear forms generated by symmetric, positive
definite operators A S : S → S and ~B : V → V. Assume that there exist a linear
operator R : V → S and a (not necessarily linear) operator T : S → V such that

R T u = u , u ∈ S, (3.24 a)
a S (R v, R v) ≤ c 1 ~b(v, v) , v ∈ V, (3.24 b)
~b(T u, T u) ≤ c 2 a S (u, u) , u ∈ S. (3.24 c)

Then there holds

c 2 −1 a S (u, u) ≤ a S (R ~B −1 R * A S u, u) ≤ c 1 a S (u, u) , u ∈ S,

where R * is the adjoint of R in the sense that (R * u, v) V = (u, R v) S .
Proof. See e.g. [35].
In the framework of the BPX-preconditioner we have S = S j+1 with a S being the
bilinear form a in (3.5), while V, ~b and R are given by (3.19), (3.20) and
(3.21), respectively. The estimate (3.24 b) is usually established by means of a
strengthened Cauchy-Schwarz inequality. Further, T = T S is an appropriately
chosen decomposition operator such that the P.L. Lions type estimate (3.24 c)
holds true (cf. e.g. [41] Chapter 4).
Now, returning to the nonconforming approximation we define I S : S j+1 → CR j by

(I S v)(m e ) := v(m e ) , e ∈ E 0 j .

Note that in view of (3.17) the operator I S ∘ P CR j corresponds to the identity
on CR j . Then, with C as in (3.22) the operator

C NC := I S C I S * (3.25)

is an appropriate BPX-preconditioner for the nonconforming discretization of
(2.1). In particular, we have:
Theorem 3.7 Let C NC be given by (3.25). Then there exist positive constants
0 < γ ≤ Γ, depending only on the local geometry of T 0 and on the bounds for the
coefficients a, b in (2.2), such that for all u ∈ CR j

γ (C NC −1 u, u) 0 ≤ a CR j (u, u) ≤ Γ (C NC −1 u, u) 0 . (3.26)

Proof. In view of the fictitious domain lemma we choose S := CR j with a S := a CR j
as in (3.4), and V and ~b according to (3.19), (3.20). Furthermore, we specify
R NC := I S ∘ R and T NC := T S ∘ P CR j with T S as the decomposition operator in the
conforming setting. Obviously,

R NC T NC u = u , u ∈ CR j . (3.27 a)

Moreover, using the boundedness of I S , which follows from (3.1) and (3.2),
together with (3.24 b), for all v ∈ V we have

a CR j (R NC v, R NC v) ≤ c ~b(v, v). (3.27 b)

Finally, using again (3.18) and (3.24 c) for T NC = T S ∘ P CR j we get

~b(T NC u, T NC u) ≤ c a CR j (P CR j u, P CR j u) ≤ ~c a CR j (u, u). (3.27 c)

In terms of (3.27 a-c) we have verified the hypotheses of the fictitious domain
lemma which gives the assertion.
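The preconditioned cg-iteration driven by such an operator is standard. The following sketch uses a 1D model matrix and a simple Jacobi preconditioner standing in for C NC (both are illustrative assumptions, not the paper's matrices); any symmetric positive definite preconditioner can be supplied through the same callback.

```python
# Sketch: preconditioned conjugate gradient iteration.
import numpy as np

def pcg(A, b, apply_prec, tol=1e-10, maxit=300):
    """Solve A x = b; apply_prec(r) applies the preconditioner to r."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = apply_prec(r)
    p = z.copy()
    rz = r @ z
    for it in range(1, maxit + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, it
        z = apply_prec(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit

n = 50
h = 1.0 / (n + 1)
A = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h       # 1D model stiffness matrix
b = np.full(n, h)                             # right hand side for f = 1
dinv = 1.0 / np.diag(A)                       # Jacobi stand-in for C_NC
x, its = pcg(A, b, lambda r: dinv * r)
assert np.linalg.norm(A @ x - b) <= 1e-8 * np.linalg.norm(b)
```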
4 A posteriori error estimation.
Efficient and reliable error estimators for the total error providing indicators
for local refinement of the triangulations are an indispensable tool for efficient
adaptive algorithms. Concerning the finite element solution of elliptic
boundary value problems we mention the pioneering work done by Babuska
and Rheinboldt [2, 3] which has been extended among others by Bank and
Weiser [7] and Deuflhard, Leinen and Yserentant [17] to derive element-oriented
and edge-oriented local error estimators for standard conforming approxima-
tions. We remark that these concepts have been adapted to nonconforming
discretizations by the authors in [29, 30] and [49]. The basic idea is to discretize
the defect problem for the available approximation with respect to a
finite element space of higher accuracy. For a detailed representation of the
different concepts and further references we refer to the monographs of Johnson
[31], Szabo and Babuska [45] and Zienkiewicz and Taylor [54] (cf. also the
recent survey articles by Bornemann et al. [9] and Verfürth [47, 48]).
In this section we will derive an error estimator for the L 2 -norm of the total
error in the primal variable u based on the superconvergence result (2.9). As
we shall see this estimator does not require the solution of an additional defect
problem and hence is much more cheaper than the estimators mentioned
above. We note, however, that an error estimator for the total error in the
flux based on the solution of localized defect problems has been developed by
the first author in [28].
We suppose that ~Ψ h ∈ N h is an approximation of the solution Ψ h ∈ N h
of (2.11) obtained, for example, by the multilevel iterative solution process
described in the preceding section. Then, in view of (2.7) and (2.10) we
get an approximation (~j h , ~u h , ~λ h ) ∈ -V h × W h × M h of the unique solution
(j h , u h , λ h ) ∈ -V h × W h × M h of (2.7) by means of

~u h := P h ~Ψ h , ~λ h := Π h ~Ψ h , (4.1)

and the associated discrete flux ~j h recovered elementwise from ~Ψ h by the
first equation of (2.7), using elementwise integration by parts. (4.2)
Further, we denote by -~u h ∈ CR h the nonconforming extension of ~λ h .
In light of the superconvergence result (2.9) we assume the existence of a
constant β < 1 such that

‖u − -u h ‖ 0 ≤ β ‖u − u h ‖ 0 . (4.3)

In other words, (4.3) states that the nonconforming extension -u h of λ h does
provide a better approximation of the primal variable u than the piecewise
constant approximation u h .
It is easy to see that (4.3) yields

(1+β) −1 ‖-u h − u h ‖ 0 ≤ ‖u − u h ‖ 0 ≤ (1−β) −1 ‖-u h − u h ‖ 0 . (4.4)

Observing (2.10) and (4.1), the quantity ‖-u h − u h ‖ 0 can be evaluated
elementwise from the available iterates, and splitting off the iteration error
Ψ h − ~Ψ h we arrive at

‖u − u h ‖ 0 ≤ (1−β) −1 ( Σ K∈T h ‖-~u h − ~u h ‖ 2 0,K ) 1/2 + c ‖Ψ h − ~Ψ h ‖ 0 . (4.7)

We note that ‖Ψ h − ~Ψ h ‖ 0 represents the L 2 -norm of the iteration error whose
actual size can be controlled by the iterative solution process. Therefore, the
quantity ( Σ K∈T h ‖-~u h − ~u h ‖ 2 0,K ) 1/2 provides an efficient and reliable
error estimator for the L 2 -norm of the total error whose local contributions
‖-~u h − ~u h ‖ 0,K can be used as indicators for local refinement of T h .
Moreover, the estimator can be cheaply computed, since it only requires the
evaluation of the available approximations ~u h ∈ W h and ~λ h ∈ M h .
For a better understanding of the estimator the rest of this section will be
devoted to show that it is equivalent to a weighted sum of the squares of the
jumps of ~u h across the edges e ∈ E h . For that purpose we introduce the jump
and the average of piecewise continuous functions v along edges e ∈ E h . In
particular, for e ∈ E 0 h we denote by K in and K out the two adjacent triangles
and by n e the unit normal outward from K in . On the other hand, for e ∈ E Γ h
we refer to n e as the usual outward normal. Then, we define the average [v] A
of v on e ∈ E h and the jump [v] J of v on e ∈ E h according to

[v] A := ½ ( v| K in + v| K out ), e ∈ E 0 h , [v] A := v| K , e ∈ E Γ h , (4.8 a)
[v] J := v| K in − v| K out , e ∈ E 0 h , [v] J := v| K , e ∈ E Γ h . (4.8 b)
It is easy to see that for piecewise continuous functions u, v there holds

∫ e ( u| K in v| K in − u| K out v| K out ) ds = ∫ e ( [u] A [v] J + [u] J [v] A ) ds ,
e ∈ E 0 h . (4.9)

Further, we observe that for vector fields q the quantity [n e · q] J is independent
of the choice of K in and K out .
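The jump-based indicators announced above are cheap to evaluate in practice. The following sketch is a toy illustration with hypothetical data (edge names, multiplier values and the weight h e are assumptions, not from the source): it computes per-edge squared jumps, accumulates them into element indicators, and identifies the element touching the largest jumps.

```python
# Sketch: edge-jump error indicators on a toy mesh of three "elements"
# K1, K2, K3 joined by interior edges e12 and e23.
import math

# (element, edge) -> trace value of the approximation on that side (toy data)
lam = {("K1", "e12"): 1.00, ("K2", "e12"): 0.80,
       ("K2", "e23"): 0.55, ("K3", "e23"): 0.50}
edges = {"e12": ("K1", "K2", 0.5), "e23": ("K2", "K3", 0.25)}  # (K_in, K_out, h_e)

def jump(e):
    kin, kout, _ = edges[e]
    return lam[(kin, e)] - lam[(kout, e)]

# element indicator: weighted squares of the jumps over its edges
eta2 = {}
for e, (kin, kout, he) in edges.items():
    for K in (kin, kout):
        eta2[K] = eta2.get(K, 0.0) + he * jump(e) ** 2

assert max(eta2, key=eta2.get) == "K2"     # K2 touches both jumping edges
total = math.sqrt(sum(eta2.values()))
assert total > 0
```

The elements with the largest η K would be the ones marked for refinement in the adaptive cycle.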
In terms of the averages [n e · q h ] A and the jumps [n e · q h ] J we may decompose
the nonconforming Raviart-Thomas space -V h into the direct sum

-V h = -V A h ⊕ -V J h , (4.10)

where the subspaces -V A h and -V J h are given by

-V A h := { q h ∈ -V h : [n e · q h ] J = 0, e ∈ E 0 h },
-V J h := { q h ∈ -V h : [n e · q h ] A = 0, e ∈ E 0 h }.

Obviously, we have -V A h = V h . As the main result of this section we will prove:
Theorem 4.1 Let (~j h , ~u h , ~λ h ) ∈ -V h × W h × M h be an approximation of the
unique solution of (2.7) obtained according to (4.1), (4.2) and let -~u h ∈ CR h
be the nonconforming extension of ~λ h . Then there exist constants 0 < σ 1 ≤ σ 2
depending only on the shape regularity of T h and the ellipticity constants in
(2.2) such that

σ 1 Σ e∈E h h e ‖[~u h ] J ‖ 2 0,e ≤ Σ K∈T h ‖-~u h − ~u h ‖ 2 0,K
≤ σ 2 Σ e∈E h h e ‖[~u h ] J ‖ 2 0,e . (4.11)
The proof of the preceding result will be provided in several steps. Firstly, due
to the shape regularity of T h we have:
Lemma 4.2 Under the assumptions of Theorem 4.1 there holds

‖-~u h − ~u h ‖ 2 0,K ≍ |K| Σ 3 i=1 ( ~λ h | e i − ~u h | K ) 2 , K ∈ T h , (4.12)

with constants depending only on the shape regularity of T h .
Proof. By straightforward computation, expanding -~u h − ~u h on K in terms of
its midpoint values ~λ h | e i − ~u h | K , which easily gives (4.12) by taking
advantage of (3.1).
As a direct consequence of Lemma 4.2 we obtain the lower bound in (4.11) with
σ 1 depending only on the constants in (4.12). However, the proof of the upper
bound is more elaborate. In view of (4.12) it is sufficient to show that

Σ K∈T h |K| Σ 3 i=1 ( ~λ h | e i − ~u h | K ) 2 ≤ c Σ e∈E h h e ‖[~u h ] J ‖ 2 0,e (4.13)

holds true with an appropriate positive constant c. As a first step in this
direction we will establish the following relationship between ~λ h and the
averages and jumps of ~u h and of the flux:
Lemma 4.3 Under the assumptions of Theorem 4.1, for all q h ∈ -V h there holds
a representation formula (4.14) expressing the edge integrals of ~λ h [n e · q h ] J
in terms of the averages and jumps of ~u h and of the normal components n e · q h ,
where P c denotes the projection onto V h with respect to the weighted L 2 -inner
product (·,·) 0,c .
Proof. We denote by ~φ h the unique element in B h satisfying

∫ K ~φ h dx = ∫ K ( -~u h − ~u h ) dx , K ∈ T h .

In view of (4.2) we thus have ~Ψ h = -~u h − ~φ h , and by Green's formula,
observing the continuity properties of -~u h , it follows that

~j h = − P c ( a ∇ ~Ψ h ),

which shows that the flux is determined by ~Ψ h alone. Consequently, for
q h ∈ -V h , testing with q h and integrating by parts elementwise, the volume
terms cancel by (4.2), and collecting the edge contributions by means of the
identity for averages and jumps yields the representation (4.14), which by
(4.8 b) is clearly equivalent to the assertion.
For a particular choice of q_h ∈ \tilde{V}_h in Lemma 4.3 we obtain an explicit representation
of \tilde{\lambda}_h on e ∈ E_h. We choose q_h built from the standard basis vector fields in
\tilde{V}_h with support in K_in resp. K_out, as given by (4.15).
Corollary 4.4 Let the assumptions of Lemma 4.2 be satisfied and let q_h ∈ \tilde{V}_h
be given by (4.15). Then there holds an explicit representation of \tilde{\lambda}_h on e.
Proof. Observing the jump relation for the chosen basis fields,
the assertion is a direct consequence of (4.14).
Moreover, with regard to (4.13) we get:
Corollary 4.5 Under the assumption of Lemma 4.2 the estimate (4.17) holds.
Proof. Since for each \mu_h ∈ M_h(E_h) there exists a unique q_h ∈ \tilde{V}_h^A
satisfying the corresponding interpolation conditions,
by means of (4.14) we obtain the asserted bound,
which gives (4.17) by the Schwarz inequality.
The preceding result tells us that for the proof of (4.13) we have to verify
the estimate (4.18).
Since (4.18) obviously holds true for q_h in one of the two subspaces of the
decomposition, it is sufficient to show:
Lemma 4.6 Let the assumptions of Lemma 4.2 be satisfied. Then the estimate (4.19) holds.
Proof. We refer to A, \tilde{A} and P_c as the matrix representations of the
corresponding operators. With respect to the standard bases of V_h and \tilde{V}_h
we may identify q_h ∈ V_h and \tilde{q}_h ∈ \tilde{V}_h with their coefficient vectors.
We remark that \tilde{q}_h ∈ \tilde{V}_h^A iff its components associated with K_in
and K_out agree across each interior edge. Here \rho(P_c · P_c^T) stands for the
spectral radius of P_c · P_c^T. Denoting by S the natural embedding of V_h into
\tilde{V}_h and by S its matrix representation, it is easy to see that (4.21) holds,
whence (4.22) follows.
We further refer to A_K and \tilde{A}_K as the local stiffness matrices. Using (2.2)
and (3.12), we obtain bounds for A_K in terms of \tilde{A}_K. Consequently, introducing
the local coefficient vectors on K_in and K_out, the estimates (4.24) and (4.25) follow.
Using (4.24), (4.25) in (4.23) we find the desired bound,
which gives (4.19) in view of (4.20), (4.21) and (4.22).
Summarizing the preceding results, it follows that the upper estimate in (4.11)
holds true with \sigma_1.
5 Numerical results.
In this section, we will present the numerical results obtained by the application
of the adaptive multilevel algorithm to some selected second order elliptic
boundary value problems. In particular, we will illustrate the refinement process
as well as the performance of both the multilevel preconditioner and the a
posteriori error estimator. The following model problems from [5] and [20]
have been chosen as test examples:
Problem 1. Equation (2.1) on the unit square,
with the right-hand side f and the Dirichlet boundary conditions chosen
according to the exact solution u(x, y),
which has a boundary layer (cf. Fig. 5.1).
Problem 2. Equation (2.1) with the right-hand side f ≡ 0 on a hexagon
\Omega with corners including (±1, 0). The coefficients are chosen according
to b ≡ 0 and a(x, y) being piecewise constant with the values 1 and 100
on alternate triangles of the initial triangulation (cf. Fig. 5.2). The solution
u(x, y) is continuous with a jump discontinuity of
the first derivatives at the interfaces.
Starting from the initial coarse triangulations depicted in Figures 5.1 and 5.2,
on each refinement level l the discretized problems are solved by preconditioned
cg-iterations with a BPX-type preconditioner as described in Section 3. The
iteration on level l is stopped when the estimated iteration error \epsilon_{l+1}
drops below a multiple of \epsilon_l determined by a safety factor; here \epsilon_l
denotes the estimated error on level l, and the numbers of nodes on levels l and
l+1 are given by
N_l and N_{l+1}, respectively. Denoting by (\tilde{j}_l, \tilde{u}_l, \tilde{\lambda}_l)
the resulting approximation
and by \tilde{u}_l the nonconforming extension of \tilde{\lambda}_l, for the local
refinement of T_l the error contributions \epsilon_K^2, K ∈ T_l, and their weighted
mean value over all K ∈ T_l are computed. Then, an element K ∈ T_l is
marked for refinement if its contribution, weighted by |K|, exceeds
\sigma times the mean value, where \sigma is a safety factor which is
chosen as 0.95. The interpolated values of the level l approximation are
used as start iterates on the next refinement level. For the global refinement
process we use the weighted global error estimate measured in the L^2-norm on
\Omega as stopping criterion, where \alpha is a safety
factor and tol is the required accuracy.
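The marking rule just described, refine an element when its error contribution exceeds a safety factor times the mean contribution, can be sketched in a few lines. The function and argument names below are illustrative assumptions, and the |K| weighting is assumed to be folded into the contributions:

```python
def mark_for_refinement(contributions, safety=0.95):
    """Mark elements whose (already |K|-weighted) error contribution
    is at least `safety` times the mean contribution over all elements."""
    mean = sum(contributions) / len(contributions)
    return [eps >= safety * mean for eps in contributions]

# elements with above-average contributions get refined
flags = mark_for_refinement([0.9, 0.1, 2.0, 0.05])
```

With safety close to 1, only clearly above-average elements are refined; lowering it refines more aggressively.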
Figure 5.1: Initial triangulation T_0 (Level 0) and final triangulation T_6 (Problem 1)
Figure 5.2: Initial triangulation T_0 (Level 0) and final triangulation T_5 (Problem 2)
Figures
5.1 and 5.2 represent the initial triangulations T 0 and the final triangulations
T 6 and T 5 for Problems 1 and 2, respectively. For Problem 1 we observe
a pronounced refinement in the boundary layer (cf. Fig. 5.1). For Problem 2
there is a significant refinement in the areas where the diffusion coefficient is
large with a sharp resolution of the interfaces between the areas of large and
small diffusion coefficient (cf. Fig. 5.2).
Figure 5.3: Error estimation for Problems 1 and 2 (ratio of estimated to true error versus number of nodes; boundary layer and discontinuous coefficients)
Figure 5.4: Preconditioner for Problems 1 and 2 (number of cg-iterations versus number of nodes; boundary layer and discontinuous coefficients)
The behaviour of the a posteriori L 2 -error estimator is illustrated in Figure
5.3 where the ratio of the estimated error and the true error is shown as a
function of the total number of nodes. The straight and the dashed lines refer
to Problem 1 (boundary layer) and Problem 2 (discontinuous coefficients),
respectively. In both cases we observe a slight overestimation at the very beginning
of the refinement process, but the estimated error rapidly approaches
the true error with increasing refinement level.
Finally, the performance of the preconditioner is depicted in Figure 5.4 displaying
the number of preconditioned cg-iterations as a function of the total
number of nodal points. Note that for an adequate representation of the performance
we use zero as initial iterates on each refinement level and iterate
until the relative iteration error is less than a fixed tolerance. In both cases, we observe
an increase of the number of iterations at the beginning of the refinement
process until we get into the asymptotic regime where the numerical results
confirm the theoretically predicted O(1) behaviour.
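As a minimal illustration of the preconditioned cg-iteration used on each level, the sketch below uses a simple Jacobi (diagonal) preconditioner standing in for the BPX-type preconditioner, and a 1D model matrix standing in for the paper's discretization; both substitutions are assumptions made for brevity:

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, maxit=200):
    """Preconditioned conjugate gradient iteration for an SPD matrix A;
    M_inv applies the preconditioner to a residual vector."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# 1D Laplacian as an SPD model matrix, Jacobi preconditioning
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
d = np.diag(A).copy()
x = pcg(A, np.ones(n), lambda r: r / d)
```

A level-independent (O(1)) iteration count, as reported above, requires a multilevel preconditioner; the Jacobi choice here only illustrates the iteration itself.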
--R
Mixed and nonconforming finite element methods: implementation
estimates for adaptive finite element computations
A posteriori error estimates for the finite element method.
Refinement algorithm and data structures for regular local mesh refinement.
Some a posteriori error estimators for elliptic partial differential equations.
A class of iterative methods for solving saddle point problems.
A basic norm equivalence for the theory of multilevel methods.
Parallel multilevel preconditioners.
Mixed and Hybrid Finite Element Methods.
Two dimensional exponential fitting and application to drift-diffusion models
Domain decomposition and mixed finite elements for the neutron diffusion equation.
Domain decomposition methods for nonconforming finite element spaces of Lagrange-type
Concepts of an adaptive hierarchical finite element code.
Towards a unified theory of domain decomposition alogrithms for elliptic problems.
Version 2.0.
Local refinement via domain decomposition techniques for mixed finite element methods with rectangular Raviart-Thomas elements
Convergence analysis of an approximation of miscible displacement in porous media by mixed finite elements and a modified method of characteristics.
The Schwarz algorithm and multilevel decomposition iterative techniques for mixed finite element methods.
Analysis of multilevel decomposition iterative methods for mixed finite element methods.
Analysis of the Schwarz algorithm for mixed finite element methods.
Mixed finite element discretization of continuity equations arising in semiconductor device simulation.
Adaptive mixed finite element methods using flux-based a posteriori error estimators
Adaptive multilevel iterative techniques for nonconforming finite elements discretizations.
Numerical Solutions of Partial Differential Equations by the Finite Element Method.
Schwarz alternating and iterative refinement methods for mixed formulations of elliptic problems.
Decomposition and fictitious domain methods for elliptic boundary value problems.
Some aspects of mixed finite elements methods for semiconductor simulation.
Two remarks on multilevel preconditioners.
On a hierarchical basis multilevel method with nonconforming P1 elements.
On discrete norm estimates related to multilevel preconditioner in the finite elements methods.
On a BPX-preconditioner for P1 elements
Multilevel finite element approximation: Theory and Application.
Multigrid applied to mixed finite elements schemes for current continuity equations.
A preconditioned iterative method for saddle point problems.
Mixed finite element methods for flow through unstructured porous media.
Finite Element Analysis.
Multilevel approaches to nonconforming finite elements discretizations of linear second order elliptic boundary value problems.
Iterative methods by space decomposition and subspace correction.
Hierarchical bases in the numerical solution of parabolic problems.
Old and new convergence proofs for multigrid methods.
The Finite Element Method
--TR | mixed finite elements;multilevel preconditioned CG iterations;a posteriori error estimator |
262640 | Decomposition of Gray-Scale Morphological Templates Using the Rank Method | Abstract: Convolutions are a fundamental tool in image processing. Classical examples of two dimensional linear convolutions include image correlation, the mean filter, the discrete Fourier transform, and a multitude of edge mask filters. Nonlinear convolutions are used in such operations as the median filter, the medial axis transform, and erosion and dilation as defined in mathematical morphology. For large convolution masks or structuring elements, the computation cost resulting from implementation can be prohibitive. However, in many instances, this cost can be significantly reduced by decomposing the templates representing the masks or structuring elements into a sequence of smaller templates. In addition, such decomposition can often be made architecture specific, thus resulting in optimal transform performance. In this paper we provide methods for decomposing morphological templates which are analogous to decomposition methods used in the linear domain. Specifically, we define the notion of the rank of a morphological template which categorizes separable morphological templates as templates of rank one. We establish a necessary and sufficient condition for the decomposability of rank one templates into 3 × 3 templates. We then use the invariance of the template rank under certain transformations in order to develop template decomposition techniques for templates of rank two. | INTRODUCTION
Both linear convolution and morphological methods are
widely used in image processing. One of the common
characteristics among them is that they both require applying
a template to a given image, pixel by pixel, to yield a
new image. In the case of convolution, the template is usually
called convolution window or mask; while in mathematical
morphology, it is referred to as structuring element.
Templates used in realizing linear convolutions are often
referred to as linear templates. Templates can vary greatly in
their weights, sizes, and shapes, depending on the specific
applications.
Intuitively, the problem of template decomposition is:
given a template t, find a sequence of smaller templates
t_1, ..., t_n such that applying t to an image is equivalent to
applying t_1, ..., t_n
sequentially to the image. In other
words, t can be algebraically expressed in terms of
One purpose of template decomposition is to fit the support
of the template (i.e., the convolution kernel) optimally
into an existing machine constrained by its hardware con-
figuration. For example, ERIM's CytoComputer [1] cannot
deal with templates of size larger than 3 × 3 on each pipe-line
stage. Thus, a large template, intended for image processing
on a CytoComputer, has to be decomposed into a
sequence of 3 × 3 or smaller templates.
A more important motivation for template decomposition
is to speed up template operations. For large convolution
masks, the computation cost resulting from implementation
can be prohibitive. However, in many instances,
this cost can be significantly reduced by decomposing the
masks or templates into a sequence of smaller templates.
For instance, the linear convolution of an image with a
gray-valued n × n template requires n² multiplications and
n² − 1 additions to compute a new image pixel value; while
the same convolution computed with a 1 × n row template
followed by an n × 1 column template takes only 2n multiplications
and 2n − 2 additions for each new image pixel
value. This cost saving may still hold for parallel architectures
such as mesh connected array processors [2], where
the cost is proportional to the size of the template.
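The cost argument above is easy to verify numerically: for a rank-one (separable) kernel, a row pass followed by a column pass reproduces the full 2D correlation. A sketch in plain NumPy ('valid' correlation with no padding; all names are illustrative):

```python
import numpy as np

def correlate2d_valid(img, k):
    """Plain 'valid' 2D correlation of img with mask k."""
    kh, kw = k.shape
    H, W = img.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

rng = np.random.default_rng(0)
col = rng.standard_normal((5, 1))    # n x 1 column mask
row = rng.standard_normal((1, 5))    # 1 x n row mask
k = col @ row                        # separable n x n kernel (rank one)
img = rng.standard_normal((12, 12))

full = correlate2d_valid(img, k)                                # n^2 mults per pixel
two_pass = correlate2d_valid(correlate2d_valid(img, row), col)  # 2n mults per pixel
```

Both results agree to floating point precision, while the two-pass version uses 2n instead of n² multiplications per output pixel.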
The problem of decomposing morphological templates has
been investigated by a host of researchers. Zhuang and
Haralick [3] gave a heuristic algorithm based on tree search
that can find an optimal two-point decomposition of a
morphological template if such a decomposition exists. A
two-point decomposition consists of a sequence of templates
each consisting of at most two points. A two-point
decomposition may be best suited for parallel architectures
with a limited number of local connections since each two-point
template can be applied to an entire image in a multi-
ply-shift-accumulate cycle [2]. Xu [4] has developed an al-
gorithm, using chain code information, for the decomposition
of convex morphological templates for two-point system
configurations. Again using chain-code information,
Park and Chin [5] provide an optimal decomposition of
convex morphological templates for four-connected
meshes. However, all the above decomposition methods
work only on binary morphological templates and do not
extend to gray-scale morphological templates.
A very successful general theory for the decomposition
. The authors are with the University of Florida, Gainesville, FL 32611.
E-mail: {ps0; ritter}@cis.ufl.edu.
Manuscript received Nov. 9, 1995; revised Mar. 14, 1997. Recommended for acceptance
by V. Nalwa.
For information on obtaining reprints of this article, please send e-mail to:
transpami@computer.org, and reference IEEECS Log Number 104798.
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL. 19, NO. 6, JUNE 1997
of templates, in both the linear and morphological domain,
evolved from the theory of image algebra [6], [7], [8], [9],
[10] which provides an algebraic foundation for image
processing and computer vision tasks. In this setting, Ritter
and Gader [11], [9] presented efficient methods for decomposing
discrete Fourier transform templates. Zhu and Ritter
[12] employ the general matrix product to provide novel
computational methods for computing the fast Fourier
transform, the fast Walsh transform, the generalized fast
Walsh transform, as well as a fast wavelet transform.
In image algebra, template decomposition problems, for
both linear and morphological template operations, can be
reformulated in terms of corresponding matrix or polynomial
factorization. Manseur and Wilson [13] used matrix as
well as polynomial factorization techniques to decompose
two-dimensional linear templates of size m × n into sums
and products of 3 × 3 templates. Li [14] was the first to investigate
polynomial factorization methods for morphological
templates. He provides a uniform representation of
morphological templates in terms of polynomials, thus reducing
the problem of decomposing a morphological template
to the problem of factoring the corresponding poly-
nomials. His approach provides for the decomposition of
one-dimensional morphological templates into factors of
two-point templates. Crosby [15] extends Li's method to
two-dimensional morphological templates.
Davidson [16] proved that any morphological template
has a weak local decomposition for mesh-connected array
processors. Davidson's existence theorem provides a theoretical
foundation for morphological template decomposi-
tion, yet the algorithm conceived in its constructive proof is
not very efficient. Takriti and Gader formulate the general
problem of template decomposition as optimization problems
[17], [18]. Sussner, Pardalos, and Ritter [19] use a
similar approach to solve the even more general problem of
morphological template approximation. However, since
these problems are inherently NP-complete, researchers try
to exploit the special structure of certain morphological
templates in order to find decomposition algorithms. For
example, Li and Ritter [20] provide very simple matrix
techniques for decomposing binary as well as gray-scale
linear and morphological convex templates. A separable
template is a template that can be expressed in terms of two
one-dimensional templates consisting of a row and a column
template. Gader [21] uses matrix methods for decomposing
any gray-scale morphological template into a sum of
a separable template and a totally nonseparable template. If
the original template is separable, then Gader's decomposition
yields a separable decomposition. If the original template
is not separable, then his method yields the closest
separable template to the original in the mean square sense.
Separable templates are particularly easy to decompose
and the decomposition of separable templates into a product
of vertical and horizontal strip templates can be used as
a first step for the decomposition into a form which
matches the neighborhood configuration of a particular
parallel architecture. In the linear case, separable templates
are also called rank one templates since their corresponding
matrices are rank one matrices. O'Leary [22] showed that
any linear template of rank one can be factored exactly into
a product of 3 × 3 linear templates. Templates of higher
rank are usually not as efficiently decomposable. However,
the rank of a template determines upper bounds of worst-case
scenarios. For example, a linear template of rank two
always decomposes into a sum of two separable templates.
In the linear domain, the notion of template rank stems
from the well known concept of matrix rank in linear alge-
bra. The purpose of this paper is to develop the notion of a
morphological matrix rank similar to the linear matrix rank.
By way of bijection, matrices correspond to certain rectangular
templates. In analogy to the linear case, we define the
rank of a morphological template as the rank of the corresponding
matrix. We demonstrate that this notion allows
for an elegant and concise formulation of some new results
concerning the decomposition of gray-scale morphological
templates into separable morphological templates.
The paper is organized as follows. In Section 2, we introduce
the image algebra notation used throughout this
paper and in most of the aforementioned algebraic template
decomposition methods. In Section 3, we develop the notions
of linear dependence, linear independence and rank
pertinent to morphological image processing. In Section 4,
we establish general theorems for the separability of matrices
in the morphological domain. Finally, in Section 5, we
apply the result of the previous sections and establish decomposition
criteria, methods, and algorithms for the decomposition
of gray-scale morphological templates. Proofs
of theorems are given in [23] so as not to obscure the main
ideas and results of this paper.
Image algebra is a heterogeneous or many-valued algebra in
the sense of Birkhoff and Lipson [24], [6], with multiple sets
of operands and operators. In a broad sense, image algebra
is a mathematical theory concerned with the transformation
and analysis of images. Although much emphasis is focused
on the analysis and transformation of digital images,
the main goal is the establishment of a comprehensive and
unifying theory of image transformations, image analysis,
and image understanding in the discrete as well as the continuous
domain [6], [8], [7]. In this paper, however, we restrict
our attention only to the notations and operations that
are necessary for establishing the results mentioned in the
introduction. Hence, our focus is on morphological image
algebra operations.
Henceforth, let X be a subset of the digital plane Z², where Z denotes the
set of integers. For
any set F, we denote the set of all functions from X into F by
F^X. We use the symbols ∨ and ∧ to denote the binary operations
of maximum and minimum, respectively.
2.1 Images and Templates
From the image algebra viewpoint, images are considered
to be functions and templates are viewed as functions
whose values are images. In particular, an F-valued image a
over the point set X is a function a: X → F, i.e., a ∈ F^X,
while an F-valued template t on X is a function t: X → F^X,
SUSSNER AND RITTER: DECOMPOSITION OF GRAY-SCALE MORPHOLOGICAL TEMPLATES USING THE RANK METHOD 3
i.e., t ∈ (F^X)^X. For notational convenience, we
define t_y as t(y) for all y ∈ X. Note that the image t_y has the
representation
  t_y = {(x, t_y(x)) : x ∈ X},   (1)
where the pixel values t_y(x) at location x of this image are
called template weights at point y.
Since we are concerned with optimizing morphological
convolutions, the set F of interest will be the real numbers
with the symbol −∞ appended. More precisely,
F = R_{−∞} = R ∪ {−∞}, where R denotes the set of real numbers.
The algebraic system associated with R_{−∞} will be the
semi-lattice-ordered group (R_{−∞}, ∨, +) with the extended
arithmetic and logic operations defined as follows:
  a + (−∞) = (−∞) + a = −∞  for all a ∈ R_{−∞},
  a ∨ (−∞) = (−∞) ∨ a = a  for all a ∈ R_{−∞}.   (2)
Note that the element −∞ acts as a null element in the system
(R_{−∞}, ∨, +) if we view the operation + as multiplication
and the operation ∨ as addition. The dual of this system is
the lattice-ordered group (R_{+∞}, ∧, +). The algebraic system
(R_{−∞}, ∨, +) provides the mathematical environment for the
morphological operation of gray scale dilation, while
(R_{+∞}, ∧, +) provides the environment for the dual operation
of gray scale erosion.
Our focus will be on translation invariant R_{−∞}-valued
templates over X since gray-scale structuring elements can
be realized by these templates. A template t ∈ (R_{−∞}^X)^X is
called translation invariant if and only if
  t_{y+z}(x+z) = t_y(x)  for all x, y, z ∈ Z²   (3)
whenever x + z and y + z are elements of X. The support of a
template t ∈ (R_{−∞}^X)^X at a point y is denoted by S(t_y) and defined
as follows:
  S(t_y) = {x ∈ X : t_y(x) ≠ −∞}.
A translation invariant template t is called rectangular if
S(t_y) is a rectangular discrete array.
EXAMPLE. Let r ∈ (R_{−∞}^X)^X be the translation invariant template
which is determined at each point y ∈ X by the
following function values of x ∈ X: r_y(x) takes prescribed
real values for the points x of a 3 × 3 array centered at y,
and r_y(x) = −∞ otherwise. Since S(r_y) is a 3 × 3 array,
we can visualize the rectangular template
r as shown in Fig. 1.
Fig. 1. The support of the template r at point y. The hashed cell indicates
the location of the target point y.
2.2 Basic Operations
The basic operations of addition and maximum on R_{−∞} induce
pixelwise operations on R_{−∞}-valued images and templates.
For any a, b ∈ R_{−∞}^X and any s, t ∈ (R_{−∞}^X)^X we define
  (a ∨ b)(x) = a(x) ∨ b(x),  (a + b)(x) = a(x) + b(x)  for all x ∈ X,
  (s ∨ t)_y(x) = s_y(x) ∨ t_y(x),  (s + t)_y(x) = s_y(x) + t_y(x)  for all x, y ∈ X.
If c ∈ R_{−∞}^X denotes the constant image with value c(x) = c for all x ∈ X,
for some c ∈ R_{−∞}, then scalar operations on images and
templates can be obtained by identifying the scalar c with the constant image,
e.g., (c ∨ a)(x) = c ∨ a(x), (c + a)(x) = c + a(x), (c ∨ t)_y(x) = c ∨ t_y(x),
and (c + t)_y(x) = c + t_y(x).
2.3 Additive Maximum Operations
Forming the additive maximum a * t of an image a ∈ R_{−∞}^X and
a template t ∈ (R_{−∞}^X)^X results in the image a * t ∈ R_{−∞}^X, which
is determined by the following function values:
  (a * t)(y) = ∨_{x∈X} (a(x) + t_y(x)).   (8)
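For a translation-invariant template, (8) amounts to a gray-scale dilation of the image by an additive structuring element. The direct, unoptimized sketch below passes the template as a small array whose center cell is the target pixel; treating out-of-range points as −∞ is an assumption about boundary handling:

```python
import numpy as np

def add_max(a, t):
    """(a * t)(y) = max over x of a(x) + t_y(x), for a translation
    invariant template given as a small array t whose center cell
    is the target pixel; points outside the image act as -inf."""
    H, W = a.shape
    th, tw = t.shape
    cy, cx = th // 2, tw // 2
    out = np.full((H, W), -np.inf)
    for y0 in range(H):
        for y1 in range(W):
            best = -np.inf
            for i in range(th):
                for j in range(tw):
                    x0, x1 = y0 + i - cy, y1 + j - cx
                    if 0 <= x0 < H and 0 <= x1 < W:
                        best = max(best, a[x0, x1] + t[i, j])
            out[y0, y1] = best
    return out

a = np.array([[0.0, 1.0], [2.0, 3.0]])
local_max = add_max(a, np.zeros((3, 3)))   # all-zero weights: plain local maximum
```

With an all-zero 3 × 3 template the operation reduces to a local maximum filter; a 1 × 1 template with weight c simply adds c to the image.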
Clearly, each template t ∈ (R_{−∞}^X)^X defines a function
f_t: R_{−∞}^X → R_{−∞}^X given by f_t(a) = a * t.
The additive maximum t * s of a template t ∈ (R_{−∞}^X)^X and a template
s ∈ (R_{−∞}^X)^X is defined as the template r ∈ (R_{−∞}^X)^X which
determines f_s ∘ f_t, the composition of f_t followed by f_s.
Specifically,
  r_y(x) = (t * s)_y(x) = ∨_{z∈X} (t_z(x) + s_y(z)).
These relationships induce the associative and distributive
laws given later. Note that for any constant c ∈ R_{−∞},
  c + (t * s) = (c + t) * s = t * (c + s).
EXAMPLE. The following column templates r, s, t ∈ (R_{−∞}^X)^X
satisfy r = s * t.
Fig. 2. The template r constitutes the additive maximum of the template
s and the template t.
2.4 Some Properties of Image
and Template Operations
The following associative and distributive laws hold for an
arbitrary image a ∈ R_{−∞}^X and arbitrary templates t ∈ (R_{−∞}^X)^X
and s ∈ (R_{−∞}^X)^X:
  a * (s * t) = (a * s) * t,
  a * (s ∨ t) = (a * s) ∨ (a * t).   (12)
These results establish the importance of template decomposition.
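Both laws in (12) can be spot-checked on small examples by identifying images and templates with vectors and matrices under the additive maximum (max-plus) product, as in the matrix correspondence of Section 2.7; a hedged sketch:

```python
import numpy as np

def maxplus(A, B):
    """Additive maximum: (A * B)[i, j] = max_k (A[i, k] + B[k, j])."""
    return np.max(A[:, :, None] + B[None, :, :], axis=1)

rng = np.random.default_rng(1)
a = rng.integers(-5, 5, size=(1, 4)).astype(float)   # "image" as a row vector
s = rng.integers(-5, 5, size=(4, 4)).astype(float)
t = rng.integers(-5, 5, size=(4, 4)).astype(float)

# associativity: a * (s * t) == (a * s) * t
assoc = np.array_equal(maxplus(a, maxplus(s, t)), maxplus(maxplus(a, s), t))
# distributivity over the pixelwise maximum
distr = np.array_equal(maxplus(a, np.maximum(s, t)),
                       np.maximum(maxplus(a, s), maxplus(a, t)))
```

Associativity is what makes sequential application of decomposed templates equivalent to applying the original template; distributivity underlies weak (maximum-of-products) decompositions.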
2.5 Strong Decompositions of Templates
A sequence of templates t_1, ..., t_n in (R_{−∞}^X)^X is called a
(strong) decomposition (with respect to the operation "*") of a
template t ∈ (R_{−∞}^X)^X if t can be written in the form
  t = t_1 * t_2 * ... * t_n.
In the special case where n = 2 we speak of a separable template
if the support of t_1 is a one dimensional vertical array
and the support of t_2 is a one dimensional horizontal array.
EXAMPLE. The template r ∈ (R_{−∞}^X)^X given in Fig. 1 represents
a separable template since this template decomposes
into a vertical strip template s ∈ (R_{−∞}^X)^X and a horizontal
strip template t ∈ (R_{−∞}^X)^X.
Fig. 3. Pictorial representation of a column template s and a row template
t.
2.6 Weak Decompositions of Templates
A sequence of templates t_1, ..., t_k in (R_{−∞}^X)^X together
with a strictly increasing sequence of natural numbers
n_1 < n_2 < ... < n_r = k is called a (weak) decomposition (with respect to the
operation "*") of a template t ∈ (R_{−∞}^X)^X if the template t can
be represented as follows:
  t = (t_1 * ... * t_{n_1}) ∨ (t_{n_1+1} * ... * t_{n_2}) ∨ ... ∨ (t_{n_{r−1}+1} * ... * t_{n_r}).
We say (s_1, ..., s_r) is a weak decomposition of a rectangular
template t ∈ (R_{−∞}^X)^X into separable templates if each s_i,
i = 1, ..., r, is separable and t = s_1 ∨ s_2 ∨ ... ∨ s_r.
2.7 Correspondence Between Rectangular
Templates and Matrices
Note that there is a natural bijection φ from the space of all
m × n matrices over R_{−∞} into the space of all rectangular
templates in (R_{−∞}^X)^X whose supports are m × n arrays centered
at the target pixel. Let A = (a_{ij}) ∈ R_{−∞}^{m×n} be arbitrary.
The image of the matrix A under φ is defined to be
the template t = φ(A) ∈ (R_{−∞}^X)^X which satisfies, for each y ∈ X,
t_y(x) = a_{ij} whenever x is the point of the centered m × n array
around y indexed by (i, j), and t_y(x) = −∞ otherwise.
Henceforth, we restrict our attention to rectangular templates
whose target pixel is centered, i.e., rectangular templates
of the above form.
The theory of minimax algebra [25] examines the algebraic
structures arising from the lattice operations
"maximum," "minimum," and "addition," including the
space of all matrices over R_{−∞} together with the operation
additive maximum. The natural correspondence between
rectangular templates in (R_{−∞}^X)^X and matrices over R_{−∞}
allows us to use a minimax algebra approach in order to
study the weak decomposability of rectangular templates
into separable templates.
EXAMPLE. Let A ∈ R^{3×3} be the matrix and u, v the vectors
given in (17).
The function φ maps A to the square template
r = φ(A) ∈ (R_{−∞}^X)^X in Fig. 1, and it maps the column vector u
to the column template s ∈ (R_{−∞}^X)^X and the row vector
v to the row template t ∈ (R_{−∞}^X)^X in Fig. 3.
In this section, we develop a new notion of matrix rank
within the mathematical framework of minimax algebra.
We relate this concept of matrix rank to the one given by
Cuninghame-Green [25] and derive the notion of the rank
of a morphological template.
3.1 Algebraic Structures and Operations
in Minimax Algebra
The mathematical theory of minimax algebra deals with
algebraic structures such as bands, belts, and blogs. For
example, R_{−∞} together with the operations of maximum
("∨") and addition forms a belt. Cuninghame-Green defines
the matrix rank for matrices over certain subsets of the blog.
For our purposes it suffices to consider R, the finite
elements of R_{−∞}.
Operations such as the maximum ("∨"), the minimum
("∧"), and the addition on R induce entrywise operations
on R^{m×n}, the set of all m × n matrices over R. Minimax algebra
also defines compound operations such as "*" (pronounced
"additive maximum") from R^{m×k} × R^{k×n} into
R^{m×n}, an operation similar to the regular matrix product
known from linear algebra. (An obvious dual of this operation
is provided by the "additive minimum" operation.)
Given matrices A ∈ R^{m×k} and B ∈ R^{k×n}, the additive maximum
C = A * B ∈ R^{m×n} is determined by
  c_{ij} = ∨_{l=1}^{k} (a_{il} + b_{lj}).
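A direct transcription of this definition is below; the example also shows that for a column vector u and a row vector v, the additive maximum u * v is just the outer sum u_i + v_j, i.e., a separable matrix (names are illustrative):

```python
import numpy as np

def add_max(A, B):
    """C = A * B with c_ij = max_l (a_il + b_lj)."""
    m, k = A.shape
    k2, n = B.shape
    assert k == k2
    C = np.empty((m, n))
    for i in range(m):
        for j in range(n):
            C[i, j] = max(A[i, l] + B[l, j] for l in range(k))
    return C

u = np.array([[0.0], [2.0], [5.0]])     # column vector in R^{3x1}
v = np.array([[1.0, -1.0, 3.0]])        # row vector in R^{1x3}
C = add_max(u, v)                        # separable: c_ij = u_i + v_j
```

When the inner dimension k is 1, the maximum ranges over a single term, which is why u * v collapses to the outer sum and yields a matrix of separable rank one.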
If A is a matrix in R^{m×n} and if u_i are column vectors in R^{m×1}
and v_i are row vectors in R^{1×n} for i = 1, ..., k, then the following
equivalence holds for the corresponding rectangular
template φ(A), the vertical strip templates φ(u_i), and the
horizontal strip templates φ(v_i):
  A = ∨_{i=1}^{k} (u_i * v_i)  if and only if  φ(A) = ∨_{i=1}^{k} (φ(u_i) * φ(v_i)).
3.2 Linear Dependence of Vectors
A vector v ∈ R^n is said to be linearly dependent on the vectors
v_1, ..., v_k ∈ R^n if and only if there exist scalars c_1, ..., c_k ∈ R such that
  v = ∨_{i=1}^{k} (c_i + v_i).
Otherwise, the vector v ∈ R^n is called linearly independent
from the vectors v_1, ..., v_k ∈ R^n. The vectors v_1, ..., v_k ∈ R^n
are linearly independent if each one of them is linearly independent
from the others.
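Linear dependence in this max-plus sense can be tested constructively: the largest admissible scalars are c_i = min_j (v_j − (v_i)_j), and v is dependent if and only if these already reproduce v. This residuation test is standard in minimax algebra; a sketch on plain Python lists:

```python
def is_dependent(v, vs):
    """Test whether v = max_i (c_i + vs[i]) entrywise for some scalars c_i,
    using the largest admissible scalars c_i = min_j (v[j] - vs[i][j])."""
    n = len(v)
    cs = [min(v[j] - vi[j] for j in range(n)) for vi in vs]
    recon = [max(c + vi[j] for c, vi in zip(cs, vs)) for j in range(n)]
    return recon == list(v)

v1 = [0, 1, 2]
v2 = [2, 1, 0]
dep = is_dependent([2, 3, 4], [v1, v2])    # [2, 3, 4] = 2 + v1, so dependent
indep = is_dependent([0, 2, 1], [v1, v2])  # no scalars work here
```

The chosen c_i are maximal with c_i + v_i ≤ v entrywise, so if any combination attains v, this one does; that is what makes the test decisive.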
EXAMPLE. Consider vectors v, v_1, v_2 ∈ R^3 with
v = (c_1 + v_1) ∨ (c_2 + v_2) for suitable scalars c_1, c_2 ∈ R;
then the vector v is linearly dependent on v_1 and v_2.
3.3 Strong Linear Independence
Vectors v_1, ..., v_k ∈ R^n are called strongly linearly independent
(SLI) if and only if there exists a vector v ∈ R^n such that
v has a unique representation
  v = ∨_{i=1}^{k} (c_i + v_i),  c_1, ..., c_k ∈ R.
Since this definition does not provide a suitable criterion
for testing a collection of vectors in R^n for strong linear independence,
we choose to provide an alternative equivalent
definition based on the following theorem.
THEOREM 1. Vectors v_1, ..., v_k ∈ R^n are SLI if and only if an
explicit system of inequalities in the vectors v_1, ..., v_k holds,
where the auxiliary vector ~v is formed from any vector v ∈ R^n.
COROLLARY. There are k vectors v_1, ..., v_k ∈ R^n which are SLI if
and only if k ∈ {1, ..., n}.
3.4 Rank of a Matrix
Cuninghame-Green defines the rank of a matrix A ∈ R^{m×n} as
the maximal number of SLI row vectors or, equivalently,
the maximal number of SLI column vectors. The rank of a
finite matrix A ∈ R^{m×n} is therefore less than or equal to min{m, n}.
3.5 Remarks on the Rank of a Matrix
in Minimax Algebra
The notion of (regular) linear independence is not suited to
define a rank in minimax algebra because certain dimensional
abnormalities would occur. For example, it is possible
to find k linearly independent vectors in R n for any
number k ∈ N. The notion of strong linear independence
gives rise to a satisfactory theory of rank and dimension
(although certain equivalences known from linear algebra
do not hold).
3.6 The Separable Rank of a Matrix
The separable rank of a matrix A ∈ R^{m×n} is denoted by
rank_sep(A) and is defined as the minimal number r of column
vectors u_1, ..., u_r ∈ R^{m×1} and row vectors v_1, ..., v_r ∈ R^{1×n}
which permit a representation of A in the following form:
  A = ∨_{i=1}^{r} (u_i * v_i).
A representation of this form is called a (weak) separable decomposition
of A. We say A is a separable matrix (with respect
to the operation *) if rank_sep(A) = 1.
3.7 The Rank of a Rectangular Template
If t = φ(A) for a real-valued matrix A, then we define
the rank of the template t ∈ (R_{−∞}^X)^X as the separable rank of A.
Our interest in the rank of a morphological template is
motivated by the problem of morphological template decomposition,
since the rank of a morphological template
t ∈ (R_{−∞}^X)^X represents the minimal number of separable
templates whose maximum is t or, equivalently, the minimal
number r of column templates r_1, ..., r_r ∈ (R_{−∞}^X)^X and row
templates s_1, ..., s_r ∈ (R_{−∞}^X)^X such that t = ∨_{i=1}^{r} (r_i * s_i).
In this section, we derive some theorems concerning the
separable rank of matrices which translate directly into results
about the rank of rectangular templates. These theorems
greatly simplify the proofs of the decomposition results
which we will present in the next section.
THEOREM 2. If a matrix A ∈ R^{m×n} has a representation

  A = (u1 * v1) ∨ … ∨ (uk * vk)

in terms of column vectors ul ∈ R^{m×1} and row vectors vl ∈ R^{1×n}, then A can be expressed in the following form:

  A = (w1 * v1) ∨ … ∨ (wk * vk),

where wl ∈ R^{m×1} is given by

  wl[i] = min_{j=1,…,n} (a_ij − vl[j]),  i = 1, …, m.   (26)
REMARK. Theorem 2 implies that, for any matrix A ∈ R^{m×n} of separable rank k, it suffices to know the row vectors v1, …, vk which permit a weak decomposition of A into k separable matrices in order to determine a representation of A in the form:

  A = (w1 * v1) ∨ … ∨ (wk * vk),  wl ∈ R^{m×1}, l = 1, …, k.   (27)

Like most of the theorems established in this paper, Theorem 2 has an obvious dual in terms of column vectors which we choose to omit.
We are now going to introduce certain transforms which
preserve the separable matrix rank. These transforms are
suited to simplify the task of determining the separable
rank of a given matrix.
4.1 Column Permutations of Matrices
Let A ∈ R^{m×n} and ρ be a permutation of {1, …, n}. The associated column permuted matrix ρ_c(A) of A with respect to ρ is defined as follows:

  ρ_c(A)[i, j] = a_{i,ρ(j)},  i = 1, …, m,  j = 1, …, n.
4.2 Row Permutations
If ρ is a permutation of {1, …, m}, then we define the associated row permuted matrix ρ_r(A) of A ∈ R^{m×n} with respect to ρ as follows:

  ρ_r(A)[i, j] = a_{ρ(i),j},  i = 1, …, m,  j = 1, …, n.
The multiplication of a matrix A ∈ R^{m×n} by a scalar c ∈ R is defined as usual. In this case, −A stands for (−1) · A.
THEOREM 3. The following transformations preserve the separable rank of a matrix A ∈ R^{m×n}:
SUSSNER AND RITTER: DECOMPOSITION OF GRAY-SCALE MORPHOLOGICAL TEMPLATES USING THE RANK METHOD 7
. column and row permutations;
. additions of separable matrices;
. scalar multiplications.
REMARK. Column and row permutations as well as additions
of separable matrices also preserve the rank of a
matrix, as defined by Cuninghame-Green. This invariance
property follows directly from the definition
of this matrix rank as the minimal number of SLI row
vectors or column vectors.
THEOREM 4. The separable rank of a finite matrix A ∈ R^{m×n} is
bounded from below by the rank of A and bounded from
above by the minimal number l of linearly independent row
vectors or column vectors of A.
At this point, we are finally ready to tackle the problem of
determining weak decompositions of matrices in view of
their separable ranks. The reader should bear in mind the
consequences for the corresponding rectangular templates.
For any matrix A ∈ R^{m×n}, we use the notation a(i), i = 1, …, m, to denote the ith row vector of A, and we use the notation a[j], j = 1, …, n, to denote the jth column vector of A.
THEOREM 5 [20]. Let A ∈ R^{m×n} be a separable matrix and {a(i) : i = 1, …, m} be the collection of row vectors of A. For each arbitrary row vector a(i0), there exist scalars λ_i ∈ R, i = 1, …, m, such that the following equations are satisfied:

  a(i) = λ_i * a(i0),  i = 1, …, m.

In other words, given an arbitrary index 1 ≤ i0 ≤ m, each row vector a(i) is linearly dependent on the i0th row vector of A.
Clearly, Li and Ritter's theorem gives rise to the following
straightforward algorithm which tests if a given matrix
over R is separable. In the separable case, the algorithm
computes a vector pair into which the given matrix can be
decomposed.
ALGORITHM 1. Let A ∈ R^{m×n} be given and let a(i) denote the ith row vector of A. The algorithm proceeds as follows for all i = 2, …, m:
1) Subtract a_{1j} from a_{ij} for j = 1, …, n, and set c_i = a_{i1} − a_{11}.
2) Compare c_i with a_{ij} − a_{1j} for j = 2, …, n. If c_i ≠ a_{ij} − a_{1j}, the matrix A is not separable and the algorithm stops.
If step 2) has been successfully completed for all i = 2, …, m, then A is separable and A is given by A = c * a(1), where c ∈ R^{m×1} has entries c_1 = 0 and c_i as above.
Note that this algorithm only involves (m − 1)n subtractions and (m − 1)(n − 1) comparisons. Hence, the number of operations adds up to less than 2(m − 1)n, which implies that the algorithm has order O(mn).
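To make the procedure concrete, here is a minimal Python sketch of the separability test (NumPy; the function name and return convention are ours, and the morphological product of a column and a row vector is encoded as the entrywise outer sum, as is conventional in max-plus arithmetic):

```python
import numpy as np

def is_separable(A):
    """Sketch of Algorithm 1: decide whether A is separable, i.e.,
    whether a_ij = c_i + a_1j for the vector c_i = a_i1 - a_11.
    Returns the column vector c if A is separable, otherwise None."""
    A = np.asarray(A, dtype=float)
    c = A[:, 0] - A[0, 0]              # c_i = a_i1 - a_11, so c_1 = 0
    for i in range(1, A.shape[0]):     # (m-1)n subtractions, (m-1)(n-1) comparisons overall
        if not np.allclose(A[i] - A[0], c[i]):
            return None                # some a_ij - a_1j differs from c_i
    return c

# A separable matrix is an outer sum of a column and a row vector.
A = np.add.outer([0.0, 2.0, -1.0], [1.0, 0.0, 3.0, 5.0])
c = is_separable(A)
assert c is not None and np.allclose(np.add.outer(c, A[0]), A)
assert is_separable([[0.0, 1.0], [1.0, 0.0]]) is None
```

The test runs in the stated O(mn) time because each row is compared against the first row exactly once.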
REMARK. As mentioned earlier, the given image processing hardware often calls for the decomposition of a given template into 3 × 3 templates. The theorem below shows that, in the case of a separable square template, this problem reduces to the problem of decomposing a column template into 3 × 1 templates as well as decomposing a row template into 1 × 3 templates. Suppose that the original template is of size (2n + 1) × (2n + 1). Then (2n + 1)² operations per pixel are needed when applying this template to an image. If the template decomposes into n 3 × 3 templates, this number of operations reduces to 9n. However, the simple strong decomposition of the original separable template into a row and a column template of length 2n + 1 is preferable, especially when using a sequential machine, since only about 4n operations per pixel are required when taking advantage of this decomposition.
THEOREM 6. Let t be a square morphological template of rank 1, given by t = r * s, where r is a column template and s is a row template. The template t is decomposable into 3 × 3 templates if and only if r is decomposable into 3 × 1 templates and s is decomposable into 1 × 3 templates.
EXAMPLE. Let A be a real valued 5 × 5 matrix given by A = u * v, where u ∈ R^{5×1} is a column vector and v ∈ R^{1×5} is a row vector. The template t = f(A) is not decomposable into two 3 × 3 templates, since the template r = f(u) is not decomposable into two 3 × 1 templates.
REMARK. Of course, Theorem 6 does not preclude the existence of templates t of rank ≥ 2 which are strongly decomposable into 3 × 3 templates.
EXAMPLE. The following template t of rank > 1 can be written as a *-product of two 3 × 3 templates t1 and t2. See Fig. 4 and Fig. 5.
COROLLARY. Let t be a square morphological template of rank k, given by t = (r1 * s1) ∨ … ∨ (rk * sk), where r1, …, rk are column templates and s1, …, sk are row templates. If the templates r1, …, rk are decomposable into 3 × 1 templates and the templates s1, …, sk are decomposable into 1 × 3 templates, then t has a representation t = t1 ∨ … ∨ tk in terms of templates t1, …, tk which are strongly decomposable into 3 × 3 templates.
8 IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL. 19, NO. 6, JUNE 1997
Fig. 4. Example of a 5 × 5 template t of rank > 1 which is decomposable into 3 × 3 templates t1 and t2.
Fig. 5. Templates t1 and t2 which decompose the template t of Fig. 4.
LEMMA 1. If a matrix A ∈ R^{m×n} has separable rank two, then there exists a transform T, consisting of only row permutations, column permutations, and additions of row or column vectors, as well as vectors u ∈ R^{m×1} and v ∈ R^{1×n}, such that T(A) can be written in the following form:

  T(A) = 0_{m×n} ∨ (u * v),

where 0_{m×n} denotes the m × n zero matrix.
LEMMA 2. Let T be a (separable) rank preserving transform as in Theorem 3 and A ∈ R^{m×n}. The transform T maps row vectors of A to row vectors of T(A), and column vectors of A to column vectors of T(A).
THEOREM 7. A matrix A ∈ R^{m×n} has separable rank two if and only if there are two row vectors of A on which all other row vectors depend (linearly).
A similar theorem does not hold for matrices of separable rank k ≥ 3. This fact is expressed by Theorem 8.
THEOREM 8. For every natural number k ≥ 3, there are matrices over R_{−∞} which are weakly *-decomposable into a maximum of k vector-pair products, but not all of whose row vectors are linearly dependent on a single k-tuple of their row vectors.
REMARK. By Theorem 7, a matrix A ∈ R^{m×n} has separable rank two if and only if there exist two row vectors of A, say a(o) and a(p) with o, p ∈ {1, …, m}, which allow for a weak decomposition of A. In this case, an application of Theorem 2 yields the following representation of A:

  A = (u * a(o)) ∨ (v * a(p)),   (35)

where

  u[i] = min_{s=1,…,n} (a_is − a(o)[s]),  v[i] = min_{s=1,…,n} (a_is − a(p)[s]),  i = 1, …, m.   (36)

Hence, in order to test an arbitrary matrix A ∈ R^{m×n} for weak decomposability into two vector pairs, it is enough to compare A with (b[i] * a(i)) ∨ (b[j] * a(j)) for all indices i, j ∈ {1, …, m}. Here B ∈ R^{m×m} is computed as follows:

  b_ij = min_{s=1,…,n} (a_is − a_js),  i, j = 1, …, m.   (37)
ALGORITHM 2. Assume a matrix A ∈ R^{m×n} needs to be decomposed into a weak *-product of two vector pairs if such a decomposition is possible. Considering the preceding remarks, we are able to give a polynomial time algorithm for solving this problem. For each step, we include the number of operations involved in square brackets.
1: Compute the matrix B ∈ R^{m×m} according to (37). [m²(2n − 1) operations]
2: Compute C_i = b[i] * a(i) for i = 1, …, m. [m²n operations]
3: Form C_i ∨ C_j and compare the result with A for all pairs of indices i, j. If C_i ∨ C_j = A for some pair i, j, then the algorithm stops yielding the weak decomposition A = (b[i] * a(i)) ∨ (b[j] * a(j)). If C_i ∨ C_j ≠ A for all i, j ≤ m, then the algorithm stops with the result that A does not have a weak decomposition into two vector pairs. [At most 2·C(m, 2)·mn comparisons]
This algorithm involves at most a total number of m²(3n − 1) + m²(m − 1)n operations, which amounts to order O(m³n).
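Algorithm 2 admits an equally direct sketch in Python (NumPy; the interface is ours, with the *-product of a column and a row vector encoded as the outer sum and the maximum ∨ taken entrywise):

```python
import numpy as np

def weak_rank2_decomposition(A):
    """Sketch of Algorithm 2.  Returns a pair (i, j) and the matrix B of (37)
    such that A equals max(outer(B[:, i], a(i)), outer(B[:, j], a(j))),
    or None if A has no weak decomposition into two vector pairs."""
    A = np.asarray(A, dtype=float)
    m = A.shape[0]
    # Step 1: b_ij = min_s (a_is - a_js), cf. (37).
    B = np.min(A[:, None, :] - A[None, :, :], axis=2)
    # Step 2: C_i = b[i] * a(i), the best separable lower bound built from row i.
    C = [np.add.outer(B[:, i], A[i]) for i in range(m)]
    # Step 3: compare C_i v C_j with A for all index pairs.
    for i in range(m):
        for j in range(i, m):
            if np.allclose(np.maximum(C[i], C[j]), A):
                return (i, j), B
    return None

# The maximum of two outer sums has separable rank at most two ...
u, v = [0.0, 1.0, 3.0], [2.0, 0.0, 1.0, 4.0]
p, q = [1.0, 4.0, 0.0], [0.0, 3.0, 2.0, 1.0]
A = np.maximum(np.add.outer(u, v), np.add.outer(p, q))
assert weak_rank2_decomposition(A) is not None
# ... while a max-plus "identity" needs more than two vector pairs.
assert weak_rank2_decomposition(10.0 * np.eye(3) - 10.0) is None
```

The vectorized computation of B performs the m²n subtractions of step 1 in one pass.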
EXAMPLE. Let us apply Algorithm 2 to a matrix A ∈ R^{4×5}. Following the steps of the algorithm, we compute the matrix B ∈ R^{4×4} and the matrices C_i = b[i] * a(i) ∈ R^{4×5} for all i = 1, …, 4, and then compare the matrices C_i ∨ C_j with A for all pairs of indices i, j.
EXAMPLE. Let A ∈ R^{9×9} be the matrix which constitutes the maximum of a matrix in pyramid form and a matrix in paraboloid form. Since matrices in paraboloid and in pyramid form are separable, Algorithm 2 should yield a weak decomposition of A in the form (u * a(o)) ∨ (v * a(p)) for some vectors u, v ∈ R^9 and some indices o, p ∈ {1, …, 9}. Indeed, if B ∈ R^{9×9} denotes the matrix computed by Algorithm 2, then

  A = (b[1] * a(1)) ∨ (b[3] * a(3)).   (44)
This weak decomposition of A can be used to further decompose the square templates f(b[1] * a(1)) and f(b[3] * a(3)) into 3 × 3 templates. By Theorem 6, the templates f(b[1] * a(1)) and f(b[3] * a(3)) are decomposable into 3 × 3 templates if and only if the column templates f(b[1]) and f(b[3]) are decomposable into 3 × 1 templates and the row templates f(a(1)) and f(a(3)) are decomposable into 1 × 3 templates. It is fairly easy to choose 3 × 1 templates r_i and 1 × 3 templates s_i, i = 1, …, 4, such that f(b[1]) = r1 * r2 * r3 * r4 and f(a(1)) = s1 * s2 * s3 * s4. For more complicated examples, we recommend using one of the integer programming approaches suggested in [17], [18], [19]. See Fig. 6 and Fig. 7. Hence, we obtain a representation of f(b[1] * a(1)) in the form of (45). By rearranging the templates r_i and s_i for i = 1, …, 4, we can achieve a decomposition of f(b[1] * a(1)) into four 3 × 3 templates.
Fig. 6. Templates of size 3 × 1 and size 1 × 3 providing a decomposition of the template f(b[1] * a(1)).
Fig. 7. The 3 × 3 templates providing a decomposition of the template f(b[1] * a(1)).
In a similar fashion, we are able to decompose the template f(b[3] * a(3)) into four 3 × 3 templates.
REMARK. The methods for decomposing rectangular morphological
templates presented in this paper can be
easily generalized to include arbitrary invariant morphological
templates which correspond to matrices
over R_{−∞}.
6 CONCLUSIONS
We introduced the new theory of the separable matrix rank
within minimax algebra, which we compared to the theory
of matrix rank provided by Cuninghame-Green. The definition
of the separable rank of a matrix leads to the concept
of the rank of a rectangular morphological template, a notion
which has significance for the problem of morphological
template decomposition.
Using this terminology, the class of separable templates
represents the class of templates of rank one. A separable
template can be strongly decomposed into a product of a
column template and a row template. Generalizing this
decomposition of separable templates, we developed a
polynomial time algorithm for the weak decomposition of a
rectangular template of rank two into horizontal and vertical
strip templates. We are currently working on an improved
version of this algorithm.
In an upcoming paper, we will show that determining
the rank of an arbitrary rectangular template is an NP-complete
problem, and we will discuss the consequences for
morphological template decomposition problems in gen-
eral. Moreover, we will present a heuristic algorithm for
solving the rank problem and for finding an optimal weak
decomposition into strip templates.
ACKNOWLEDGMENT
This research was partially supported by U.S. Air Force
Contract F08635-89-C-0134.
References
"Biomedical Image Processing,"
"Parallel 2-D Convolution on a Mesh Connected Array Processor,"
"Morphological Structuring Element Decomposition,"
"Decomposition of Convex Polygonal Morphological Structuring Elements Into Neighborhood Subsets,"
"Optimal Decomposition of Convex Morphological Structuring Elements for 4-Connected Parallel Array Processors,"
available via anonymous ftp from ftp.
"Recent Developments in
"Necessary and Sufficient Conditions for the Existence of Local Matrix Decompositions,"
"Classification of Lattice Transformations in Image Processing,"
"The P-Product and Its Applications in Signal Processing,"
"Decomposition Methods for Convolution Operators,"
"Morphological Template Decomposition With Max- Polynomials,"
Maxpolynomials and Morphological Template Decomposition
"Nonlinear Matrix Decompositions and an Application to Parallel Processing,"
"Decomposition Techniques for Gray-Scale Morphological Templates,"
"Local Decomposition of Gray-Scale Morphological Templates,"
"Global Optimization Problems in Computer Vision,"
"Decomposition of Separable and Symmetric Convex Templates,"
"Separable Decompositions and Approximations for Gray-Scale Morphological Templates,"
"Some Algorithms for Approximating Convolu- tions,"
"Proofs of Decomposition Results of Gray-Scale Morphological Templates Using the Rank Method,"
"Heterogeneous Algebras,"
Lecture Notes in Economics and Mathematical Systems
Keywords: structuring element; morphology; morphological template; template rank; convolution; template decomposition
The Matrix Sign Function Method and the Computation of Invariant Subspaces

Abstract. A perturbation analysis shows that if a numerically stable procedure is used to compute the matrix sign function, then it is competitive with conventional methods for computing invariant subspaces. Stability analysis of the Newton iteration improves an earlier result of Byers and confirms that ill-conditioned iterates may cause numerical instability. Numerical examples demonstrate the theoretical results.

1. Introduction
If A ∈ R^{n×n} has no eigenvalue on the imaginary axis, then the matrix sign function sign(A) may be defined as

  sign(A) = 1/(πi) ∫_γ (zI − A)⁻¹ dz − I,   (1)

where γ is any simple closed curve in the complex plane enclosing all eigenvalues of A
with positive real part. The sign function is used to compute eigenvalues and invariant
subspaces [2, 4, 6, 13, 14] and to solve Riccati and Sylvester equations [9, 15, 16, 28]. The
matrix sign function is attractive for machine computation, because it can be efficiently
evaluated by relatively simple numerical methods. Some of these are surveyed in [28].
It is particularly attractive for large dense problems to be solved on computers with
advanced architectures [2, 11, 16, 33].
Beavers and Denman use the following equivalent definition [6, 13]. Let A = XJX⁻¹ be the Jordan canonical decomposition of a matrix A having no eigenvalues on the imaginary axis. Let the diagonal part of J be given by the matrix D = diag(d1, d2, …, dn); then

  sign(A) = X diag(sign(Re d1), …, sign(Re dn)) X⁻¹.
Let V+ = V+(A) be the invariant subspace of A corresponding to eigenvalues with positive real part, let V− = V−(A) be the invariant subspace of A corresponding to eigenvalues with negative real part, let P+ = P+(A) be the skew projection onto V+ parallel to V−, and let P− = P−(A) be the skew projection onto V− parallel to V+. Using the same contour γ as in (1), the projection P+ has the resolvent integral representation [23, Page 67], [2]

  P+ = 1/(2πi) ∫_γ (zI − A)⁻¹ dz.   (2)
To appear in SIAM Journal on Matrix Analysis and Its Applications.
† University of Kansas, Dept. of Mathematics, Lawrence, Kansas 66045, USA. Partial support was received from National Science Foundation grants INT-8922444 and CCR-9404425 and University of Kansas GRF Allocation 3514-20-0038.
‡ TU Chemnitz-Zwickau, Fak. f. Mathematik, D-09107 Chemnitz, FRG. Partial support was received from Deutsche Forschungsgemeinschaft, Projekt La 767/3-2.
It follows from (1) and (2) that sign(A) = 2P+ − I = P+ − P−.
The matrix sign function was introduced using definition (1) by Roberts in a 1971
technical report [34] which was not published until 1980 [35]. Kato [23, Page 67] reports
that the resolvent integral (2) goes back to 1946 [12] and 1949 [21, 22].
There is some concern about the numerical stability of numerical methods based
upon the matrix sign function [2, 8, 19]. In this paper, we demonstrate that evaluating
the matrix sign function is a more ill-conditioned computational problem than the
problem of finding bases of the invariant subspaces V+ and V−; see Section 3. Nevertheless, we also give perturbation and error analyses, which show
that (at least for Newton's method for the computation of the matrix sign function
[8, 9]) in most circumstances the accuracy is competitive with conventional methods
for computing invariant subspaces. Our analysis improves some of the perturbation
bounds in [2, 8, 18, 24].
In Section 2 we establish some notation and clarify the relationship between the
matrix sign function and the Schur decomposition. The next two sections give a perturbation
analysis of the matrix sign function and its invariant subspaces. Section 5 gives
a posteriori bounds on the forward and backward error associated with a corrupted
value of sign(S). Section 6 is a stability analysis of the Newton iteration.
Throughout the paper, ‖·‖ denotes the spectral norm, ‖·‖₁ the 1-norm (or column sum norm), and ‖·‖_F the Frobenius norm ‖A‖_F = (Σ_{i,j} a_ij²)^{1/2}. The set of eigenvalues of a matrix A is denoted by λ(A). The open left half plane is denoted by C⁻ and the open right half plane is denoted by C⁺. Borrowing some terminology from engineering, we refer to the invariant subspace V− = V−(A) of a matrix A ∈ R^{n×n} corresponding to eigenvalues in C⁻ as the stable invariant subspace and the subspace V+ = V+(A) corresponding to eigenvalues in C⁺ as the unstable invariant subspace. We use P+ = P+(A) for the skew projection onto V+ parallel to V− and P− = P−(A) for the skew projection onto V− parallel to V+.
2. Relationship with the Schur Decomposition. Suppose that A has the Schur form

  Q^H A Q = [ A11  A12 ; 0  A22 ],   (3)

where the k × k block A11 satisfies λ(A11) ⊂ C⁻ and A22 satisfies λ(A22) ⊂ C⁺. If Y is a solution of the Sylvester equation

  A11 Y − Y A22 + 2A12 = 0,   (4)

then

  sign(A) = Q [ −I  Y ; 0  I ] Q^H   (5)

and

  P− = Q [ I  −Y/2 ; 0  0 ] Q^H,   P+ = Q [ 0  Y/2 ; 0  I ] Q^H.

The solution of (4) has the integral representation

  Y = 1/(πi) ∫_γ (zI − A11)⁻¹ A12 (zI − A22)⁻¹ dz,   (6)

where γ is a closed contour containing all eigenvalues of A with positive real part (cf. [29, 36]).

The stable invariant subspace of A is the range (or column space) of P−. If P−Π = QR is a QR factorization with column pivoting [1, 17], where Q = [Q1, Q2] and R are partitioned in the obvious way, then the columns of Q1 form an orthonormal basis of this subspace. Here Q is orthogonal, Π is a permutation matrix, R is upper triangular, and R1 is nonsingular.

It is not difficult to use the singular value decomposition of Y to show that

  ‖P+‖ = ‖P−‖ = (1 + ‖Y‖²/4)^{1/2}.   (7)

It follows from (4) that

  ‖Y‖ ≤ 2‖A12‖ / sep(A11, A22),   (8)

where sep is defined as in [17] by

  sep(A11, A22) = min_{X ≠ 0} ‖A11 X − X A22‖_F / ‖X‖_F.
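Relations (3)-(5) can be turned into a small computation. The following Python sketch uses SciPy's reordered Schur decomposition and Sylvester solver (function and variable names are ours); it evaluates sign(A) and checks that the result is an involution that commutes with A:

```python
import numpy as np
from scipy.linalg import schur, solve_sylvester

def sign_via_schur(A):
    """Compute sign(A) through (3)-(5): reorder the real Schur form so the
    stable eigenvalues come first, solve the Sylvester equation
    A11 Y - Y A22 + 2 A12 = 0, and form Q [[-I, Y], [0, I]] Q^T."""
    T, Q, k = schur(A, sort='lhp')               # k = dimension of the stable subspace
    A11, A12, A22 = T[:k, :k], T[:k, k:], T[k:, k:]
    Y = solve_sylvester(A11, -A22, -2.0 * A12)   # solves A11 Y - Y A22 = -2 A12
    n2 = len(A) - k
    S = np.block([[-np.eye(k), Y], [np.zeros((n2, k)), np.eye(n2)]])
    return Q @ S @ Q.T

rng = np.random.default_rng(0)
A = np.diag([-1.0, -2.0, 3.0, 4.0]) + 0.1 * rng.standard_normal((4, 4))
S = sign_via_schur(A)
assert np.allclose(S @ S, np.eye(4))    # sign(A) is an involution
assert np.allclose(S @ A, A @ S)        # and commutes with A
```

The commutation check holds to solver accuracy precisely because Y satisfies the Sylvester equation (4).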
3. The Effect of Backward Errors. In this section we discuss the sensitivity
of the matrix sign function subject to perturbations. For a perturbation matrix E, we
give first order estimates for sign(A + E) in terms of submatrices and powers of ‖E‖.
Based on Fréchet derivatives, Kenney and Laub [24] presented a first order perturbation theory for the matrix sign function via the solution of a Sylvester equation. Mathias [30] derived an expression for the Fréchet derivative using the Schur decomposition. Kato's encyclopedic monograph [23] includes an extensive study of series representations and of perturbation bounds for eigenprojections. In this section we derive an expression for the Fréchet derivative using integral formulas.
Let

  d_A = min_{ω ∈ R} σ_min(A − ωiI),

where σ_min(A − ωiI) is the smallest singular value of A − ωiI. The quantity d_A is the distance from A to the nearest complex matrix with an eigenvalue on the imaginary axis. Practical numerical techniques for calculating d_A appear in [7, 10]. If ‖E‖ < d_A, then E is too small to perturb an eigenvalue of A on or across the imaginary axis. It follows that for ‖E‖ < d_A, sign(A + E), and the stable and unstable invariant subspaces of A + E, are smooth functions of E.
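A crude way to estimate d_A, far less sophisticated than the techniques of [7, 10], is to sample σ_min(A − ωiI) on a finite grid of frequencies ω; the following Python sketch (ours) yields only an upper bound on d_A, since the grid minimum cannot undercut the true minimum:

```python
import numpy as np

def distance_to_imaginary_axis(A, omegas):
    """Naive grid estimate of d_A = min over real omega of
    sigma_min(A - omega*i*I).  On a finite grid this is an upper bound."""
    A = np.asarray(A, dtype=complex)
    n = len(A)
    return min(np.linalg.svd(A - 1j * w * np.eye(n), compute_uv=False)[-1]
               for w in omegas)

A = np.array([[-0.5, 1.0], [0.0, 2.0]])
grid = np.linspace(-10.0, 10.0, 2001)
d = distance_to_imaginary_axis(A, grid)
# Moving the eigenvalue -0.5 onto the axis costs at most 0.5, and A has no
# imaginary-axis eigenvalue, so the estimate is positive.
assert 0.0 < d <= 0.5
```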
Consider the relatively simple case in which A is block diagonal.
LEMMA 3.1. Suppose A is block diagonal,

  A = [ A11  0 ; 0  A22 ],

where λ(A11) ⊂ C⁻ and λ(A22) ⊂ C⁺. Partition the perturbation E ∈ R^{n×n} conformally with A as

  E = [ E11  E12 ; E21  E22 ].

If ‖E‖ < d_A, then sign(A + E) = sign(A) + F + O(‖E‖²), where

  F = [ 0  F12 ; F21  0 ]

and F12 and F21 satisfy the Sylvester equations

  A11 F12 − F12 A22 = −2E12,   (11)
  A22 F21 − F21 A11 = 2E21.   (12)
Proof. The hypothesis that ‖E‖ < d_A implies that no eigenvalue of A + E lies on the imaginary axis; the eigenvalues of A11 have negative real part and the eigenvalues of A22 have positive real part. In the definition (1), choose the contour γ to enclose the eigenvalues with positive real part of both A and A + E, and to enclose neither λ(A11) nor the corresponding perturbed eigenvalues. In particular, for all complex numbers z lying on the contour γ, (zI − A) and (zI − A − E) are nonsingular and

  sign(A + E) = 1/(πi) ∫_γ (zI − A − E)⁻¹ dz − I
             = sign(A) + 1/(πi) ∫_γ (zI − A)⁻¹ E (zI − A)⁻¹ dz + O(‖E‖²)
             = sign(A) + F + O(‖E‖²).

Partitioning F conformally with E and A, then we have

  F11 = 1/(πi) ∫_γ (zI − A11)⁻¹ E11 (zI − A11)⁻¹ dz,
  F12 = 1/(πi) ∫_γ (zI − A11)⁻¹ E12 (zI − A22)⁻¹ dz,
  F21 = 1/(πi) ∫_γ (zI − A22)⁻¹ E21 (zI − A11)⁻¹ dz,
  F22 = 1/(πi) ∫_γ (zI − A22)⁻¹ E22 (zI − A22)⁻¹ dz.

As in (6), F12 and F21 are the solutions to the Sylvester equations (11) and (12) [29, 36]. The contour γ encloses no eigenvalues of A11, so (zI − A11)⁻¹ is holomorphic inside γ and F11 = 0.

We first prove that F22 = 0 in the case that A22 is diagonalizable, say A22 = XDX⁻¹, where D = diag(d1, d2, …). Then

  F22 = 1/(πi) X [ ∫_γ (zI − D)⁻¹ X⁻¹E22X (zI − D)⁻¹ dz ] X⁻¹.

Each component of the above integral is of the form ∫_γ c/((z − d_j)(z − d_k)) dz for some constant c. If j = k, then this is the integral of a residue free function and hence it vanishes. If j ≠ k, then

  ∫_γ c/((z − d_j)(z − d_k)) dz = c/(d_j − d_k) ∫_γ [1/(z − d_j) − 1/(z − d_k)] dz = c/(d_j − d_k) (2πi − 2πi) = 0.

The general case follows by taking limits of the diagonalizable case and using the dominated convergence theorem.
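Lemma 3.1 can be checked numerically: for a block diagonal A and a small perturbation E, the off-diagonal blocks of sign(A + E) − sign(A) should agree with the solutions of the Sylvester equations (11) and (12) up to O(‖E‖²), while the diagonal blocks should be O(‖E‖²). A Python sketch (SciPy; the plain Newton iteration below is used only as a convenient way to evaluate the sign function):

```python
import numpy as np
from scipy.linalg import solve_sylvester

def matrix_sign(A, steps=60):
    # plain Newton iteration X <- (X + X^{-1})/2, converging to sign(A)
    X = np.asarray(A, dtype=float)
    for _ in range(steps):
        X = 0.5 * (X + np.linalg.inv(X))
    return X

A11 = np.array([[-1.0, 0.3], [0.0, -2.0]])   # eigenvalues in the open left half plane
A22 = np.array([[1.0, 0.0], [0.2, 3.0]])     # eigenvalues in the open right half plane
A = np.block([[A11, np.zeros((2, 2))], [np.zeros((2, 2)), A22]])
rng = np.random.default_rng(1)
E = 1e-6 * rng.standard_normal((4, 4))
D = matrix_sign(A + E) - matrix_sign(A)      # = F + O(||E||^2) by Lemma 3.1
F12 = solve_sylvester(A11, -A22, -2.0 * E[:2, 2:])   # (11): A11 F12 - F12 A22 = -2 E12
F21 = solve_sylvester(A22, -A11, 2.0 * E[2:, :2])    # (12): A22 F21 - F21 A11 = 2 E21
assert np.allclose(D[:2, 2:], F12, atol=1e-9)
assert np.allclose(D[2:, :2], F21, atol=1e-9)
assert np.max(np.abs(D[:2, :2])) < 1e-9 and np.max(np.abs(D[2:, 2:])) < 1e-9
```

The factor 2 in the right-hand sides is essential; without it the predicted blocks would be half the observed perturbation.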
The following theorem gives the general case.
Theorem 3.2. Let the Schur form of A be given by (3) and partition E conformally
as
and Y satisfies (4), then
e
~
~
satisfies the Sylvester equation
A 22
~
and ~
~
~
Proof. If
A 11 A 12
and
It follows from Lemma 3.1 that
I
~
on the left side and SQ H on the
right side of the above equation, we have
Y ~
It is easy to verify that
Y ~
\GammaI Y
I
~
\GammaI Y
I
The theorem follows from
\GammaI Y
If d_A is small relative to ‖A‖ or ‖Y‖ is large relative to ‖A‖, then the hypothesis that ‖SES‖ < d_A may be restrictive. However, a small value of d_A indicates that A is very near a discontinuity in sign(A). A large value of ‖Y‖ indicates that sep(A11, A22) is small and the stable invariant subspace is ill-conditioned [37, 39].
Of course Theorem 3.2 also gives first order perturbations for the projections P+ and P−.
COROLLARY 3.3. Let the Schur form of A be given as in (3) and let E be as in (13). Under the hypothesis of Theorem 3.2, the projections satisfy

  P±(A + E) = P±(A) ± ½ Q F~ Q^H + O(‖E‖²),

where F~ is as in the statement of Theorem 3.2.
Taking norms in Theorem 3.2 gives the first order perturbation bounds of the next corollary.
COROLLARY 3.4. Let the Schur form of A be given as in (3), E as in (13), and ‖SES‖ < d_A; then the first order perturbation of the matrix sign function stated in Theorem 3.2 is bounded by

  ‖Q F~ Q^H‖ ≤ 4 (1 + ½‖Y‖)² ‖E‖ / sep(A11, A22).
On first examination, Corollary 3.4 is discouraging. It suggests that calculating the matrix sign function may be more ill-conditioned than finding bases of the stable and unstable invariant subspace. If the matrix A whose Schur decomposition appears in (3) is perturbed to A + E, then the stable invariant subspace, range(Q1), is perturbed to a subspace range(Q1 + Q2W) with ‖W‖ = O(‖E‖ / sep(A11, A22)). Corollary 3.4 and the following example show that sign(A + E) may indeed differ from sign(A) by a factor of δ⁻³, which may be much larger than ‖E‖/δ.
Example 1. Let

  A = [ −δ  1 ; 0  δ ]   and   E = [ 0  0 ; ε  0 ],

where 0 < ε and 0 < δ < 1. The matrix A is already in Schur form, so with Y = 1/δ we have

  sign(A) = [ −1  1/δ ; 0  1 ].

Since (A + E)² = (δ² + ε)I, we also have

  sign(A + E) = (δ² + ε)^{−1/2} [ −δ  1 ; ε  δ ].

The difference is

  sign(A + E) − sign(A) = [ 1 − δ(δ² + ε)^{−1/2}   (δ² + ε)^{−1/2} − 1/δ ; ε(δ² + ε)^{−1/2}   δ(δ² + ε)^{−1/2} − 1 ],

whose (1, 2) entry is approximately −ε/(2δ³). Perturbing A to A + E does indeed perturb the matrix sign function by a factor of δ⁻³.
Of course there is no rounding error in Example 1, so the stable invariant subspace
of A+E is also the stable invariant subspace of sign(A+E) and, in particular, evaluating
exactly has done no more damage than perturbing A. The stable invariant
subspace of A is
"0
); the stable invariant subspace of A
E) is
For a general small perturbation matrix E, the angle between
is of order no larger than O(1=j) [17, 37, 39]. The matrix sign function (and the
projections may be significantly more ill-conditioned than the stable and
unstable invariant subspaces. Nevertheless, we argue in this paper that despite the
possible poor conditioning of the matrix sign function, the invariant subspaces are
usually preserved about as accurately as their native conditioning permits.
However, if the perturbation E is large enough to perturb an eigenvalue across or
on the imaginary axis, then the stable and unstable invariant subspaces may become
confused and cannot be extracted from E). This may occur even when the
invariant subspaces are well-conditioned, since the sign function is not defined in this
case. In geometric terms, in this situation A is within distance kEk of a matrix with
an eigenvalue with zero real part. This is a fundamental difficulty of any method that
identifies the two invariant subspaces by the signs of the real parts of the corresponding
eigenvalues of A.
4. Perturbation Theory for Invariant Subspaces of the Matrix Sign Func-
tion. In this section we discuss the accuracy of the computation of the stable invariant
subspace of A via the matrix sign function.
An easy first observation is that if the computed value of sign(A) is the exact value
of sign(A+E) for some perturbation matrix E, then the exact stable invariant subspace
of E) is also an invariant subspace of A+ E. Let A have Schur form (3) and
let E be a perturbation matrix partitioned conformally as in (13). Let Q 1 be the first
k columns of Q and Q 2 be the remaining columns. If
then A has stable invariant subspace has an invariant
subspace
where ffl 39]. The singular values of W are the tangents of the
canonical angles between In particular, the
canonical angles are at most of order O(1=
Unfortunately, in general, we cannot apply backward error analysis, i.e. we cannot
guarantee that the computed value of sign(A) is exactly the value of E) for
some perturbation E. Consider instead the effect of forward errors, let
where F represents the forward error in evaluating the matrix sign function. Let A have
Schur form (3). Partition Q H sign(A)Q and Q H FQ as
and
where Q is the unitary factor from the Schur decomposition of A (3) and Y is a solution
of (4).
Assume that
and let OE
Perturbing sign(A) to changes the invariant subspace
and by (8) and
s
A
- OE 21
Since obeys the bounds
- 4OE 21
Comparing (19) with (20) we see that the error bound (20) is no greater than twice
the error bound (19). Loosely speaking, a small relative error in sign(A) of size ffl might
perturb the stable invariant subspace by not much more than twice as much as a relative
error of size ffl in A can.
Therefore, the stable and unstable invariant subspaces of sign(A) may be less ill-conditioned
and are never significantly more ill-conditioned than the corresponding
invariant subspaces of A. There is no fundamental numerical instability in evaluating
the matrix sign function as a means of extracting invariant subspaces. However, numerical
methods used to evaluate the matrix sign function may or may not be numerically
unstable.
Example 1 continued. To illustrate the results, we give a comparison of our
perturbation bounds and the bounds given in [3] for both the matrix sign function and
the invariant subspaces in the case of Example 1. The distance to the nearest ill-posed
problem, i.e., is the smallest singular
value of leads to an overestimation of the error in [3]. Since dA - j \Gamma2 , the
bounds given in [3] are, respectively, O(j \Gamma4 ) for the matrix sign function and O(j \Gamma2 )
for the invariant subspaces.
5. A Posteriori Backward and Forward Error Bounds. A priori backward
and forward error bounds for evaluation of the matrix sign function remain elusive.
However, it is not difficult to derive a posteriori error bounds for both backward and
forward error.
We will need the following lemma to estimate the distance between a matrix S and
sign(S).
Lemma 5.1. If S 2 R n\Thetan has no eigenvalue with zero real part and k sign(S)S
Proof. Let S. The matrices F , S, and sign(S) commute, so
This implies that
Taking norms and using kFS
and the lemma follows.
It is clear from the proof of the Lemma 5.1 that
asymptotically correct as k sign(S) \Gamma Sk tends to zero. The bound in the lemma tends
to overestimate smaller values of k sign(S) \Gamma Sk by a factor of two.
Suppose that a numerical procedure for evaluating sign(A) applied to a matrix
A 2 R n\Thetan produces an approximation S 2 R n\Thetan . Consider the problem of finding
small norm solutions E 2 R n\Thetan and F 2 R n\Thetan to Of course, this
does not uniquely determine E and F . Common algorithms for evaluating sign(A) like
Newton's method for the square root of I guarantee that S is very nearly a square root
of I [19], i.e., S is a close approximation of sign(S). In the following theorem, we have
arbitrarily taken
Theorem 5.2. If k sign(S)S
perturbation matrices E and F satisfying
and
Proof. The matrices S +F and A+E commute, so an underdetermined, consistent
system of equations for E in terms of S, A, and
E(S
Let
\GammaI Y
I
be a Schur decomposition of sign(S) whose unitary factor is U and whose triangular
factor is on the right-hand-side of (24). Partition U H EU and U H AU conformally with
the right-hand-side of (24) as
and
A 11 A 12
A 21 A 22
Multiplying (23) on the left by U H and on the right by U and partitioning gives
Y A 21 \GammaA
One of the infinitely many solutions for E is given by
For this choice of E, we have
from which the theorem follows.
Lemma 5.1 and Theorem 5.2 agree well with intuition. In order to assure small
forward error, S must be a good approximate square root of I and, in addition, to assure
small backward error, sign(S) must nearly commute with the original data matrix A.
Newton's method for a root of I tends to do a good job of both [19]. (Note
that in general, Newton's method makes a poor algorithm to find a square root of a
matrix. The square root of I is a special case. See [19] for details.) In particular, the
hypothesis that k sign(S)S usually satisfied when the matrix sign function
is computed by the Newton algorithm.
When S - sign(S), the quantity k
S+S \Gamma1j
S+S \Gamma1j
k makes a good estimate
of the right-hand-side of (21). The bound (22) is easily computed or estimated from
the known values of A and S. However, these expressions are prone to subtractive
cancellation of significant digits.
The quantity kE 21 k is related by (18) to perturbations in the stable invariant sub-
space. The bounds (21) and (22) are a fortiori bounds on kE 21 k, but, as the (1; 2) block
of (25) suggests, they tend to be pessimistic overestimates of kE 21 k if kSk AE 1.
6. The Newton Iteration for the Computation of the Matrix Sign Func-
tion. There are several numerical methods for computing the matrix sign function
[2, 25]. Among the simplest and most commonly used is the Newton-Raphson method
for a root of starting with initial guess It is easily implemented
using matrix inversion subroutines from widely available, high quality linear algebra
packages like LAPACK [1, 2]. It has been extensively studied and many variations have
been suggested [2, 4, 5, 9, 18, 27, 25, 26, 28].
Algorithm 1. Newton Iteration (without scaling)
If A has no eigenvalues on the imaginary axis, then Algorithm 1 converges globally
and locally quadratically in a neighborhood of sign(A) [28]. Although the iteration
ultimately converges rapidly, initially convergence may be slow. However, the initial
convergence rate (and numerical stability) may be improved by scaling [2, 5, 9, 18, 27,
25, 26, 28]. A common choice is to scale X k 1=j det(X k )j (1=n) [9].
Theorem 3.2 shows that the first-order perturbation of sign(A) may be as large as a condition-number multiple of eps, where eps is the relative uncertainty in A. (If there is no other uncertainty, then eps is at least as large as the round-off unit of the finite precision arithmetic.) Thus, it is reasonable to stop the Newton iteration when
    ||X_{k+1} - X_k|| <= C eps ||X_{k+1}||.    (26)
The ad hoc constant C is chosen in order to avoid extreme situations. This choice of C works well in our numerical experiments. Example 2 below shows furthermore that it is often advantageous to take an extra step of the iteration after the stopping criterion is satisfied.
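The iteration and scaling just described can be sketched as follows. This is our illustration, not the authors' code: the determinantal scaling follows the choice cited to [9], but the constant and the exact form of stopping criterion (26) are assumptions here.

```python
import numpy as np

def sign_newton(A, C=1000.0, max_iter=50):
    """Newton iteration X_{k+1} = (X_k + X_k^{-1})/2 for sign(A),
    with determinantal scaling X_k <- X_k / |det(X_k)|^(1/n).
    The relative-change stopping test is a stand-in for criterion (26)."""
    n = A.shape[0]
    eps = np.finfo(float).eps
    X = np.array(A, dtype=float)
    for _ in range(max_iter):
        _, logabsdet = np.linalg.slogdet(X)   # log|det X|, avoids over/underflow
        Xs = np.exp(-logabsdet / n) * X       # determinantal scaling
        X_new = 0.5 * (Xs + np.linalg.inv(Xs))
        if np.linalg.norm(X_new - X, 1) <= C * n * eps * np.linalg.norm(X_new, 1):
            return X_new
        X = X_new
    return X
```

For a diagonalizable A with no purely imaginary eigenvalues, the iterates converge quadratically to sign(A).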
Example 2. This example demonstrates our stopping criterion. Algorithm 1 was implemented in MATLAB 4.1 on an HP 715/33 workstation with floating point relative accuracy eps of order 10^{-16}. We constructed test matrices A = QRQ*, where Q is a random unitary matrix and R an upper triangular matrix with given diagonal elements, a parameter alpha in the (k, k+1) position, and zeros elsewhere. We chose alpha such that the norm ||sign(A)||_1 varies from small to large.
The typical behavior of the error ||X_k - S||_1 is that it goes down and then becomes stationary. This behavior is shown in Figure 1 for the case alpha = 0 and for a larger value of alpha. Stopping criterion (26) is satisfied with C = 1000n at the 8-th step for the first case and at the 7-th step for the second. Taking one extra step would stop at the 9-th step and at the 8-th step, respectively.
In exact arithmetic, the stable and unstable invariant subspaces of the iterates
are the same as those of A. However, in finite precision arithmetic, rounding errors
perturb these subspaces. The numerical stability of the Newton iteration for computing
the stable invariant subspace has been analyzed in [8]; we give an improved error bound here.
Let X and X_+ be, respectively, the computed k-th and (k+1)-st iterates of the Newton iteration starting from
    X_0 = A = [ A_11  A_12 ; 0  A_22 ].
Fig. 1. Example 2: log10(||X_k - S||_1) plotted against the number of iterations (the labeled curve corresponds to alpha = 0).
Suppose that X and X_+ have the forms shown in (27) and (28). A successful rounding error analysis must establish the relationship between E_+ and E. To do so we assume that some stable algorithm is applied to compute the inverse in the Newton iteration. More precisely, we assume that the computed inverse satisfies the bounds (29) and (30), with errors proportional to the round-off unit,
for some constant c. Note that this is a nontrivial assumption. Ordinarily, if Gaussian
elimination with partial pivoting is used to compute the inverse, the above error bound
can be shown to hold only for each column separately [8, 38]. The better inversion
algorithms applied to "typical" matrices satisfy this assumption [38, p. 151], but it is
difficult to determine if this is always the case [31, pp. 22-26], [20, p. 150].
Write EX and EZ in the partitioned forms (31) and (32).
The following theorem bounds indirectly the perturbation in the stable invariant
subspace.
Theorem 6.1. Let X, X_+, EX, and EZ be as in (27), (28), (31), and (32), and let c be as in (29) and (30). If the perturbations are sufficiently small, then ||E_21^+|| obeys a bound in which the growth of ||E_21|| is controlled by the factors ||X_22^{-1}|| and ||X||.
Proof. We start with (28). The relationship between E_21 and E_21^+ follows from applying the explicit formula for the inverse of the perturbed block matrix X + E. Using the Neumann lemma — if ||B|| < 1, then I + B is nonsingular and ||(I + B)^{-1}|| <= 1/(1 - ||B||) — we bound the blocks of this inverse. The remaining inequalities, involving ||X_22^{-1}|| and ||E_22||, are established similarly. Inserting these inequalities in (33), we obtain the bound of the theorem.
The bound in Theorem 6.1 is stronger than the bound of Byers in [8]. It follows from (19) and Theorem 6.1 that if inequality (34) below holds, then rounding errors in a step of Newton corrupt the stable invariant subspace by no more than one might expect from the perturbation E_21 in (27). The term sep(X_11, X_22) is dominated by the corresponding separation for the limit sign(A). So to guarantee that rounding errors in the Newton iteration do little damage, the factors in the bound of Theorem 6.1, ||X_22^{-1}|| and ||X||, should be small enough that (34) holds. Very roughly speaking, to have numerical stability throughout the algorithm, neither ||X_22^{-1}|| nor ||X|| should be much larger than the reciprocal of the square root of the machine precision.
The following example from [4] demonstrates numerical instability that can be traced to the violation of inequality (34).
Example 3. Let A_11 be the real matrix used in [4], and let A_22 = -A_11^T. Form
    A = [ A_11  A_12 ; 0  A_22 ]  and  R = Q^T A Q,
where the orthogonal matrix Q is chosen to be the unitary factor of the QR factorization of a matrix with entries chosen randomly uniformly distributed in the interval [0, 1]. The parameter alpha is chosen so that there are two eigenvalues of A close to the imaginary axis, from the left and right side. The entries of A_12 are chosen randomly uniformly distributed in the interval [0, 1], too. The entries of E_21 are chosen randomly uniformly distributed in the interval [0, eps], where eps of order 10^{-16} is the machine precision.
Table 1
Evolution of ||E_21||_1 in Example 3.

  k   A            R            ratio
  3   1.2093e-07   2.5501e-08   1.6948e+00
  5   7.3034e-08   5.4025e-09   2.0000
  6   7.3164e-08   2.7012e-09   2.0000
  7   7.2020e-08   1.3506e-09   2.0000
  8   7.1731e-08   6.7532e-10   2.0000
  9   7.1866e-08   3.3766e-10   2.0000
 13   7.1934e-08   2.1151e-11   2.0000
 14   7.1938e-08   1.0646e-11   2.0000
 19   7.1937e-08   1.7291e-12   2.0000
In this example ||A^{-1}||_1 is large. Table 1 shows the evolution of ||E_21|| during the Newton iteration starting with A and with R, respectively, where E_21 is as in (27). The norm is taken to be the 1-norm. Because ||A^{-1}||_1 is so large, inequality (34) is violated in the first step of the Newton iteration for the starting matrix A, which is shown in the first column of the table. Newton's method never recovers from this.
It is remarkable, however, that Newton's method applied to R directly seems to recover from the loss in accuracy in the first step. The second column shows that although ||E_21||_1 is of order 10^{-7} at the first step, it is reduced by the factor 1/2 every step until it reaches 1.7 x 10^{-12}.
Observe that in this case the perturbation E'' in EZ as in (28) is zero, and the bound is dominated by the term involving ||X_22^{-1}|| and ||E_22||. It is surprising to see that from the second step on, ||X_22^{-1}||_1 remains small, since A_11^{-1} and A_22^{-1} do not explicitly appear in the term X_22^{-1} after the first step.
Our analysis suggests that the Newton iteration may be unstable when X_k is ill-conditioned. To overcome this difficulty, the Newton iteration may be carried out with a shift along the imaginary line. In this case we have to use complex arithmetic.
Algorithm 2. Newton Iteration With Shift
END
The real parameter beta is chosen such that sigma_min(X_k + i beta I) is not small. For Example 2, when beta is taken to be 0.8, sigma_min stays away from zero throughout the iteration. Then by our analysis the computed invariant subspace is guaranteed to be accurate.
7. Conclusions. We have given a first order perturbation theory for the matrix
sign function and an error analysis for Newton's method to compute it. This analysis
suggests that computing the stable (or unstable) invariant subspace of a matrix with
the Newton iteration in most circumstances yields results as good as those obtained
from the Schur form.
8. Acknowledgments. The authors would like to express their thanks to N. Higham for valuable comments on an earlier draft of the paper, and to Z. Bai and P. Benner for helpful discussions.
--R
Design of a parallel nonsymmetric eigenroutine toolbox
Design of a parallel nonsymmetric eigenroutine toolbox
Inverse free parallel spectral divide and conquer algorithms for nonsymmetric eigenproblems.
Accelerated convergence of the matrix sign function method of solving Lyapunov
A computational method for eigenvalues and eigenvectors of a matrix with real eigenvalues.
A regularity result and a quadratically convergent algorithm for computing its L1 norm.
Numerical stability and instability in matrix sign function based algorithms.
Solving the algebraic Riccati equation with the matrix sign function.
A bisection method for measuring the distance of a stable matrix to the unstable matrices.
A systolic algorithm for Riccati and Lyapunov equations.
Perturbations des transformations autoadjointes dans l'espace de Hilbert.
The matrix sign function and computations in systems.
Spectral decomposition of a matrix using the generalized sign matrix.
A generalization of the matrix-sign-function solution for algebraic Riccati equations
Parallel algorithms for algebraic Riccati equations.
Matrix Computations.
Computing the polar decomposition - with applications
Newton's method for the matrix square root.
A survey of error analysis.
On the convergence of the perturbation method
On the convergence of the perturbation method
Perturbation Theory for Linear Operators.
Polar decompositions and matrix sign function condition estimates.
Rational iterative methods for the matrix sign function.
On scaling Newton's method for polar decompositions and the matrix sign function.
The matrix sign function.
The Theory of Matrices.
Condition estimation for the matrix sign function via the Schur decomposition.
Software for Roundoff Analysis of Matrix Algorithms.
Schur complement and statistics.
A parallel algorithm for the matrix sign function.
Linear model reduction and solution of algebraic Riccati equation by use of the sign function.
Linear model reduction and solution of the algebraic Riccati equation by use of the sign function.
and perturbation bounds for subspaces associated with certain eigenvalue problems.
Introduction to Matrix Computations.
Matrix Perturbation Theory.
--TR
--CTR
Daniel Kressner, Block algorithms for reordering standard and generalized Schur forms, ACM Transactions on Mathematical Software (TOMS), v.32 n.4, p.521-532, December 2006
Peter Benner , Maribel Castillo , Enrique S. Quintana-Ort , Vicente Hernndez, Parallel Partial Stabilizing Algorithms for Large Linear Control Systems, The Journal of Supercomputing, v.15 n.2, p.193-206, Feb.1.2000 | matrix sign function;invariant subspaces;perturbation theory |
263207 | An Analysis of Spectral Envelope Reduction via Quadratic Assignment Problems. | A new spectral algorithm for reordering a sparse symmetric matrix to reduce its envelope size was described in [Barnard, Pothen, and Simon, Numer. Linear Algebra Appl., 2 (1995), pp. 317--334]. The ordering is computed by associating a Laplacian matrix with the given matrix and then sorting the components of a specified eigenvector of the Laplacian. In this paper we provide an analysis of the spectral envelope reduction algorithm. We describe related 1- and 2-sum problems; the former is related to the envelope size, while the latter is related to an upper bound on the work in an envelope Cholesky factorization. We formulate these two problems as quadratic assignment problems and then study the 2-sum problem in more detail. We obtain lower bounds on the 2-sum by considering a relaxation of the problem and then show that the spectral ordering finds a permutation matrix closest to an orthogonal matrix attaining the lower bound. This provides a stronger justification of the spectral envelope reduction algorithm than previously known. The lower bound on the 2-sum is seen to be tight for reasonably "uniform" finite element meshes. We show that problems with bounded separator sizes also have bounded envelope parameters. | Introduction
We provide a raison d'être for a novel spectral algorithm to reduce
the envelope of a sparse, symmetric matrix, described in a companion paper [2]. The algorithm
associates a discrete Laplacian matrix with the given symmetric matrix, and then
computes a reordering of the matrix by sorting the components of an eigenvector corresponding
to the smallest nonzero Laplacian eigenvalue. The results in [2] show that the spectral
algorithm can obtain significantly smaller envelope sizes compared to other currently used
algorithms. All previous envelope-reduction algorithms (known to us), such as the reverse
Cuthill-McKee (RCM) algorithm and variants [3, 16, 17, 26, 37], are combinatorial in nature,
employing breadth-first-search to compute the ordering. In contrast, the spectral algorithm
is an algebraic algorithm whose good envelope-reduction properties are somewhat intriguing
and poorly understood.
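The ordering itself takes only a few lines to state; the sketch below is our minimal dense-matrix illustration (function names are ours, not from [2]), using a full symmetric eigensolver. The eigenvector's sign is arbitrary, so either the ordering or its reverse may be returned; the envelope size is the same for both.

```python
import numpy as np

def spectral_order(adj):
    """Spectral reordering: sort vertices by the components of the eigenvector
    for the smallest nonzero Laplacian eigenvalue (the Fiedler vector).
    adj: dense symmetric 0/1 adjacency matrix of a connected graph."""
    lap = np.diag(adj.sum(axis=1)) - adj     # Laplacian Q = D - M
    _, vecs = np.linalg.eigh(lap)            # eigenvalues in ascending order
    fiedler = vecs[:, 1]                     # eigenvector of the second eigenvalue
    return np.argsort(fiedler)               # vertex order
```

On a path graph, for instance, the Fiedler vector is monotone along the path, so this ordering recovers the natural (bandwidth-1) numbering.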
We describe problems related to envelope-reduction called the 1- and 2-sum problems,
and then formulate these latter problems as quadratic assignment problems (QAPs). We
show that the QAP formulation of the 2-sum enables us to obtain lower bounds on the 2-sum
(and related envelope parameters) based on the Laplacian eigenvalues. The lower bounds
seem to be quite tight for finite element problems when the mesh points are nearly all of the
same degree, and the geometries are simple. Further, a closest permutation matrix to an
orthogonal matrix that attains the lower bound is obtained, to within a linear approxima-
tion, by sorting the second Laplacian eigenvector components in monotonically increasing or
decreasing order. This justifies the spectral envelope-reducing algorithm more strongly than
earlier results.
Although initially envelope-reducing orderings were developed for use in envelope schemes
for sparse matrix factorization, these orderings have been used in the past few years in several
other applications. The RCM ordering has been found to be an effective pre-ordering
in computing incomplete factorization preconditioners for preconditioned conjugate-gradient
methods [4, 6]. Envelope-reducing orderings have been used in frontal methods for sparse
matrix factorization [7].
The wider applicability of envelope-reducing orderings prompts us to take a fresh look
at the reordering algorithms currently available, and to develop new ordering algorithms.
Spectral envelope-reduction algorithms seem to be attractive in this context, since they
(i) compare favorably with existing algorithms in terms of the quality of the orderings [2],
(ii) extend easily to problems with weights, e.g., finite element meshes arising from discretizations
of anisotropic problems, and
(iii) are fairly easily parallelizable.
Spectral algorithms are more expensive than the other algorithms currently available. But
since the envelope-reduction problem requires only one eigenvector computation (to low pre-
cision), we believe the costs are not impractically high in computation-intensive applications,
e.g., frontal methods for factorization. In contexts where many problems having the same
structure must be solved, a substantial investment in finding a good ordering might be justi-
fied, since the cost can be amortized over many solutions. Improved algorithms that reduce
the costs are being designed as well [25].
We focus primarily on the class of finite element meshes arising from discretizations of
partial differential equations. Our goals in this project are to develop efficient software implementing
our algorithms, and to prove results about the quality of the orderings generated.
The projection approach for obtaining lower bounds of a QAP is due to Hadley, Rendl,
and Wolkowicz [19], and this approach has been applied to the graph partitioning problem
by the latter two authors [35]. In earlier work a spectral approach for the graph (matrix)
partitioning problem has been employed to compute a spectral nested dissection ordering
for sparse matrix factorization, for partitioning computations on finite element meshes on a
distributed-memory multiprocessor [21, 33, 34, 36], and for load-balancing parallel computations
[22]. The spectral approach has also been used to find a pseudo-peripheral node [18].
Juvan and Mohar [23, 24] have provided a theoretical study of the spectral algorithm for reducing p-sums, where p >= 1; the authors of [20] obtain spectral lower bounds on the bandwidth. A survey of some of these earlier results may be found in [31].
Paulino et al. [32] have also considered the use of spectral envelope-reduction for finite element
problems.
The following is an outline of the rest of this paper. In Section 2 we describe various
parameters of a matrix associated with its envelope, introduce the envelope size and envelope
work minimization problems, and the related 1- and 2-sum problems. We prove that bounds
on the minimum 1-sum yield bounds on the minimum envelope size, and similarly, bounds
on the minimum 2-sum yield bounds on the work in an envelope Cholesky factorization.
We also show in this section that minimizing the 2-sum is NP-complete. We compute lower
bounds for the envelope parameters of a sparse symmetric matrix in terms of the eigenvalues
of the Laplacian matrix in Section 3. The popular RCM ordering is obtained by reversing
the Cuthill-McKee (CM) ordering; the RCM ordering can never have a larger envelope size
and work than the CM ordering, and is usually significantly better. We prove that reversing
an ordering can improve or impair the envelope size by at most a factor \Delta, and the envelope
work by at most \Delta 2 , where \Delta is the maximum degree of a vertex in the adjacency graph.
In Section 4, we formulate the 2- and 1-sum problems as quadratic assignment problems.
We obtain lower and upper bounds for the 2-sum problem in terms of the eigenvalues of the
Laplacian matrix in Section 5 by means of a projection approach that relaxes a permutation
matrix to an orthogonal matrix with row and column sums equal to one. We justify the
spectral envelope-reduction algorithm in Section 6 by proving that a closest permutation
matrix to an orthogonal matrix attaining the lower bound for the 2-sum is obtained, to
within a linear approximation of the problem, by permuting the second Laplacian eigenvector
in monotonically increasing or decreasing order. In Section 7 we show that graphs with
small separators have small envelope parameters as well, by considering a modified nested
dissection ordering. We present computational results in Section 8 to illustrate that the
2-sums obtained by the spectral reordering algorithm can be close to optimal for many finite
element meshes. Section 9 contains our concluding remarks. The Appendix contains some
lower bounds for the more general p-sum problem, where p >= 1.
2. A menagerie of envelope problems.
2.1. The envelope of a matrix. Let A be an n x n symmetric matrix with elements a_{ij}, whose diagonal elements are nonzero. Various parameters of the matrix A associated
with its envelope are defined below.
We denote the column indices of the nonzeros in the lower triangular part of the ith row by
    row_i(A) = { j : a_{ij} != 0, 1 <= j <= i }.
For the ith row of A we define
    f_i(A) = min row_i(A)  and  r_i(A) = i - f_i(A).
Here f_i(A) is the column index of the first nonzero in the ith row of A (by our assumption of nonzero diagonals, 1 <= f_i <= i), and the parameter r_i(A) is the row-width of the ith row of A. The bandwidth of A is the maximum row-width, bw(A) = max_i r_i(A).
The envelope of A is the set of index pairs
    Env(A) = { (i, j) : f_i(A) <= j < i, 1 <= i <= n }.
For each row, the column indices lie in an interval beginning with the column index of the
first nonzero element and ending with (but not including) the index of the diagonal nonzero
element.
We denote the size of the envelope by
    Esize(A) = |Env(A)| = sum_{i=1}^{n} r_i(A).
(The quantity Esize(A) + n, which includes the diagonal elements, is called the profile of A [7].) The work in the Cholesky factorization of A that employs an envelope storage scheme is bounded from above by a quantity Wbound(A) that is quadratic in the row-widths. This bound is tight [29] when the ordering satisfies conditions on the f_i, for i between 1 and n, given in [29].
A 3 x 3 7-point grid and the nonzero structure of the corresponding matrix A are shown in Figure 2.1. A '•' indicates a nonzero element, and a '∗' indicates a zero element that belongs to the lower triangle of the envelope in the matrix. The row-widths given in Table 2.1 are easily verified from the structure of the matrix. The envelope size is obtained by summing the row-widths, and is equal to 24. (Column-widths c_i are defined later in this section.)
The values of these parameters strongly depend on the choice of an ordering of the rows and columns. Hence we consider how these parameters vary over symmetric permutations P^T A P of a matrix A, where P is a permutation matrix. We define Esize_min(A), the minimum envelope size of A, to be the minimum envelope size among all permutations P^T A P of A. The quantities Wbound_min(A) and bw_min(A) are defined in similar fashion. Minimizing the envelope size and the bandwidth of a matrix are NP-complete problems [28], and minimizing
Fig. 2.1. An ordering of a 7-point grid and the corresponding matrix. The lower triangle of the envelope is indicated by marking zeros within it by asterisks.
Table 2.1
Row-widths and column-widths of the matrix in Figure 2.1.
the work bound is likely to be intractable as well. So one has to settle for heuristic orderings
to reduce these quantities.
It is helpful to consider a "column-oriented" expression for the envelope size for obtaining a lower bound on this quantity in Section 3. The width of a column j of A is the number of row indices in the jth column of the envelope of A. In other words,
    c_j(A) = |{ i : i > j and f_i(A) <= j }|.
(This is also called the jth front-width.) It is then easily seen that the envelope size is
    Esize(A) = sum_{j=1}^{n} c_j(A).    (2.1)
The work in an envelope factorization scheme is given by
    Wbound(A) = (1/2) sum_{j=1}^{n} c_j(A)^2,    (2.2)
where we have ignored the linear term in c_j. The column-widths of the matrix in Figure 2.1 are given in Table 2.1. These concepts and their inter-relationships are described by Liu and Sherman [29], and are also discussed in the books [5, 15].
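The row-oriented (sum of r_i) and column-oriented (sum of c_j) expressions count the same set of index pairs, which is easy to confirm in code; a sketch with helper names of our own choosing:

```python
import numpy as np

def first_nonzeros(A):
    """f_i: column index of the first nonzero in row i (diagonal assumed nonzero)."""
    return [min(j for j in range(i + 1) if A[i, j] != 0) for i in range(A.shape[0])]

def esize_by_rows(A):
    f = first_nonzeros(A)
    return sum(i - f[i] for i in range(A.shape[0]))        # sum of row-widths r_i

def esize_by_cols(A):
    f = first_nonzeros(A)
    n = A.shape[0]
    # c_j counts rows i > j whose envelope interval [f_i, i) covers column j
    return sum(sum(1 for i in range(j + 1, n) if f[i] <= j) for j in range(n))
```

Both functions return Esize(A) for any symmetric pattern with a nonzero diagonal.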
The envelope parameters can also be defined with respect to the adjacency graph G = (V, E) of A. Denote by alpha : V -> {1, ..., n} an ordering of the vertices. In terms of the graph G and an ordering alpha of its vertices, we can define the width of a vertex v as
    r_v(G, alpha) = alpha(v) - min { alpha(u) : u in adj(v) union {v} }.
Hence we can write the envelope size and work associated with an ordering alpha as
    Esize(G, alpha) = sum_{v in V} r_v(G, alpha),
    Wbound(G, alpha) = (1/2) sum_{v in V} r_v(G, alpha)^2.
The goal is to choose a vertex ordering alpha : V -> {1, ..., n} to minimize one of the parameters described above. We denote by Esize_min(G) (Wbound_min(G)) the minimum value of Esize(G, alpha) (Wbound(G, alpha)) over all orderings alpha. The reader can compute the envelope size of the numbered graph in Figure 2.1, using the definition given in this paragraph, to verify that Esize(G, alpha) = 24.
The jth front-width has an especially nice interpretation if we consider the adjacency graph G = (V, E) of A. Let the vertex corresponding to a column j of A be numbered v_j, so that V_j = { v_1, ..., v_j }. Denote by adj(X) the set of vertices not in X that are adjacent to some vertex in X, for a subset of vertices X. Then c_j(A) = |adj(V_j)|.
To illustrate the dependence of the envelope size on the ordering, we include in Figure 2.2 an ordering that leads to a smaller envelope size for the 7-point grid. Again, a '•' indicates a nonzero element, and a '∗' indicates a zero element that belongs to the lower triangle of the envelope in the matrix. This ordering by 'diagonals' yields the optimal envelope size for the 7-point grid [27].
2.2. 1- and 2-sum problems. It will be helpful to consider quantities related to the
envelope size and envelope work, the 1-sum and the 2-sum.
For real p >= 1 we define the p-sum of an ordering alpha to be
    sigma_p(G, alpha) = ( sum_{(u,v) in E} |alpha(u) - alpha(v)|^p )^{1/p}.
Minimizing the 1-sum is the optimal linear arrangement problem, and the limiting case p = infinity corresponds to the minimum bandwidth problem; both of these are well-known NP-complete problems [13]. We show in Section 2.3 that minimizing the 2-sum is NP-complete as well.
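For a given numbering, computing sigma_p is a one-line operation; a sketch (the dict-based vertex numbering is our convention):

```python
def p_sum(edges, num, p):
    """sigma_p(G, alpha) = (sum over edges {u,v} of |num[u] - num[v]|^p)^(1/p)."""
    return sum(abs(num[u] - num[v]) ** p for u, v in edges) ** (1.0 / p)
```

On a path with five consecutively numbered vertices, sigma_1 = 4 and sigma_2 = 2, consistent with the general relation sigma_2 <= sigma_1 <= |E|^{1/2} sigma_2.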
We write the envelope size and 1-sum, and the envelope work and the 2-sum, in a way
that shows their relationships:
Fig. 2.2. Another ordering of a 7-point grid and the corresponding matrix. Again the lower triangle of
the envelope is indicated by marking the zeros within it by asterisks.
The parameters sigma_p^min(A) are the minimum values of these parameters over all symmetric permutations P^T A P of A.
We now consider the relationships between bounds on the envelope size and the 1-sum,
and between the upper bound on the envelope work and the 2-sum. Let \Delta denote the
maximum number of offdiagonal nonzeros in a row of A. (This is the maximum vertex
degree in the adjacency graph of A.)
Theorem 2.1. The minimum values of the envelope size, envelope work in the Cholesky factorization, 1-sum, and 2-sum of a symmetric matrix A are related by the following inequalities:
    Esize_min(A) <= sigma_1^min(A) <= Delta Esize_min(A),    (2.7)
    Wbound_min(A) <= (sigma_2^min(A))^2 <= Delta Wbound_min(A),    (2.8)
    sigma_2^min(A) <= sigma_1^min(A) <= |E|^{1/2} sigma_2^min(A).    (2.9)
Proof. We begin by proving (2.8). Our strategy will be to first prove the inequalities for a fixed ordering, and then to obtain the required result by considering two different permutations of A.
The bound Wbound(A) <= sigma_2^2(A) is immediate from equations (2.5) and (2.6). If the inner sum in the latter equation is bounded from above, using the fact that a row contains at most Delta offdiagonal nonzeros, then we get Delta Wbound(A) as an upper bound on the 2-sum.
Now let X_1 be a permutation matrix such that Wbound(X_1^T A X_1) = Wbound_min(A). Then we have
    (sigma_2^min(A))^2 <= sigma_2^2(X_1^T A X_1) <= Delta Wbound(X_1^T A X_1) = Delta Wbound_min(A).
Further, let X_2 be a permutation matrix such that sigma_2(X_2^T A X_2) = sigma_2^min(A). Again, we have
    Wbound_min(A) <= Wbound(X_2^T A X_2) <= sigma_2^2(X_2^T A X_2) = (sigma_2^min(A))^2.
We obtain the result by putting the last two inequalities together.
We omit the proof of (2.7), since it can be obtained by a similar argument, and proceed to prove (2.9). The first inequality, sigma_2(A) <= sigma_1(A), holds since the p-norm of any real vector is a decreasing function of p. The second inequality is also standard, since it bounds the 1-norm of a vector by means of its 2-norm. This result was obtained earlier by Juvan and Mohar [24]; we include its proof for completeness. Applying the Cauchy-Schwarz inequality to sigma_1(A), a sum with |E| terms, gives sigma_1(A) <= |E|^{1/2} sigma_2(A). We obtain the result by considering two orderings that achieve the minimum 1- and 2-sums.
2.3. Complexity of the 2-sum problem. We proceed to show that minimizing the
2-sum is NP-complete. In Section 8 we show that the spectral algorithm computes a 2-sum
within a factor of two for the finite element problems in our test collection. This proof shows
that despite the near-optimal solutions obtained by the spectral algorithm on this test set,
it is unlikely that a polynomial time algorithm can be designed for computing the minimum
2-sum.
Readers who are willing to accept the complexity of this problem without proof should
skip this section; we recommend that everyone do so on a first reading.
Given a graph G = (V, E) on n vertices, MINTWOSUM is the problem of deciding if there exists a numbering alpha of its vertices with values in {1, ..., n} such that
    sum_{(u,v) in E} (alpha(u) - alpha(v))^2 <= k,
for a given positive integer k. This is the decision version of the problem of minimizing the 2-sum of G.
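Tiny MINTWOSUM instances can be decided by exhaustive search over all numberings, which is handy for sanity-checking heuristics; a brute-force sketch of our own (exponential time, for illustration only):

```python
from itertools import permutations

def min_two_sum(n, edges):
    """Minimum over all numberings alpha of sum of (alpha(u) - alpha(v))^2 over edges."""
    return min(
        sum((alpha[u] - alpha[v]) ** 2 for u, v in edges)
        for alpha in permutations(range(n))
    )
```

For example, a path on four vertices has minimum 2-sum 3 (number it consecutively), while a star K_{1,3} has minimum 2-sum 6 (place the center in a middle position).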
Theorem 2.2. MINTWOSUM is NP-complete.
Remark. This proof follows the framework for the NP-completeness of the 1-sum problem
in Even [8] (Section 10.7); but the details are substantially different.
Proof. The theorem will follow if we show that MAXTWOSUM, the problem of deciding whether a graph G' on n vertices has a vertex numbering with 2-sum greater than or equal to a given positive integer k', is NP-complete. For, the 2-sum of G' under some ordering is at least k' if and only if the 2-sum of the complement of G' under the same ordering is at most p(n) - k', where p(n) is the 2-sum of the complete graph on n vertices.
We show that MAXTWOSUM is NP-complete by a reduction from MAXCUT, the problem of deciding whether a given graph G = (V, E) has a partition of its vertices into two sets {S, V \ S} such that |delta(S, V \ S)|, the number of edges joining S and V \ S, is at least a given positive integer k. From the graph G we construct a graph G' = (V', E') by adding n^4 isolated vertices to V and no edges to E. We claim that G has a cut of size at least k if and only if G' has a 2-sum at least k n^8.
If G has a cut (S, V \ S) of size at least k, define an ordering alpha' of G' by interposing the n^4 isolated vertices between S and V \ S: number the vertices in S first, the isolated vertices next, and the vertices in V \ S last, where the ordering among the vertices in each set S and V \ S is arbitrary. Every edge belonging to the cut contributes at least n^8 to the 2-sum, and hence its value is at least k n^8.
The converse is a little more involved. Suppose that G' has an ordering alpha' with 2-sum greater than or equal to k n^8. The ordering alpha' of G' induces a natural ordering alpha : V -> {1, ..., n} of G, if we ignore the isolated vertices and maintain the relative ordering of the vertices in V. For each i, 1 <= i < n, let S_i = { v in V : alpha(v) <= i }. Then each pair (S_i, V \ S_i) is a cut in G. Further, each such cut in G induces a cut (S'_i, V' \ S'_i) in the larger graph G' as follows: the vertex set S'_i is formed by augmenting S_i with the isolated vertices numbered lower than the highest numbered (non-isolated) vertex in S_i (with respect to the ordering alpha').
We now choose a cut (S', V' \ S') that maximizes the "1-sum over the cut edges" from among the n cuts (S'_i, V' \ S'_i). By means of this cut and the ordering alpha', we define a new ordering beta' by moving the isolated vertices in the ordered set S' to the highest numbers in that set, and by moving the isolated vertices in V' \ S' to the lowest numbers in that set, and preserving the relative ordering of the other vertices. The effect is to interpose the isolated vertices "between" the two sets of the cut.
Claim. The 2-sum of the graph G' under the ordering beta' is at least as large as that under alpha'.
To prove the claim, we examine what happens when an isolated vertex x belonging to S' is moved to the higher end of that ordered set.
Define three sets A', B', and C' as follows: the set A' (B') is the set of vertices in S' numbered lower (higher) than x in the ordering alpha', and C' = V' \ S'. Let E_1 denote the edges joining A' and B', and E_2 the edges joining B' and C'.
Denote the contribution, with respect to the ordering alpha', of an edge e_k in E_1 to the 1-sum by a_k, and that of an edge e_l in E_2 by b_l. When x is moved, every vertex of B' has its number decreased by one, so the change in the 2-sum is
    sum_l ( (b_l + 1)^2 - b_l^2 ) + sum_k ( (a_k - 1)^2 - a_k^2 ) = |E_2| + |E_1| + 2 ( sum_l b_l - sum_k a_k ).
The term sum_l b_l is the contribution to the 1-sum made by the edges E_2 in the cut (S', V' \ S'), while the term sum_k a_k is the contribution made by the edges E_1 in the cut (A', V' \ A'). By the choice of the cut (S', V' \ S'), we know that the difference is nonnegative, and hence that the 2-sum has not decreased in the new ordering obtained from alpha' by moving the vertex x.
We now show that after moving the vertex x, (S', V' \ S') continues to be a cut that maximizes the 1-sum over the cut edges among all cuts (S'_i, V' \ S'_i) with respect to the new ordering. For this cut, the 1-sum over cut edges has increased by |E_2|, because the number of each vertex in B' has decreased by one in the new ordering. Among cuts with one set equal to an ordered subset of A', the 1-sum over cut edges can only decrease when x is moved, since the set B' moves closer to A', and C' does not move at all relative to A'. Now consider cuts of the form (A' union B'_1, V' \ (A' union B'_1)), with B'_1 an ordered subset of B' and B'_2 = B' \ B'_1. The cut edges now join A' union B'_1 to B'_2 union C'. The edges joining A' to B'_2 contribute a smaller value to the 1-sum in the new ordering relative to alpha', while the edges joining A' to C' contribute the same to the 1-sum in both cuts (A' union B'_1, V' \ (A' union B'_1)) and (S', V' \ S') under the new ordering. The edges joining B'_1 to B'_2 do not change their contribution to the 1-sum in the new ordering. The edges that join B'_1 and C' form a subset of the edges that join B' and C'; hence the contribution of the former to the 1-sum is no larger than the contribution of the latter set in the new ordering. This shows that the cut (S', V' \ S') continues to have a 1-sum over the cut edges larger than or equal to that of any cut (A' union B'_1, V' \ (A' union B'_1)). Finally, any cut that includes A', B', and an ordered subset C'_1 of C' can be shown by similar reasoning not to have a larger 1-sum than (S', V' \ S').
The reasoning in the previous paragraph permits us to move the isolated vertices in S' one by one to the higher end of that set without decreasing the 2-sum, while simultaneously preserving the condition that the cut (S', V' \ S') has the maximum value of the 1-sum over the cut edges. The argument that we can move the isolated vertices in V' \ S' to the beginning of that ordered set follows from symmetry, since both the 2-sum and the 1-sum are unchanged when we reverse an ordering. Hence, by inducting over the number of isolated vertices moved, the ordering beta' has a 2-sum at least as large as the ordering alpha'. This completes the proof of the claim.
The rest of the proof involves computing an upper bound on the 2-sum of the graph G' under the ordering beta', to show that since G' has 2-sum greater than or equal to k n^8, the graph G has a cut of size at least k.
Let delta = |delta(S, V \ S)|. The cut edges contribute at most delta (n^4 + n)^2 to the upper bound on the 2-sum; the uncut edges contribute at most the 2-sum of a complete graph on n vertices. The latter is p(n) = n^2 (n^2 - 1)/12. Thus we have, keeping only leading terms,
    delta + 2 delta / n^3 + p(n) / n^8 >= k.
The second term on the left-hand side is less than 1 for n > 2, since the number of cut edges delta is at most n^2/2; the third term is less than one for all n. The sum of these two terms is less than 1 for n > 2. Hence we conclude that the graph G has a cut with at least k edges. This completes the proof of the theorem.
3. Bounds for envelope size. In this section we present lower bounds for the minimum
envelope size and the minimum work involved in an envelope-Cholesky factorization in
terms of the second Laplacian eigenvalue. We will require some background on the Laplacian
matrix.
3.1. The Laplacian matrix. The Laplacian matrix Q(G) of a graph G is the n x n matrix Q(G) = D - M, where D is the diagonal degree matrix and M is the adjacency matrix of G. If G is the adjacency graph of a symmetric matrix A, then we could define the Laplacian matrix Q directly from A: replace each offdiagonal nonzero a_{ij} by -1, and set each diagonal element q_{ii} equal to the number of offdiagonal nonzeros in the ith row. Note that the row and column sums of Q are zero.
The eigenvalues of Q(G) are the Laplacian eigenvalues of G, and we list them as lambda_1 <= lambda_2 <= ... <= lambda_n. An eigenvector corresponding to lambda_k(Q) will be denoted by x_k, and will be called a kth eigenvector of Q. It is well-known that Q is a singular M-matrix, and hence its eigenvalues are nonnegative. Thus lambda_1 = 0, and the corresponding eigenvector is any nonzero constant vector c. If G is connected, then Q is irreducible, and then lambda_2 > 0. The smallest nonzero eigenvalues and the corresponding eigenvectors have important properties
that make them useful in the solution of various partitioning and ordering problems. These
properties were first investigated by Fiedler [9, 10]; as discussed in Section 1, more recently
several authors have studied their application to such problems.
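As a concrete illustration (a sketch of ours, not part of the original text; the example graph is an assumption), the following builds Q = D − M for a small path graph and checks the properties just listed: the eigenvalues are nonnegative, λ₁ = 0 with a constant eigenvector, and λ₂ > 0 because the graph is connected.

```python
import numpy as np

def laplacian(adj):
    """Q = D - M: the diagonal degree matrix minus the adjacency matrix."""
    return np.diag(adj.sum(axis=1)) - adj

# Path graph on 4 vertices: 0-1-2-3 (connected)
M = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3)]:
    M[i, j] = M[j, i] = 1.0

Q = laplacian(M)
evals = np.linalg.eigvalsh(Q)   # eigenvalues in nondecreasing order

print(np.round(evals, 6))       # lambda_1 = 0; lambda_2 > 0 since the graph is connected
```

Here `numpy.linalg.eigvalsh` returns the eigenvalues in nondecreasing order, matching the listing λ₁ ≤ ··· ≤ λ_n above.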
3.2. Laplacian bounds for envelope parameters. It will be helpful to work with
the "column-oriented" definition of the envelope size. Let the vertex corresponding to a
column j of A be numbered v_j in the adjacency graph, and let V_j = {v₁, …, v_j}. Recall that
the column width of a vertex v_j is c_j, and that the envelope size of G (or A) is
Esize(A) = Σ_{j=1}^n c_j.
Recall also that Δ denotes the maximum degree of a vertex. Given a set of vertices S, we
denote by δ(S) the set of edges with one endpoint in S and the other in V \ S.
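To fix the conventions, here is a small sketch of ours (it uses the equivalent row-oriented form of the envelope, summing row widths over the lower triangle) that computes the envelope size of a symmetric matrix directly from its nonzero pattern:

```python
import numpy as np

def envelope_size(A):
    """Envelope size of a symmetric matrix: the sum over rows i of the
    row width i - min{j : a_ij != 0} (0-based; diagonal assumed nonzero)."""
    n = A.shape[0]
    total = 0
    for i in range(n):
        first = np.nonzero(A[i, : i + 1])[0][0]   # first nonzero column in row i
        total += i - first
    return total

# Tridiagonal matrix: every row width is 1 except the first row, so Esize = n - 1
n = 5
A = np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
print(envelope_size(A))   # -> 4
```

For a full matrix (the complete graph) the same function returns n(n − 1)/2, the largest possible value.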
We make use of the following elementary result, where the lower bound is due to Alon
and Milman [1] and the upper bound is due to Juvan and Mohar [24].
Lemma 3.1. Let S ⊂ V be a subset of the vertices of a graph G. Then
  (λ₂(Q)/n) |S| |V \ S| ≤ |δ(S)| ≤ (λ_n(Q)/n) |S| |V \ S|.
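These bounds are easy to confirm numerically. The sketch below (ours; the example graph is an assumption) checks, for every vertex subset S of a small connected graph, that (λ₂/n)|S||V \ S| ≤ |δ(S)| ≤ (λ_n/n)|S||V \ S|:

```python
import numpy as np
from itertools import combinations

def laplacian(adj):
    return np.diag(adj.sum(axis=1)) - adj

# A small connected graph: a 5-cycle with one chord
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (1, 3)]
n = 5
M = np.zeros((n, n))
for i, j in edges:
    M[i, j] = M[j, i] = 1.0
lam = np.linalg.eigvalsh(laplacian(M))   # nondecreasing order

ok = True
for k in range(1, n):
    for S in combinations(range(n), k):
        cut = sum(1 for i, j in edges if (i in S) != (j in S))   # |delta(S)|
        lower = lam[1] * k * (n - k) / n
        upper = lam[-1] * k * (n - k) / n
        ok = ok and (lower - 1e-9 <= cut <= upper + 1e-9)
print(ok)   # -> True
```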
Theorem 3.2. The envelope size of a symmetric matrix A can be bounded in terms of
the eigenvalues of the associated Laplacian matrix as follows.
Proof. From Lemma 3.1, |δ(V_j)| ≥ (λ₂(Q)/n) |V_j| |V \ V_j|.
Now substituting the lower bound for |δ(V_j)|, and summing
this latter expression over all j, we obtain the lower bound on the envelope size.
The upper bound is obtained by using the corresponding inequality for c_j with the upper
bound in Lemma 3.1. □
A lower bound on the work in an envelope-Cholesky factorization can be obtained from
the lower bound on the envelope size.
Theorem 3.3. A lower bound on the work in the envelope-Cholesky factorization of a
symmetric positive definite matrix A is
Proof. The proof follows from Equations 2.1 and 2.2, by an application of the Cauchy-Schwarz
inequality. We omit the details. □
Cuthill and McKee [3] proposed one of the earliest ordering algorithms for reducing the
envelope size of a sparse matrix. George [14] discovered that reversing this ordering leads to a
significant reduction in envelope size and work. The envelope parameters obtained from the
reverse Cuthill-McKee (RCM) ordering are never larger than those obtained from CM [29].
The RCM ordering has become one of the most popular envelope size reducing orderings.
However, we do not know of any published quantitative results on the improvement that may
be expected by reversing an ordering, and here we present the first such result. For degree-bounded
finite element meshes, no asymptotic improvement is possible; the parameters are
improved only by a constant factor. Of course, in practice, a reduction by a constant factor
could be quite significant.
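The CM/RCM pair is easy to experiment with. The sketch below (ours; a basic breadth-first Cuthill-McKee without the pseudoperipheral start-node search used in practice) orders an arrow-shaped matrix, reverses the ordering, and compares envelope sizes; on this example the reversal reduces the envelope size from 16 to 6.

```python
import numpy as np
from collections import deque

def envelope_size(A):
    n = A.shape[0]
    return sum(i - np.nonzero(A[i, : i + 1])[0][0] for i in range(n))

def cuthill_mckee(A):
    """BFS from a minimum-degree start vertex, neighbors visited in
    increasing-degree order (graph assumed connected)."""
    n = A.shape[0]
    deg = (A != 0).sum(axis=1)
    start = int(np.argmin(deg))
    order, seen, q = [], {start}, deque([start])
    while q:
        v = q.popleft()
        order.append(v)
        for w in sorted((w for w in np.nonzero(A[v])[0] if w not in seen),
                        key=lambda w: deg[w]):
            seen.add(w)
            q.append(w)
    return order

def permute(A, order):
    p = np.array(order)
    return A[np.ix_(p, p)]

# Arrow pattern: vertex 0 adjacent to all others -- bad in its natural ordering
n = 7
A = np.eye(n)
for j in range(1, n):
    A[0, j] = A[j, 0] = 1.0
cm = cuthill_mckee(A)
rcm = cm[::-1]
print(envelope_size(permute(A, cm)), envelope_size(permute(A, rcm)))   # -> 16 6
```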
Theorem 3.4. Reversing the ordering of a sparse symmetric matrix A can change
(improve or impair) the envelope size by at most a factor Δ, and the envelope work by at
most a factor Δ².
Proof. Let v_j denote the vertex in the adjacency graph corresponding to the jth column
of A (in the original ordering), so that the jth column width is c_j. Let
Ã denote the permuted matrix obtained by reversing the column and row
ordering of A. We have the inequality c̃_{n+1−j} ≤ Δ c_j; summing this inequality over j
from one to n, we obtain Esize(Ã) ≤ Δ Esize(A). By symmetry, the inequality
Esize(A) ≤ Δ Esize(Ã) holds as well.
The inequality on the envelope work follows by a similar argument from the equation
defining the envelope work. □
4. Quadratic assignment formulation of 2- and 1-sum problems. We formulate
the 2- and 1-sum problems as quadratic assignment problems in this section.
4.1. The 2-sum problem. Let the vector p = (1, 2, …, n)^T, and let α be a
permutation vector, i.e., a vector whose components form a permutation of 1, …, n. With α we
may associate a permutation matrix X with elements x_{ij} ∈ {0, 1}.
It is easily verified that the (α(i), α(j)) element of the permuted matrix X^T A X is the element
a_{ij} of the unpermuted matrix A. Let B denote the matrix with elements b_{ij} = ij; note that
B = p p^T. We denote the set of all permutation vectors with n components by S_n.
We write the 2-sum as a quadratic form involving the Laplacian matrix Q:
  σ₂(A, α)² = Σ_{(v_i, v_j) ∈ E} (α_i − α_j)² = α^T Q α.
The transformation makes use of (3.1).
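The identity σ₂(A, α)² = α^T Q α can be checked by brute force on a small graph (our sketch; it enumerates all permutation vectors of the example graph):

```python
import numpy as np
from itertools import permutations

def laplacian(adj):
    return np.diag(adj.sum(axis=1)) - adj

edges = [(0, 1), (1, 2), (2, 3), (0, 2)]
n = 4
M = np.zeros((n, n))
for i, j in edges:
    M[i, j] = M[j, i] = 1.0
Q = laplacian(M)

# For every permutation vector alpha, the 2-sum equals the quadratic form.
match = True
for alpha in permutations(range(1, n + 1)):
    a = np.array(alpha, dtype=float)
    two_sum = sum((a[i] - a[j]) ** 2 for i, j in edges)
    match = match and np.isclose(two_sum, a @ Q @ a)
print(match)   # -> True
```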
This quadratic form can be expressed as a quadratic assignment problem:
  min_{α ∈ S_n} α^T Q α.
There is also a trace formulation of the QAP in which the variables are the elements of
the permutation matrix X. We obtain this formulation by substituting X p for α. Thus
  min_{α ∈ S_n} α^T Q α = min_X p^T X^T Q X p.
We may consider the last scalar expression as the trace of a 1 × 1 matrix, and then use the
identity tr MN = tr NM to rewrite the right-hand-side of the last displayed equation as
  tr(p^T X^T Q X p) = tr(Q X p p^T X^T) = tr(Q X B X^T).
This is a quadratic assignment problem since it is quadratic in the unknowns x_{ij}, which
are the elements of the permutation matrix X.
to great simplifications and savings in the computation of good lower bounds for the 2-sum
problem.
4.2. The 1-sum problem. Let M be the adjacency matrix of a given symmetric
matrix A and let S denote a 'distance matrix' with elements s_{ij} = |i − j|, both of order n.
Then
  σ₁(A, α) = (1/2) tr(M X S X^T).
Unlike the 2-sum, the matrices involved in the QAP formulation of the 1-sum are both
of rank n. Hence the bounds we obtain for this problem by this approach are considerably
more involved, and will not be considered here.
5. Eigenvalue bounds for the 2-sum problem.
5.1. Orthogonal bounds. A technique for obtaining lower (upper) bounds for the
QAP
  min { tr(Q X B X^T) : X a permutation matrix }
is to relax the requirement that the minimum (maximum) be attained over the class of
permutation matrices. Let u = (1/√n)(1, 1, …, 1)^T denote the normalized n-vector of
all ones. A matrix X of order n is a permutation matrix if and only if it satisfies the following
three constraints:
  (5.1)   X u = u and X^T u = u;
  (5.2)   X^T X = I;
  (5.3)   X ≥ 0.
The first of these, the stochasticity constraint, expresses the fact that each row sum or
column sum of a permutation matrix is one; the second states that a permutation matrix
is orthogonal; and the third that its elements are non-negative. The simplest bounds for a
QAP are obtained when we relax both the stochasticity and non-negativity constraints, and
insist only that X be orthonormal. The following result is from [11]; see also [12].
Theorem 5.1. Let the eigenvalues of a matrix be ordered λ₁ ≤ λ₂ ≤ ··· ≤ λ_n.
Then, as X varies over the set of orthogonal matrices, the following upper and lower bounds
hold:
  Σ_{i=1}^n λ_i(Q) λ_{n+1−i}(B) ≤ tr(Q X B X^T) ≤ Σ_{i=1}^n λ_i(Q) λ_i(B).
The Laplacian matrix Q has λ₁(Q) = 0, and the rank-one matrix B = p p^T has the single
nonzero eigenvalue λ_n(B) = p^T p = n(n+1)(2n+1)/6. Hence the lower bound in the theorem
above is zero, and the upper bound is (1/6) λ_n(Q) n(n+1)(2n+1).
5.2. Projection bounds. Stronger bounds can be obtained by a projection technique
described by Hadley, Rendl, and Wolkowicz [19]. The idea here is to satisfy the stochasticity
constraints in addition to the orthonormality constraints, and relax only the non-negativity
constraints. This technique involves projecting a permutation matrix X into a subspace
orthogonal to the stochasticity constraints (5.1) by means of an eigenprojection.
Let the n × (n − 1) matrix V be an orthonormal basis for the orthogonal complement
of u. By the choice of V, it satisfies two properties: V^T u = 0 and V^T V = I_{n−1}; thus
P = (u, V) is an orthonormal matrix of order n.
Observe that X = P (P^T X P) P^T. This suggests that we take
  (5.4)   X = u u^T + V Y V^T.
Note that with this choice, the stochasticity constraints are satisfied.
Furthermore, if X is an orthonormal matrix of order n satisfying (5.1), then Y = V^T X V
is orthonormal, and this implies that Y is an orthonormal matrix of order n − 1. Conversely,
if Y is orthonormal of order n − 1, then the matrix X obtained by the construction above
is orthonormal of order n. The non-negativity constraint X ≥ 0 becomes, from (5.4),
V Y V^T ≥ −u u^T. These facts will enable us to express the original QAP in terms of a
projected QAP in the matrix of variables Y.
To obtain the projected QAP, we substitute the representation of X from (5.4) into the
objective function tr(Q X B X^T). Since Q u = 0 by the construction of the Laplacian, terms of
the form Q u u^T drop out. The remaining cross terms can be rewritten using
the identity tr MN = tr NM for an n × k matrix M and a k × n matrix
N; each such term is zero since u^T V = 0. Hence the only nonzero term in the objective
function is
  tr(Q̂ Y B̂ Y^T),
where M̂ ≡ V^T M V
is the projection of a matrix M, so that Q̂ = V^T Q V and B̂ = V^T B V.
We have obtained the projected QAP in terms of the matrix Y of order n − 1;
the constraint that X be a permutation matrix now imposes the constraints that Y is orthonormal
and that V Y V^T ≥ −u u^T. We obtain lower and upper bounds in terms of the
eigenvalues of the matrices Q̂ and B̂
by relaxing the non-negativity constraint again.
Theorem 5.2. The following upper and lower bounds hold for the 2-sum problem:
  λ₂(Q) n(n² − 1)/12 ≤ σ₂(A, α)² ≤ λ_n(Q) n(n² − 1)/12.
Proof. If we apply the orthogonal bounds to the projected QAP, we get
  λ₁(Q̂) λ_{n−1}(B̂) ≤ tr(Q̂ Y B̂ Y^T) ≤ λ_{n−1}(Q̂) λ_{n−1}(B̂).
The vector u is the eigenvector of Q corresponding to the zero eigenvalue, and hence eigenvectors
corresponding to higher Laplacian eigenvalues are orthogonal to it. Thus any such
eigenvector x_j can be expressed as x_j = V y_j. Substituting this last equation into the eigenvalue
equation and pre-multiplying by V^T, we obtain Q̂ y_j = λ_j(Q) y_j.
Hence λ_j(Q̂) = λ_{j+1}(Q) for j = 1, …, n − 1. Also, since B = p p^T has rank one, all
eigenvalues of B̂ except the largest are zero. Hence it remains to compute the largest eigenvalue of B̂.
From the representation I = u u^T + V V^T we get
  λ_{n−1}(B̂) = p^T V V^T p = p^T p − (u^T p)² = n(n² − 1)/12.
We get the result by substituting these eigenvalues into the bounds for the 2-sum. □
For justifying the spectral algorithm for minimizing the 2-sum, we observe that the lower
bound is attained by the matrix
  (5.5)   X₀ = u u^T + V R S^T V^T,
where R (S) is a matrix of eigenvectors of Q̂ (B̂), and the eigenvectors correspond to the
eigenvalues of Q̂ (B̂) in non-decreasing (non-increasing) order.
The result given above has been obtained by Juvan and Mohar [24] without using a QAP
formulation of the 2-sum. We have included this proof for two reasons: First, in the next
subsection, we show how the lower bound may be strengthened by diagonal perturbations
of the Laplacian. Second, in the following section, we consider the problem of finding a
permutation matrix closest to the orthogonal matrix attaining the lower bound.
5.3. Diagonal perturbations. The lower bound for the 2-sum can be further improved
by perturbing the Laplacian matrix Q by a diagonal matrix Diag(d), where d is an
n-vector, and then using an optimization routine to maximize the smallest eigenvalue of the
perturbed matrix.
Choosing the elements of d such that they sum to zero, i.e., u^T d = 0, simplifies
the bounds we obtain, and hence we make this assumption in this subsection. We begin by
denoting Q(d) ≡ Q + Diag(d) and expressing the objective function in terms of Q(d).
The second term can be written as a linear assignment problem (LAP), since one of the
matrices involved is diagonal. Let diag(B) denote the
n-vector formed from the diagonal elements of B.
We now proceed, as in the previous subsection, to obtain projected bounds for the
quadratic term, and thus for f(X). Note that u^T Q(d) u = (1/n) Σᵢ dᵢ = 0,
since the elements of d sum to zero. We let r(B) denote
the vector of row-sums of the elements of B.
With notation as in the previous subsection, we substitute X = u u^T + V Y V^T in the
quadratic term in f(X). The first term, tr(Q(d) u u^T B u u^T), vanishes; the
second and third terms are equal, and their sum can be transformed into a term that is
linear in the projected variables Y, and we shall find it convenient to
express it in terms of X by the substitution V Y V^T = X − u u^T,
since the second term that then arises is equal to tr(u^T d r(B)^T u), which is zero by the choice of d.
Finally, the fourth term becomes tr(Q̂(d) Y B̂ Y^T),
and as before Q̂(d) = V^T Q(d) V.
Putting it all together, we obtain an expression for the objective function.
Observe that the first term is quadratic in the projected variables Y, and the remaining terms
are linear in the original variables X. Our lower bound for the 2-sum shall be obtained by
minimizing the quadratic and linear terms separately.
We can simplify the linear assignment problem by noting that diag(B) = sq(p), the vector
with ith component equal to i². Hence the final expression for the linear assignment problem
is
  min_X tr(d sq(p)^T X^T).
The minimum value of this problem, denoted by L(d) (the minimum over the permutation
matrices X, for a given d), can be computed by sorting the components of d and of sq(p)
in opposite orders.
The eigenvalues of B̂ can be computed as in the previous subsection. We may choose d
to maximize the lower bound. Thus this discussion leads to the following result.
Theorem 5.3. The minimum 2-sum of a symmetric matrix A can be bounded as
  σ₂(A, α)² ≥ max_d { λ₁(Q̂(d)) n(n² − 1)/12 + L(d) },
where the components of the vector d sum to zero. □
6. Computing an approximate solution from the lower bound. Consider the
problem of finding a permutation matrix Z "closest" to an orthogonal matrix X 0 that attains
the lower bound in Theorem 5.2. We show in this section that sorting the second Laplacian
eigenvector components in non-increasing (also non-decreasing) order yields a permutation
matrix that solves a linear approximation to the problem. This justifies the spectral approach
for minimizing the 2-sum.
From (5.5), the orthogonal matrix X₀ = u u^T + V R S^T V^T, where R (S) is a matrix of
eigenvectors of Q̂ (B̂) corresponding to the eigenvalues of Q̂ (B̂) in increasing (decreasing)
order. We begin with a preliminary discussion of some properties of the matrix X₀ and the
eigenvectors of Q. For j = 1, …, n − 1, let the jth column of R be denoted by r_j, and
similarly let s_j denote the jth column of S. Then s₁ = c V^T p, where c is a normalization
constant; for j ≥ 2, the vector s_j is orthogonal to V^T p, i.e.,
  (6.1)   s_j^T V^T p = 0, j ≥ 2.
Recall from the previous section that a second Laplacian eigenvector x₂ = V r₁.
Now we can formulate the "closest" permutation matrix problem more precisely. The
minimum 2-sum problem may be written as
  min_Z tr((Q + αI) Z B Z^T), Z a permutation matrix.
We have chosen a positive shift α to make the shifted matrix positive definite and hence
to obtain a weighted norm by making the square root nonsingular. It can be verified that
the shift has no effect on the minimizer since it adds only a constant term to the objective
function.
We substitute Z = X₀ + (Z − X₀) and expand the 2-sum about X₀ to obtain
  tr((Q + αI) Z B Z^T) = tr((Q + αI) X₀ B X₀^T) + 2 tr((Q + αI) X₀ B (Z − X₀)^T)
  + tr((Q + αI)(Z − X₀) B (Z − X₀)^T).
The first term on the right-hand-side is a constant since X₀ is a given orthogonal matrix;
the third term is a quadratic in the difference Z − X₀, hence we neglect it to obtain a
linear approximation. It follows that we can choose a permutation matrix Z close to X₀ to
approximately minimize the 2-sum by solving
  min_Z tr((Q + αI) X₀ B Z^T).
Substituting for X₀ from (5.5) in this linear assignment problem, and noting that B = p p^T,
we expand the objective into terms involving u u^T and V R S^T V^T; call the resulting
equation (6.4). The second term on the right-hand-side is a constant since
  tr(u u^T B Z^T) = u^T B u.
Here we have substituted Z^T u = u. We proceed to simplify the first term in (6.4),
which is
  tr(Q V R S^T V^T B Z^T).
Writing R S^T = Σ_j r_j s_j^T, we find from (6.1) that s_j^T V^T p = 0 for j ≥ 2; hence only the first term in
the sum survives. Noting that s₁ = c V^T p and x₂ = V r₁, this becomes a multiple of tr(x₂ p^T Z^T).
The third term in (6.4) can be simplified in like manner, and hence, ignoring the constant
second term, the problem becomes
  min_Z c (λ₂(Q) + α) tr(x₂ p^T Z^T).
Hence we are required to choose a permutation matrix Z to minimize tr(x₂ p^T Z^T) =
x₂^T Z p. The solution to this problem is to choose Z to correspond to a permutation of
the components of x₂ in non-increasing order, since the components of the vector p are in
increasing order. Note that −x₂ is also an eigenvector of the Laplacian matrix, and since the
positive or negative signs of the components are chosen arbitrarily, sorting the eigenvector
components into non-decreasing order also gives a permutation matrix Z closest, within a
linear approximation, to a different choice for the orthogonal matrix X₀ (see (5.5)).
Similar techniques can be used to show that if one is interested in maximizing the 2-sum,
then a closest permutation matrix to the orthogonal matrix that attains the upper bound
in Theorem 5.2 is approximated by sorting the components of the Laplacian eigenvector x_n
(corresponding to the largest eigenvalue λ_n(Q)) in non-decreasing (non-increasing) order.
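The resulting spectral procedure is very short. The sketch below (ours, using a dense eigensolver) orders the vertices by sorting the components of a second Laplacian eigenvector; for a path graph presented with scrambled labels, the eigenvector components are monotone along the path, so the spectral ordering recovers the natural numbering up to reversal.

```python
import numpy as np

def laplacian(adj):
    return np.diag(adj.sum(axis=1)) - adj

def spectral_order(adj):
    """Order vertices by sorting the components of a second Laplacian eigenvector."""
    evals, evecs = np.linalg.eigh(laplacian(adj))
    x2 = evecs[:, 1]              # eigenvector for lambda_2 (graph assumed connected)
    return np.argsort(x2)

# A path 0-1-2-3-4 whose vertices carry scrambled labels
n = 5
perm = np.array([2, 0, 4, 1, 3])   # vertex perm[k] is the k-th vertex along the path
M = np.zeros((n, n))
for k in range(n - 1):
    i, j = perm[k], perm[k + 1]
    M[i, j] = M[j, i] = 1.0

order = spectral_order(M)
print(list(order))   # the path order perm, or its reversal (sign of x2 is arbitrary)
```

Because −x₂ is equally valid, the recovered ordering may come out reversed, exactly as noted in the text.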
7. Asymptotic behavior of envelope parameters. In this section, we first prove
that graphs with good separators have asymptotically small envelope parameters, and next
study the asymptotic behavior of the lower bounds on the envelope parameters as a function
of the problem size.
7.1. Upper bounds on envelope parameters. Let α, β, and γ be constants such
that 1/2 ≤ α < 1, β > 0, and 1/2 ≤ γ < 1. A class of graphs G has n^γ-
separators if every graph G on n > n₀ vertices in G can be partitioned into three sets A, B,
S such that no vertex in A is adjacent to any vertex in B, and the number of vertices in the
sets are bounded by the relations |A|, |B| ≤ αn and |S| ≤ βn^γ. If n ≤ n₀, then we choose
the separator S to consist of the entire graph. If n > n₀, then by the choice of n₀
we separate the graph into two parts A and B by means of a separator S. The assumption
that γ is at least a half is not a restriction for the classes of graphs that we are interested
in here: planar graphs have n^{1/2}-separators, and overlap graphs [30] embedded in d ≥ 2
dimensions have n^{(d−1)/d}-separators. The latter class includes "well-shaped" finite element
graphs in d dimensions, i.e., finite element graphs with elements of bounded aspect ratio.
Theorem 7.1. Let G be a class of graphs that has n^γ-separators and maximum vertex
degree bounded by Δ. The minimum envelope size Esize_min(G) of any graph G ∈ G on n
vertices is O(n^{1+γ}).
Proof. If n ≤ n₀, then we order the vertices of G arbitrarily. Otherwise, let a separator
S separate G into the two sets A and B, where we choose the subset B to have no more
vertices than A. We consider a "modified nested dissection" ordering of G that orders the
vertices in A first, the vertices in S next, and the vertices in B last. (See the ordering in
Figure 2.1, where S corresponds to the set of vertices in the middle column.)
The contribution to the envelope E_S made by the vertices in S is bounded by the product
of the maximum row-width of a vertex in S and the number of vertices in S.
We also consider the contribution made by vertices in B that are adjacent to nodes in S,
as a consequence of numbering the nodes in S. There are at most Δ|S| such vertices in B.
Since these vertices are not adjacent to any vertex in A, the contribution E_B made by them
is bounded as well. Let an ((1 − a)n) denote the number of vertices in the subset A (B). Adding the contributions
from the two sets of nodes in the previous paragraph, we obtain the recurrence relation (7.1).
We claim that E(n) satisfies the bound (7.2),
for suitable constants C₁ and C₂ to be chosen later. We prove the claim by induction on n.
For n ≤ n₀, the claim may be satisfied by choosing C₁ to be greater than or equal to a
constant depending on n₀.
Now consider the case when n > n₀. Let the maximum in the recurrence relation (7.1)
be attained for an and (1 − a)n; both are smaller than n, and
thus the inductive hypothesis can be applied to the subgraphs induced by
A and B. Hence we substitute the bound (7.2) into the recurrence relation (7.1).
For the claim to be satisfied, this bound must be less than the right-hand-side of the inequality
(7.2). We prove this by considering the coefficients of each of the terms n^{1+γ} and n^{2γ}.
Consider the n^{1+γ} term first. It is easy to see that a^{1+γ} + (1 − a)^{1+γ} − 1 is
negative. Furthermore, this expression attains its maximum when
a is equal to α. Denote this maximum value by ε ≡ α^{1+γ} + (1 − α)^{1+γ} − 1. Equating the
coefficients of n^{1+γ} in the recurrence relation gives an inequality under which
the first term in the claimed asymptotic bound on E(n) holds. Both this
inequality and the condition on C₁ imposed by n₀ are satisfied if we choose C₁ sufficiently large.
We simplify the coefficient of the n^{2γ} term a bit before proceeding to analyze it. We
have
  a^{2γ} log_{1/α}(an) + (1 − a)^{2γ} log_{1/α}((1 − a)n)
  ≤ (a^{2γ} + (1 − a)^{2γ}) log_{1/α} n ≡ θ log_{1/α} n.
In the transformations we have used the following facts: 1 − a ≤ a, since a ≥ 1/2; since
2γ is greater than or equal to one, the maximum of a^{2γ} + (1 − a)^{2γ} is
attained for a = α; this maximum value θ is less than one. Hence for the claim to hold, we
require an inequality relating C₂ and θ.
This last inequality is satisfied if we choose C₂ sufficiently large.
A similar proof yields an upper bound Wbound_min on the work
in an envelope-Cholesky factorization. Hence good separators imply small envelope size and
work. Although we have used a "modified nested dissection" ordering to prove asymptotic
upper bounds, we do not advocate the use of this ordering for envelope-reduction. Other
envelope-reducing algorithms considered in this paper are preferable, since they are faster
and yield smaller envelope parameters.
7.2. Asymptotic behavior of lower bounds. In this subsection we consider the
implications of the spectral lower bounds that we have obtained. We denote the second
Laplacian eigenvalue λ₂(Q) by λ₂ for the sake of brevity in this subsection. We use the asymptotic behavior of the
second eigenvalue together with the lower bounds we have obtained to predict the behavior
of envelope parameters. For the envelope size, we make use of Theorem 3.2; for the envelope
work, we employ Theorem 3.3.
The bounds on envelope parameters are tight for dense and random graphs (matrices).
For instance, the full matrix (the complete graph) has λ₂ = n and envelope size n(n − 1)/2,
while the lower bound on the envelope size is approximately n²/6; similarly for the bound on the envelope work Ework_min.
The predicted lower bound is within a factor of three of the envelope size. These bounds are
also asymptotically tight for random graphs where each possible edge is present in the graph
with a given constant probability p, since the second Laplacian eigenvalue then satisfies λ₂ ≈ pn [23].
More interesting are the implications of these bounds for degree-bounded finite element
meshes in two and three dimensions. We will employ the following result proved recently by
Spielman and Teng [38].
Theorem 7.2. The second Laplacian eigenvalue of an overlap graph embedded in d
dimensions is bounded by O(n^{−2/d}). □
Table 7.1. Asymptotic upper and lower bounds (LB and UB) on envelope size and work for an overlap
graph in d dimensions, with separator size O(n^{1−1/d}) and λ₂ = O(n^{−2/d}).
Planar graphs are overlap graphs in 2 dimensions, and well-shaped meshes in 3 dimensions
are also overlap graphs with d = 3.
Table 7.1 summarizes the asymptotic lower and upper bounds on the envelope parameters
for a well-shaped mesh embedded in d dimensions. The most useful values are d = 2 and d = 3.
As before, the lower bound on the envelope size is from Theorem 3.2, while the lower
bound on the envelope work is from Theorem 3.3. The upper bound on the envelope size
follows from Theorem 7.1, and the upper bound on envelope work follows from the upper
bound on Wbound(A), discussed at the end of the proof of that theorem.
The lower bounds are obtained for problems where the upper bounds on the second
eigenvalue are asymptotically tight. This is reasonable for many problems, for instance model
problems in partial differential equations. Note that the regular finite element mesh in a
discretization of Laplace's equation in two dimensions (Neumann boundary conditions) has
λ₂ = Θ(h²), where h is the smallest diameter of an element (smallest mesh spacing
for a finite difference mesh). The regular three-dimensional mesh in the discretized Laplace's
equation with Neumann boundary conditions satisfies λ₂ = Θ(h²) as well.
For planar problems, the lower bound on the envelope size is Ω(n), while the upper
bound is O(n^{1.5}). For well-shaped three-dimensional meshes, these bounds are Ω(n^{4/3}) and
O(n^{5/3}). The lower bounds on the envelope work are weaker since they are obtained from
the corresponding bounds on the envelope size. Direct methods for solving sparse systems
have storage requirements bounded by O(n log n) and work bounded by O(n^{1.5}) for a two-dimensional
mesh; in well-shaped three-dimensional meshes, these are O(n^{4/3}) and O(n²).
These results suggest that when a two-dimensional mesh possesses a small second Laplacian
eigenvalue, envelope methods may be expected to work well. Similar conclusions should
hold for three-dimensional problems when the number of mesh-points along the third dimension
is small relative to the number in the other two dimensions, and for two-dimensional
surfaces embedded in three-dimensional space.
8. Computational results. We present computational results to verify how well the
spectral ordering reduces the 2-sum. We report results on two sets of problems.
The first set of problems, shown in Table 8.1, is obtained from John Richardson's (Think-
ing Machines Corporation) program for triangulating the sphere. The spectral lower bounds
reported are from Theorem 5.2. Gap is the ratio with numerator equal to the difference
between the 2-sum and the lower bound, and the denominator equal to the 2-sum. The
results show that the spectral reordering algorithm computes values within a few percent of
the optimal 2-sum, since the gap between the spectral 2-sum and the lower bound is within
that range.
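The quantities in these tables are straightforward to reproduce on a toy problem (our sketch; it uses the lower bound λ₂ n(n² − 1)/12 of Theorem 5.2 and a path graph, whose natural ordering has 2-sum n − 1):

```python
import numpy as np

def laplacian(adj):
    return np.diag(adj.sum(axis=1)) - adj

# Path graph on n vertices; the natural ordering gives each edge difference 1.
n = 5
M = np.zeros((n, n))
for i in range(n - 1):
    M[i, i + 1] = M[i + 1, i] = 1.0
Q = laplacian(M)
lam2 = np.linalg.eigvalsh(Q)[1]

alpha = np.arange(1, n + 1, dtype=float)       # natural (optimal) ordering of the path
two_sum = alpha @ Q @ alpha                    # equals n - 1 = 4 here
lower = lam2 * n * (n * n - 1) / 12.0          # projected eigenvalue lower bound
gap = (two_sum - lower) / two_sum
print(round(two_sum, 3), round(lower, 3), round(gap, 3))   # -> 4.0 3.82 0.045
```

The gap of about 4.5% on this tiny example is of the same order as the single-digit gaps reported for the sphere triangulations.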
n      |E|    λ₂       Lower bound  2-sum    Gap (%)
1,026  3,072  4.17e-2  3.75e+6      4.05e+6  7.4
Table 8.1. 2-sums from the spectral reordering algorithm and lower bounds for triangulations of the sphere.
Problem  n      |E|     λ₂       Lower bound  2-sum    Gap (%)
BARTH    6,691  19,748  2.62e-3  6.54e+7      6.69e+7  2.2
Table 8.2. 2-sums from the spectral reordering algorithm and lower bounds for some problems from the Boeing-Harwell and NASA collections.
Table 8.2 contains the second set of problems, taken from the Boeing-Harwell and NASA
collections. Here the bounds are weaker than the bounds in Table 8.1. These problems have
two features that distinguish them from the sphere problems. Many of them have less regular
degree distributions-e.g., NASA1824 has maximum degree 41 and minimum degree 5. They
also represent more complex geometries. Nevertheless, these results imply that the spectral
2-sum is within a factor of two of the optimal value for these problems. These results are
somewhat surprising since we have shown that minimizing the 2-sum is NP-complete.
The gap between the computed 2-sums and the lower bounds could be further reduced
in two ways. First, a local reordering algorithm applied to the ordering computed by the
spectral algorithm might potentially decrease the 2-sum. Second, the lower bounds could be
improved by incorporating diagonal perturbations to the Laplacian.
9. Conclusions. The lower bounds on the 2-sums show that the spectral reordering
algorithm can yield nearly optimal values, in spite of the fact that minimizing the 2-sum is
an NP-complete problem. To the best of our knowledge, these are the first results providing
reasonable bounds on the quality of the orderings generated by a reordering algorithm for
minimizing envelope-related parameters. Earlier work had not addressed the issue of the
quality of the orderings generated by the algorithms. Unfortunately the tight bounds on the
2-sum do not lead to tight bounds on the envelope parameters. However, we have shown
that problems with bounded separator sizes have bounded envelope parameters and have
obtained asymptotic lower and upper bounds on these parameters for finite element meshes.
Our analysis further shows that the spectral orderings attempt to minimize the 2-sum
rather than the envelope parameters. Hence a reordering algorithm could be used in a post-processing
step to improve the envelope and wavefront parameters from a spectral ordering.
A combinatorial reordering algorithm called the Sloan algorithm has been recently used to
reduce envelope size and front-widths by Kumfert and Pothen [25]. Currently this algorithm
computes the lowest values of the envelope parameters on a collection of finite element
meshes.
Acknowledgments
. Professor Stan Eisenstat (Yale University) carefully read two drafts
of this paper and pointed out several errors. Every author should be so blessed! Thanks,
Stan.
REFERENCES
A spectral algorithm for envelope reduction of sparse matrices
Reducing the bandwidth of sparse symmetric matrices
Ordering methods for preconditioned conjugate gradients methods applied to unstructured grid problems
Direct Methods for Sparse Matrices
The effect of ordering on preconditioned conjugate gradients
The use of profile reduction algorithms with a frontal code
Graph Algorithms
Algebraic connectivity of graphs
in Surveys in Combinatorial Optimization
Matrix inequalities in the Löwner orderings
Computers and Intractability: A Guide to the Theory of NP- Completeness
Computer implementation of the finite element method
Computer Solution of Large Sparse Positive Definite Systems
Algorithm 509: A hybrid profile reduction algorithm
A new algorithm for finding a pseudoperipheral node in a graph
A new lower bound via projection for the quadratic assignment problem
A spectral approach to bandwidth and separator problems in graphs.
An improved spectral graph partitioning algorithm for mapping parallel computations
Laplace eigenvalues and bandwidth-type invariants of graphs
A refined spectral algorithm to reduce the envelope and wavefront of sparse matrices.
Implementations of the Gibbs-Poole-Stockmeyer and Gibbs-King algorithms
Minimum profile of grid networks in structure analysis.
Comparative analysis of the Cuthill-McKee and the reverse Cuthill-McKee ordering algorithms for sparse matrices
in Graph Theory and Sparse Matrix Computation
Eigenvalues in combinatorial optimization.
Node and element resequencing using the Laplacian of a finite element graph
Partitioning sparse matrices with eigenvectors of graphs
A projection technique for partitioning the nodes of a graph
Partitioning of unstructured problems for parallel processing
An algorithm for profile and wavefront reduction of sparse matrices
263210 | Perturbation Analyses for the QR Factorization. | This paper gives perturbation analyses for $Q_1$ and $R$ in the QR factorization $A=Q_1R$, $Q_1^TQ_1=I$ for a given real $m\times n$ matrix $A$ of rank $n$ and general perturbations in $A$ which are sufficiently small in norm. The analyses more accurately reflect the sensitivity of the problem than previous such results. The condition numbers here are altered by any column pivoting used in $AP=Q_1R$, and the condition number for $R$ is bounded for a fixed $n$ when the standard column pivoting strategy is used. This strategy also tends to improve the condition of $Q_1$, so the computed $Q_1$ and $R$ will probably both have greatest accuracy when we use the standard column pivoting strategy.First-order perturbation analyses are given for both $Q_1$ and $R$. It is seen that the analysis for $R$ may be approached in two ways---a detailed "matrix--vector equation" analysis which provides a tight bound and corresponding condition number, which unfortunately is costly to compute and not very intuitive, and a simpler "matrix equation" analysis which provides results that are usually weaker but easier to interpret and which allows the efficient computation of satisfactory estimates for the actual condition number. These approaches are powerful general tools and appear to be applicable to the perturbation analysis of any matrix factorization. | Introduction
. The QR factorization is an important tool in matrix computations
(see for example [4]): given an m \Theta n real matrix A with full column rank,
there exists a unique m \Theta n real matrix Q 1 with orthonormal columns, and a unique
nonsingular upper triangular n \Theta n real matrix R with positive diagonal entries such
that
The matrix Q 1 is referred to as the orthogonal factor, and R the triangular factor.
Suppose tG has the unique QR factorization
differentiate R(t) T with respect to t and set
which with given A and G is a linear equation for the upper triangular matrix -
R(0).
determines the sensitivity of R(t) at and so the core of any perturbation
analysis for the QR factorization effectively involves the use of (1.1) to determine
bound -
R(0). There are essentially two main ways of approaching this problem. We
now discuss these, together with some very recent history.
Xiao-Wen Chang [1, 2, 3] was the first to realize that most of the published results
on the sensitivity of factorizations, such as LU, Cholesky, and QR, were extremely
School of Computer Science, McGill University, Montreal, Quebec, Canada, H3A 2A7,
(chang@cs.mcgill.ca), (chris@cs.mcgill.ca). The research of the first two authors was supported by
NSERC of Canada Grant OGP0009236.
y Department of Computer Science and Institute for Advanced Computer Studies, University of
Maryland, College Park, MD 20742, U.S.A., (stewart@cs.umd.edu). G. W. Stewart's research was
supported in part by the US National Science Foundation under grant CCR 95503126.
G. W. STEWART
weak for many matrices. He originated a general approach to obtaining provably tight
results and sharp condition numbers for these problems. We will call this the "matrix-
vector equation" approach. For the QR factorization this involves expressing (1.1) as
a matrix-vector equation of the form
where WR and ZR are matrices involving the elements of R, and vec(\Delta) transforms its
argument into a vector of its elements. The notation uvec(\Delta) denotes a variant of vec(\Delta)
defined in Section 5.2. Chang also realized that the condition of many such problems
was significantly improved by pivoting, and provided the first firmly based theoretical
explanations as to why this was so.
G. W. Stewart [11, 3] was stimulated by Chang's work to understand this more
deeply, and present simple explanations for what was going on. Before Chang's work,
the most used approach to perturbation analyses of factorizations was what we will
call the "matrix equation" approach, which keeps equations like (1.1) in their matrix-matrix
form. Stewart [11] (also see [3]) used an elegant construct, partly illustrated by
the "up" and "low" notation in Section 2, which makes the matrix equation approach
a far more usable and intuitive tool. He combined this with deep insights on scaling
to produce new matrix equation analyses which are appealingly clear, and provide
excellent insight into the sensitivities of the problems. These new matrix equation
analyses do not in general provide tight results like the matrix-vector equation analyses
do, but they are usually more simple, and provide practical estimates for the true
condition numbers obtained from the latter.
It was the third author, Chris Paige, who insisted on writing this brief history,
and delineating these exceptional contributions of Chang and Stewart. It requires a
combination of the two analyses to provide a full understanding of the cases we have
examined. The interplay of the two approaches was first revealed in [3], and we hope
to convey it even more clearly here for the QR factorization.
The perturbation analysis for the QR factorization has been considered by several
authors. The first norm-based result for R was presented by Stewart [9]. That was
further modified and improved by Sun [13]. Using different approaches Sun [13] and
Stewart [10] gave first order normwise perturbation analyses for R. A first order
componentwise perturbation analysis for R has been given by Zha [17], and a strict
componentwise analysis for R has been given by Sun [14]. These papers also gave
analyses for Q 1 . More recently Sun [15] gave strict perturbation bounds for Q 1 alone.
The purpose of this paper is to establish new first order perturbation bounds
which are generally sharper than the equivalent results for the R factor in [10, 13],
and more straightforward than the sharp result in [15] for the Q 1 factor.
In Section 2 we define some notation and give a result we will use throughout the
paper. In Section 3 we will survey important key results on the sensitivity of R and Q 1 .
In Section 4 we give a refined perturbation analysis for showing in a simple way
why the standard column pivoting strategy for A can be beneficial for certain aspects
of the sensitivity of Q 1 . In Section 5 we analyze the perturbation in R, first by
the straightforward matrix equation approach, then by the more detailed and tighter
matrix-vector equation approach. We give numerical results and suggest practical
condition estimators in Section 6, and summarize and comment on our findings in
Section 7.
PERTURBATION ANALYSES FOR THE QR FACTORIZATION 3
2. Notation and basics. To simplify the presentation, for any n × n matrix X,
we define the upper and lower triangular matrices up(X) and low(X): up(X) is the
upper triangular part of X with its diagonal halved, and low(X) ≡ X − up(X), so that
X = up(X) + low(X). For general X, low(X) = up(X^T)^T. For symmetric X,
up(X) + up(X)^T = X and ‖up(X)‖_F ≤ ‖X‖_F / √2.
To illustrate a basic use of "up", we show that for any given n × n nonsingular upper
triangular R and any given n × n symmetric M, the equation of the form (cf. (1.1))

R^T U + U^T R = M

always has a unique upper triangular solution U. Since UR^{-1} is upper triangular in

UR^{-1} + (UR^{-1})^T = R^{-T} M R^{-1},

we see immediately that UR^{-1} = up(R^{-T} M R^{-1}),
so UR^{-1} and therefore U is uniquely defined. We will describe other uses later.
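As a concrete check of this construct, here is a small numpy sketch (our illustration, not code from the paper): taking up(X) to be the upper triangular part of X with its diagonal halved, the unique upper triangular solution of R^T U + U^T R = M for symmetric M is U = up(R^{−T} M R^{−1}) R.

```python
import numpy as np

def up(X):
    # upper triangular part of X with the diagonal halved,
    # so that X = up(X) + up(X.T).T for any square X
    return np.triu(X) - 0.5 * np.diag(np.diag(X))

rng = np.random.default_rng(0)
n = 5
R = np.triu(rng.standard_normal((n, n))) + 5.0 * np.eye(n)  # nonsingular upper triangular
B = rng.standard_normal((n, n))
M = B + B.T                                                  # symmetric right-hand side

# S = R^{-T} M R^{-1}; then U = up(S) R is the unique upper triangular solution
S = np.linalg.solve(R.T, np.linalg.solve(R.T, M).T).T
U = up(S) @ R

assert np.allclose(U, np.triu(U))            # U is upper triangular
assert np.allclose(R.T @ U + U.T @ R, M)     # U solves the equation
```

The identity behind the last assertion is R^T up(S) R + R^T up(S)^T R = R^T S R = M, using up(S) + up(S)^T = S for symmetric S.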
Our perturbation bounds will be tighter if we bound separately the perturbations
along the column space of A and along its orthogonal complement. Thus we introduce
the following notation. For real m × n A, let P_1 be the orthogonal projector onto R(A),
and P_2 be the orthogonal projector onto R(A)^⊥. For real m × n ΔA define ΔA_i ≡ P_i ΔA,
i = 1, 2, so ΔA = ΔA_1 + ΔA_2. When ε > 0 in (2.5) we also define G ≡ ΔA/ε and
G_i ≡ ΔA_i/ε, so for the QR factorization we have P_1 = Q_1 Q_1^T.
We will use the following standard result.
Lemma 2.1. For real m \Theta n A with rank n, real
is the orthogonal projector onto R(A).
Proof. Let A have the QR factorization
which necessarily has full column rank if kQ T
(A), the smallest singular
value of A. But this inequality is just (2.8). 2
Corollary 2.2. If nonzero \DeltaA satisfies (2.8), then for ffl and G defined in (2.5)
and (2.6), A + tG has full column rank and therefore a unique QR factorization for
all jtj - ffl. 2
3. Previous norm-based results. In this section we summarize the strongest
norm-based results by previous authors. We first give a derivation of what is essentially
Sun's [13] and Stewart's [10] first order normwise perturbation result for R, since their
techniques and results will be useful later.
Theorem 3.1 [13]. Let A ∈ R^{m×n} be of full column rank, with the QR factorization
A = Q_1 R, and let ΔA be a real m × n matrix. If ‖ΔA‖_2 < σ_min(A), then A + ΔA
has a unique QR factorization

A + ΔA = (Q_1 + ΔQ_1)(R + ΔR),   (3.1)

where, with ε as in (2.5), ‖ΔR‖_F ≤ √2 κ_2(A) ε ‖R‖_2 + O(ε²).   (3.2)
Proof. Let G ≡ ΔA/ε (if ΔA = 0 the theorem is trivial). From Corollary 2.2, A + tG
has the unique QR factorization A + tG = Q_1(t)R(t)
for all |t| ≤ ε, where
Notice that
It is easy to verify that Q_1(t) and R(t) are twice continuously differentiable for
|t| ≤ ε from the algorithm for the QR factorization. Thus as in (1.1) we have
which (see (2.4)) is a linear equation uniquely defining the elements of -
in terms
of the elements of Q T
G. From upper triangular -
R(0)R \Gamma1 in
we see with (2.1) that
so with (2.3)
and since from (2.5) to (2.7) kQ T
The Taylor expansion for R(t) about t = 0 gives
so that
which, combined with (3.8), gives (3.2). 2
This proof shows that the key point in deriving a first order perturbation bound
for R is the use of (3.5) to give a good bound on the sensitivity ‖Ṙ(0)‖_F. Since
we obtained the bounds directly from (3.7), this was a "matrix equation" analysis.
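A quick numerical illustration of this kind of bound (our own sketch, not an experiment from the paper): perturb A, recompute the QR factorization with the r_ii > 0 sign convention so the factorization is unique, and compare ‖ΔR‖_F against the classical first order estimate of roughly √2 κ_2(A) ‖ΔA‖_F.

```python
import numpy as np

def qr_pos(A):
    # reduced QR with the sign convention r_ii > 0, making the factorization unique
    Q, R = np.linalg.qr(A)
    s = np.sign(np.diag(R))
    s[s == 0] = 1.0
    return Q * s, s[:, None] * R

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 5)) @ np.diag([1.0, 1.0, 1.0, 1e-4, 1.0])  # ill conditioned
Q1, R = qr_pos(A)

eps = 1e-8
dA = eps * rng.standard_normal(A.shape)
_, R2 = qr_pos(A + dA)

rel_dR = np.linalg.norm(R2 - R, "fro") / np.linalg.norm(R, "fro")
rel_dA = np.linalg.norm(dA, "fro") / np.linalg.norm(A, "fro")

# first order, rel_dR should not exceed about sqrt(2) * kappa_2(A) * rel_dA
assert rel_dR <= 1.5 * np.linalg.cond(A) * rel_dA
```

Since A and R share their singular values, the Frobenius-relative quantities on both sides are directly comparable here.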
We now show how a recent perturbation result for Q 1 given by Sun [15] can be
obtained in the present setting, since the analysis can easily be extended to obtain a
more refined result in a simple way. Note the hypotheses of the following theorem are
those of Theorem 3.1, so we can use results from the latter theorem.
Theorem 3.2 [15]. Let A ∈ R^{m×n} be of full column rank, with the QR factorization
A = Q_1 R, and let ΔA be a real m × n matrix. If ‖ΔA‖_2 < σ_min(A)
holds, then A + ΔA has a unique QR factorization A + ΔA = (Q_1 + ΔQ_1)(R + ΔR),
where
Proof. Let G ≡ ΔA/ε (if ΔA = 0 the theorem is trivial). From Corollary 2.2, A + tG
has the unique QR factorization A(t) = Q_1(t)R(t)
for all |t| ≤ ε. Differentiating these at t = 0,
It follows that
so with any Q 2 such that Q
Now using (2.1), we have with (3.7) in Theorem 3.1 that
We see from this, (2.2), (3.11), and kGk from (2.7), that
F
and from the Taylor expansion for
so that k\DeltaQ 1
4. Refined analysis for Q_1. Note that since both Q_1 and Q_1 + ΔQ_1 are orthonormal
matrices in (3.1), and ΔQ_1 is small, the following analysis
assumes n > 1. The results of Sun [15] give about as good as possible overall bounds
on the change ΔQ_1 in Q_1. But by looking at how ΔQ_1 is distributed between R(Q_1)
and its orthogonal complement, and following the ideas in Theorem 3.2, we are able to
obtain a result which is tight but, unlike the related tight result in [15], easy to follow.
It makes clear exactly where any ill-conditioning lies. From (3.14) with Q = [Q_1, Q_2]
square and orthogonal,
and the key is to bound the first term on the right separately from the second. Note
from (3.11) with (2.5) to (2.7) that
where G can be chosen to give equality here for any given A. Hence κ_2(A) is the true
condition number for that part of ΔQ_1 in R(Q_2).
Now for the part of \DeltaQ 1 in R(Q 1 ). We see from (3.12) that
which is skew symmetric with clearly zero diagonal. Let R j and S j denote the leading
blocks of R and S respectively, G j the matrix of the first j columns of G, and
where s j has j-1 elements, then from the upper triangular form of R in (4.2)
ks
ks
F
for any R n\Gamma1 equality is obtained by taking
such that kR \GammaT
It follows that the bound is tight in
so the true condition number for that part of \DeltaQ 1 in R(Q 1 ) is not
In some problems we are mainly interested in the change in Q 1 lying in R(Q 1 ), and
this result shows its bound can be smaller than we previously thought. In particular if
A has only one small singular value, and we use the standard column pivoting strategy
in computing the QR factorization, then R_{n−1} will be quite well-conditioned compared
with R, and we will have ‖R_{n−1}^{-1}‖_2 ≪ ‖R^{-1}‖_2.
5. Perturbation analyses for R. In Section 3 we saw the key to deriving first
order perturbation bounds for R in the QR factorization of full column rank A is the
equation (3.5), which has the general form of finding (bounding) X in terms of given
R and F in the matrix equation
upper triangular, R nonsingular:
Sun [13] and Stewart [10] originally analyzed this using the matrix equation approach
to give the result in Theorem 3.1. We will now analyze it in two new ways. The
first, Stewart's [11, 3] refined matrix equation approach, gives a clear improvement on
Theorem 3.1, while the second, Chang's [2, 3] matrix-vector equation approach, gives
a further improvement still - provably tight bounds leading to the true condition
number -R (A) for R in the QR factorization of A. Both approaches provide efficient
condition estimators (see [2] for the matrix-vector equation approach), and nice results
for the special case of permutation matrix giving the standard
column pivoting, but we will only derive the matrix equation versions of these. The
tighter but more complicated matrix-vector equation analysis for the case of pivoting
is given in [2], and only the results will be quoted here.
5.1. Refined matrix equation analysis for R. Our proof of Theorem 3.1 used
(3.5) to produce the matrix equation (3.7), and derived the bounds directly from this.
We now look at this approach more closely, but at first using the general form (5.1)
to keep our thinking clear. From this we see
Let D_n be the set of all n × n real positive definite diagonal matrices. For any D =
diag(δ_1, …, δ_n) ∈ D_n, write R̄ ≡ D^{-1}R. Note that for any matrix B we have
up(B)D = up(BD). Hence if we define B̄ ≡ D^{-1}B,
R
R \GammaT F T D) -
R:
With obvious notation, the upper triangular matrix
. To bound this, we use the
following result.
Lemma 5.1. For n × n B and D ∈ D_n,

‖up(D^{-1}(B + B^T)D)‖_F ≤ (1 + η_D²)^{1/2} ‖B‖_F,  where  η_D ≡ max_{1≤i<j≤n} δ_j / δ_i.   (5.4)

Proof. Clearly the (i, j) element of up(D^{-1}(B + B^T)D) is b_{ij} + (δ_j/δ_i) b_{ji} for
i < j, and b_{ii} on the diagonal. But by the Cauchy–Schwarz inequality,
(b_{ij} + (δ_j/δ_i) b_{ji})² ≤ (1 + η_D²)(b_{ij}² + b_{ji}²), and the result follows. ∎
We can now bound the solution X of (5.1)
Since this is true for all D 2 D n , we know that for the upper triangular solution X of
where η_D is defined in (5.4). This gives the encouraging result (5.9).
Comparing (3.5) with (5.1), we see for the QR factorization, with (2.5) to (2.7),
Hence
and from (3.9) for a change in A we have
where from (5.11) these are never worse than the bounds in Theorem 3.1.
When we use the standard column pivoting strategy in AP = Q_1 R, with P a permutation
matrix, this analysis leads to a very nice result. Here the elements of R satisfy

r_ii² ≥ Σ_{k=i}^{j} r_kj²,  j = i, …, n,

so r_11 ≥ r_22 ≥ ⋯ ≥ r_nn. If D is the diagonal of R then η_D ≤ 1, and from (5.9) and (5.10),
But then it follows from [6, Theorem 8.13] that
so since k -
Thus when we use the standard pivoting strategy, we see the sensitivity of R is bounded
for any fixed n.
Remark 5.1. Clearly κ_ME(A) is a potential candidate for the condition number
of R in the QR factorization. From (5.9), κ_ME(A) depends solely on R, but it will only
be the true condition number if for any nonsingular upper triangular R we can find an
F in (5.1) giving equality in (5.8). From (5.7) this can only be true if every column
of F T lies in the space of the right singular vectors corresponding to the maximum
singular value of -
R \GammaT . Such a restriction is in general too strong for (5.6) to be made
an equality as well (see the lead up to (5.5)). However for a class of R this is possible.
If R is diagonal, we can take
and the first restriction on F
disappears. Let i and j be such that i
, so from
So we see that, at least for diagonal R, the bounds are tight, and in this restricted case
(A) is the true condition number.
This refined matrix equation analysis shows to what extent the solution X of
(5.1), and so the sensitivity of R in the QR factorization, is dependent on the row
scaling in
R. From the term D
R \GammaT F T )D in (5.2), we saw multipliers
occurred only with j ? i. As a result we obtained i D in our bounds rather than
with equality if and only if the minimum element comes before the maximum on the
diagonal. Thus we obtained full cancellation of D \Gamma1 with D in the first term on the
right hand side of (5.2), and partial cancellation in the second.
This gives some insight as to why R in the QR factorization is less sensitive than
the earlier condition estimator κ_2(A)
indicated. If the ill-conditioning of R is mostly
due to bad scaling of its rows, then correct choice of D in R̄ ≡ D^{-1}R can give κ_2(R̄)
very near one. If at the same time η_D is not large, then κ(R, D) in (5.10) can be
much smaller than κ_2(A). Standard column pivoting always ensures that such
a D exists, and in fact gives (5.14). However if we do not use such pivoting, then
Remark 5.1 suggests that any relatively small earlier elements on the diagonal of R
could correspond to poor conditioning of the factorization.
We will return to κ_ME(A) and κ(R, D) when we seek practical estimates of the
true condition number that we derive in the next section.
5.2. Matrix-vector equation analysis for R. We can now obtain provably
sharp, but less intuitive results by viewing the matrix equation (5.1) as a large matrix-vector
equation. For any matrix C ∈ R^{n×n}, denote by c_j^{(i)}
the vector of the first i elements of its jth column c_j. With this, we define ("u" denotes "upper")

uvec(C) ≡ [c_1^{(1)T}, c_2^{(2)T}, …, c_n^{(n)T}]^T.
It is the vector formed by stacking the columns of the upper triangular part of C into
one long vector.
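For concreteness, a small sketch of uvec in numpy (our code, matching the definition above):

```python
import numpy as np

def uvec(C):
    # stack the columns of the upper triangular part of C into one long vector:
    # first 1 element of column 1, first 2 of column 2, ..., first n of column n
    n = C.shape[0]
    return np.concatenate([C[:j + 1, j] for j in range(n)])

C = np.arange(9.0).reshape(3, 3)
v = uvec(C)
assert v.tolist() == [0.0, 1.0, 4.0, 2.0, 5.0, 8.0]
assert len(v) == 3 * (3 + 1) // 2           # n(n+1)/2 elements
```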
To analyze (3.5) we again consider the general form (5.1), repeated here for clarity,
upper triangular, R nonsingular:
which we saw via (2.4) has a unique upper triangular solution X. The upper and
lower triangular parts of (5.1) contain identical information, and we now write the
upper triangular part in matrix-vector, rather than matrix-matrix format. The first
j elements of the jth column of (5.15) are given by
R T
f (j)Tf (j)T\Delta
f (j)T
and by rewriting this, we can see how to solve for x (j)
(R T
r (1)T
r (j \Gamma1)T
(R T
r (j)T
r (j)T
which, on dividing the last row of this by 2, gives
2 \Theta n(n+1)
r 11
and ZR 2 R n(n+1)
r 11
Since R is nonsingular, WR is also, and from (5.16)
R ZR vec(F
Remembering X is upper triangular, we see
R ZR vec(F )k 2
where for any nonsingular upper triangular R, equality can be obtained by choosing
vec(F ) to lie in the space spanned by the right singular vectors corresponding to the
largest singular value of W \Gamma1
R ZR . It follows that (5.18) is tight, so from (5.8), derived
from the matrix equation approach, and (5.11)
Remark 5.2. Usually the first and second inequalities are strict. For exam-
ple, let
. Then we
solving an optimization
problem). But from Remark 5.1 the first inequality becomes an equality if R
is diagonal, while the second also becomes an equality if R is an n \Theta n identity matrix
with so the upper bound is tight.
The structure of W_R and Z_R reveals that each column of W_R is one of the columns
of Z_R, and so W_R^{-1} Z_R contains an n(n+1)/2 × n(n+1)/2 identity submatrix, giving
‖W_R^{-1} Z_R‖_2 ≥ 1.
Remark 5.3. This lower bound is approximately tight for any n, as can be seen
by taking for by taking
and (5.10), we see from Remark 5.1 that
These results, and the analysis in Section 4 for Q 1 , lead to our new first order
normwise perturbation theorem.
Theorem 5.2. Let A = Q_1 R be the QR factorization of A ∈ R^{m×n}
with full column rank, and let ΔA be a real m × n matrix. If ‖ΔA‖_2 < σ_min(A)
holds, then there is a
unique QR factorization
such that
where with WR and ZR as in (5.16), and -ME (A) as in (5.9) and (5.10),
and
where
Proof. From Corollary 2.2 A+ \DeltaA has the unique QR factorization (5.22). From
(3.5), (5.15), (5.17) with G j \DeltaA=ffl and (2.5) we have
R ZR vec(Q T
so taking the 2-norm gives
Combining this with
and
Thus, from the Taylor series (3.9) of R(t), (5.23) follows. The remaining results are
restatements of (5.20), (5.19), (4.1) and (4.3). 2
Remark 5.4. From (5.24) we know the first order perturbation bound (5.23) is
at least as sharp as (3.2), but it suggests it may be considerably sharper. In fact it can
be sharper by an arbitrary factor. Consider the example in Remark 5.3, where taking
and
We see the first order perturbation bound (3.2) can severely overestimate the effect of
a perturbation in A.
Remark 5.5. If we take
which is close to the upper bound
This shows that relatively
small early diagonal elements of R cause poor condition, and suggests if we do not
use pivoting, then there is a significant chance that the condition of the problem will
approach its upper bound, at least for randomly chosen matrices.
When we use standard pivoting, we see from (5.24) and (5.14)
but the following tighter result is shown in [2, Th. 2.2]
Theorem 5.3. Let A 2 R m\Thetan be of full column rank, with the QR factorization
when the standard column pivoting strategy is used. Then
There is a parametrized family of matrices A(θ), θ ∈ (0, π/2], for which the bound
on κ_R(A(θ)P) is approached.
Theorem 5.3 shows that when the standard column pivoting strategy is used, κ_R(AP)
is bounded for fixed n no matter how large κ_2(A) is. Many numerical experiments
with this strategy suggest that -R (AP ) is usually close to its lower bound of one, but
we give an extreme example in Section 6 where it is not.
When we do not use pivoting, we have no such simple result for κ_R(A), and it
is, as far as we can see, unreasonably expensive to compute or approximate κ_R(A)
directly with the usual approach. Fortunately κ_ME(A) is apparently an excellent
approximation to κ_R(A), and κ_ME(A) is quite easy to estimate. All we need to do
is choose a suitable D in κ(R, D) in (5.10). We consider how to do this in the next
section.
6. Numerical experiments and condition estimators. In Section 5 we presented
new first order perturbation bounds for the R factor of the QR factorization
using two different approaches, defined κ_R(A) ≡ ‖W_R^{-1} Z_R‖_2 as the true condition
number for the R factor, and suggested κ_R(A) could be estimated in practice by
κ(R, D). Our new first order results are sharper than previous results for R, and at
least as sharp for Q_1, and we give some numerical tests to illustrate both this, and
possible estimators for κ_R(A).
We would like to choose D such that κ(R, D) is a good approximation to the
infimum κ_ME(A) in (5.9), and show that this is a good estimate of the true condition
number κ_R(A). Then a procedure for obtaining an O(n²) condition estimator for R
in the QR factorization (i.e. an estimator for κ_R(A)) is to choose such a D, use a
standard condition estimator (see for example [5]) to estimate κ_2(D^{-1}R), and take
κ(R, D) in (5.10) as the appropriate estimate.
By a well known result of van der Sluis [16], κ_2(D^{-1}R) will be nearly minimal
when the rows of D^{-1}R are equilibrated. But this could lead to a large η_D in (5.10).
There are three obvious possibilities for D. The first one is choosing D to equilibrate
the rows of R precisely: take δ_i = ‖e_i^T R‖_2, i = 1, …, n. The second one is
choosing D to equilibrate R as far as possible while keeping η_D ≤ 1. Computations
show that the third choice can sometimes cause unnecessarily large estimates, so we
will not give any results for that choice. We denote the diagonal matrix D obtained
by the first method and the second method by D_1 and D_2 respectively in the following.
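A hedged numerical sketch of the D_1 choice (exact row equilibration); the test matrix and thresholds below are our own illustration, not the paper's experiments. When R's ill-conditioning comes mainly from bad row scaling, κ_2(D_1^{-1}R) can be dramatically smaller than κ_2(R):

```python
import numpy as np

def d1_scales(R):
    # D_1: delta_i = 2-norm of row i of R, so the rows of D_1^{-1} R have unit norm
    return np.linalg.norm(R, axis=1)

R = np.triu(np.ones((6, 6)))
R[0] *= 1e-8                      # one tiny early row: badly row-scaled R

d = d1_scales(R)
kappa_plain = np.linalg.cond(R)
kappa_scaled = np.linalg.cond(R / d[:, None])

assert kappa_plain > 1e6          # R itself is ill conditioned
assert kappa_scaled < 1e3         # row equilibration removes the bad scaling
```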
We give three sets of examples. The first set of matrices are n × n Pascal matrices
for several values of n. The results are
shown in Table 6.1 without pivoting, and in Table 6.2 with pivoting, where the pivoted
triangular factor is denoted R̃. Note in Table 6.1 how κ_2(A)
can be far worse than the true
condition number κ_R(A), which itself can be much greater than its lower bound of 1.
In Table 6.2 pivoting is seen to give a significant improvement on κ_R(A), bringing it
very close to its lower bound. Also
we observe from Table 6.1 that both κ(R, D_1) and κ(R, D_2) are very good estimates for
κ_R(A); the latter is a little better than the former. In Table 6.2, κ(R̃, D_1) and κ(R̃, D_2)
are also good estimates for κ_R(AP).
Table 6.1: Results for Pascal matrices without pivoting.
7 6.6572e+02 4.4665e+03 4.2021e+03 2.1759e+05 2.1120e+06
8 2.4916e+03 1.8274e+04 1.6675e+04 2.7562e+06 2.9197e+07
9 9.4091e+03 7.4364e+04 6.6391e+04 3.6040e+07 4.1123e+08
2.0190e+06 1.9782e+07 1.6614e+07 1.2819e+12 1.8226e+13
14 7.7958e+06 7.9545e+07 6.6157e+07 1.8183e+13 2.6979e+14
Table 6.2: Results for Pascal matrices with pivoting.
3 1.2892e+00 2.1648e+00 2.1648e+00 1.2541e+01 8.7658e+01
6 2.2296e+00 4.7281e+00 4.7281e+00 7.5426e+03 1.5668e+05
7 2.0543e+00 5.1140e+00 5.1140e+00 8.4696e+04 2.1120e+06
8 2.6264e+00 6.5487e+00 6.5487e+00 1.1777e+06 2.9197e+07
9 3.4892e+00 8.8383e+00 8.8383e+00 1.4573e+07 4.1123e+08
14 3.6106e+00 1.2386e+01 1.2386e+01 5.3554e+12 2.6979e+14
The second set of matrices are 10 × 8 matrices A_j, which are all obtained
from the same random 10 × 8 matrix (produced by the MATLAB function randn), but
with its jth column multiplied by 10^{−8} to give A_j. The results are shown in Table 6.3
without pivoting. All the results with pivoting are similar to that for
6.3, and so are not given here. For
(A) are both close to
their upper bound
are significantly
smaller than
these results are what we expected, since the matrix R is
ill-conditioned due to the fact that r jj is very small, but for the rows
of R are already essentially equilibrated, and we do not expect -R (A) to be much
better than
(A). Also for the first seven cases the smallest singular value of the
leading part R n\Gamma1 is close to that of R, so that -Q 1
(A) could not be much better
than
(A). For even though R is still ill-conditioned due to the fact that
r 88 is very small, it is not at all equilibrated, and becomes well-conditioned by row
scaling. Notice at the same time i D is close to 1, so -(R; D1), -(R; D 2 ), and therefore
are much better than
(A). In this case, the smallest singular value of R
is significantly smaller than that of R n\Gamma1 . Thus -Q 1 (A), the condition number for the
change in Q 1 lying in the range of Q 1 , is spectacularly better than
(A). This is a
contrived example, but serves to emphasize the benefits of pivoting for the condition
of both Q 1 and R.
Table 6.3: Results for the second set of matrices, without pivoting.
5 1.2467e+08 3.1208e+08 2.3992e+08 3.9067e+08 4.2323e+08
6 8.8237e+07 2.2252e+08 1.6584e+08 3.4710e+08 3.9061e+08
7 9.2648e+07 2.1010e+08 1.7127e+08 4.4303e+08 5.4719e+08
8 2.2580e+00 5.4735e+00 4.9152e+00 6.6109e+00 6.2096e+08
The third set of matrices are n × n upper triangular Kahan matrices.
These matrices were introduced by Kahan [7]. Of course
Q_1 = I here, but the condition numbers depend on R only, and these are all we are
interested in. The results are shown in Table 6.5.
Again we found D_1 ≈ D_2, so we only list the results corresponding to D_1.
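A sketch of the usual Kahan construction, R_n(θ) = diag(1, s, …, s^{n−1})(I − c·N) with s = sin θ, c = cos θ, and N the strictly upper triangular matrix of ones; the parameter values here are our own illustration. Like a correctly pivoted R, its diagonal is non-increasing, yet the matrix is very ill conditioned:

```python
import numpy as np

def kahan(n, theta):
    # Kahan's upper triangular matrix: diag(1, s, ..., s^{n-1}) @ (I - c * strict_upper_ones)
    s, c = np.sin(theta), np.cos(theta)
    U = np.eye(n) - c * np.triu(np.ones((n, n)), 1)
    return np.diag(s ** np.arange(n)) @ U

R = kahan(15, 0.3)
d = np.abs(np.diag(R))
assert np.all(d[1:] <= d[:-1] + 1e-15)   # non-increasing diagonal, as after pivoting
assert np.linalg.cond(R) > 1e3           # still badly conditioned
```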
Table 6.5: Results for Kahan matrices.
In all these examples we see κ(R, D_1) and κ(R, D_2) gave excellent estimates for
κ_R(A), with κ(R, D_2) being marginally preferable. For the Kahan matrices, which
correspond to correctly pivoted A, we see that in extreme cases, with large enough n,
κ_R(A) can be large even with pivoting. This is about as bad a result as we can get
with pivoting (it gets a bit worse as θ → 0 in R), since the Kahan matrices are the
parameterized family mentioned in Theorem 5.3. Nevertheless κ(R, D_1) and κ(R, D_2)
still estimate κ_R(A) excellently.
7. Summary and conclusion. The first order perturbation analyses presented
here show just what the sensitivity (condition) of each of Q 1 and R is in the QR
factorization of full column rank A, and in so doing provide their true condition
numbers (with respect to the measures used, and for sufficiently small \DeltaA), as well
as efficient ways of approximating these. The key norm-based condition numbers we
derived for A are:
• κ_2(A) for that part of ΔQ_1 in R(A)^⊥, see (4.1),
• κ_{Q_1}(A) for that part of ΔQ_1 in R(A), see (4.3),
• κ_R(A) ≡ ‖W_R^{-1} Z_R‖_2 for R, see Theorem 5.2,
• the estimate for κ_R(A), that is κ_ME(A) ≡ inf_{D∈D_n} κ(R, D),
where κ(R, D) is defined in (5.10).
The condition numbers obey
for while for R
see (5.24). The numerical examples, and an analysis of the case (not given here),
suggest that κ(R, D), with D chosen to equilibrate R̄ ≡ D^{-1}R
subject to η_D ≤ 1, gives
an excellent approximation to -R (A) in the general case. In the special case of A with
orthogonal columns, so R is diagonal, then Remark 5.1 showed by taking
For general A when we use the standard column pivoting strategy in the QR factor-
ization, we saw from (5.14) and [2] that
As a result of these analyses we see both R and in a certain sense Q 1 can be
less sensitive than was thought from previous analyses. The true condition numbers
depend on any column pivoting of A, and show that the standard pivoting strategy
often results in much less sensitive R, and sometimes leads to a much smaller possible
change of Q 1 in the range of Q 1 , for a given size of perturbation in A.
The matrix equation analysis of Section 5.1 also provides a nice analysis of an
interesting and possibly more general matrix equation (5.1).
By following the approach of Stewart [8, Th. 3.1], see also [12, Th. 2.11], it would
be straightforward, but detailed and lengthy, to extend our first order results to provide
strict perturbation bounds, as was done in [3]. We could also provide new component-wise
bounds, but we chose not to do either of these here, in order to keep the material
and the basic ideas as brief and approachable as possible. Our condition numbers and
resulting bounds are asymptotically sharp, so there is less need for strict bounds. A
new componentwise bound for R is given in [2].
Acknowledgement
. We would like to thank Ji-guang Sun for his suggestions
and encouragement, and for providing us with draft versions of his work.
--R
PhD Thesis
A perturbation analysis for R in the QR factorization
New perturbation analyses for the Cholesky factorization
Matrix Computations
A survey of condition number estimation for triangular matrices
Accuracy and Stability of Numerical Algorithms
Error and perturbation bounds for subspaces associated with certain eigenvalue problems
Perturbation bounds for the QR factorization of a matrix
On the perturbation of LU
On the Perturbation of LU and Cholesky Factors
Matrix perturbation theory
Perturbation bounds for the Cholesky and QR factorization
Componentwise perturbation bounds for some matrix decompositions
On perturbation bounds for the QR factorization
Condition numbers and equilibration of matrices
A componentwise perturbation analysis of the QR decomposition
| matrix equations;pivoting;condition estimation;QR factorization;perturbation analysis
263442 | Optimal Local Weighted Averaging Methods in Contour Smoothing. | AbstractIn several applications where binary contours are used to represent and classify patterns, smoothing must be performed to attenuate noise and quantization error. This is often implemented with local weighted averaging of contour point coordinates, because of the simplicity, low-cost and effectiveness of such methods. Invoking the "optimality" of the Gaussian filter, many authors will use Gaussian-derived weights. But generally these filters are not optimal, and there has been little theoretical investigation of local weighted averaging methods per se. This paper focuses on the direct derivation of optimal local weighted averaging methods tailored towards specific computational goals such as the accurate estimation of contour point positions, tangent slopes, or deviation angles. A new and simple digitization noise model is proposed to derive the best set of weights for different window sizes, for each computational task. Estimates of the fraction of the noise actually removed by these optimum weights are also obtained. Finally, the applicability of these findings for arbitrary curvature is verified, by numerically investigating equivalent problems for digital circles of various radii. | Introduction
There are numerous applications involving the processing of 2-D images, and 2-D views
of 3-D images, where binary contours are used to represent and classify patterns of inter-
est. Measurements are then made using the contour information (e.g. perimeter, area,
moments, slopes, curvature, deviation angles, etc.). To obtain reliable estimates of these
quantities, one must take into account the noisy nature of binary contours due to discrete
sampling, binarization, and possibly the inherent fuzziness of the boundaries themselves 1 .
In some cases, this can be done explicitly and exhaustively (see Worring & Smeulders [1]
on curvature estimation). But more frequently it is done implicitly by smoothing. Following
this operation, the measurements of interest can be obtained directly from the
smoothed contour points, as in this paper, or from a curve fitted to these points. For a
recent example of this last approach, see Tsai & Chen [2].
The smoothing of binary contours or curves for local feature extraction directly from
1 For example, the "borders" of strokes in handwriting.
the discrete data is the focus of this article. More precisely, we will investigate optimum
local weighted averaging methods for particular measurement purposes such as estimating
point positions, derivatives (slopes of tangents), and deviation angles from point to point
(see Fig. 2).
In this article, the following definitions will be used. Let p_0, p_1, …, p_{N−1}
be the sequence of N points (4- or 8-connected) around the closed contour. Since the
contour is cyclic, all indices are taken modulo N. Let v_i ≡ p_i − p_{i−1}, and let θ_i
be the counter-clockwise elevation angle between v_i and the horizontal x-axis. We have
θ_i = c_i · π/4, where c_i is the Freeman [3] chain code (see Fig. 1). For 4-connectivity, the
values of c_i are limited to even values. We also define d_i, the differential chain code, as
d_i ≡ c_{i+1} − c_i (mod 8).
Figure 1: Freeman chain code

Figure 2: Deviation angle at p_i
The deviation angle at point p_i will be denoted φ_i. It is the angle between the small
vectors v_i and v_{i+1} (see Fig. 2). Of course we have φ_i = d_i · π/4.
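To make these definitions concrete, here is a small sketch (our own code, with the direction table of Fig. 1) that computes c_i, d_i and φ_i for a closed 8-connected contour; for a simple closed curve traversed counter-clockwise the deviation angles should sum to 2π:

```python
import numpy as np

# Freeman directions: code k corresponds to a unit step at angle k * pi/4 from +x
STEP_TO_CODE = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
                (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_codes(points):
    # c_i for v_i = p_i - p_{i-1} on a closed contour (indices mod N)
    p = np.asarray(points)
    v = p - np.roll(p, 1, axis=0)
    return np.array([STEP_TO_CODE[tuple(step)] for step in v])

def deviation_angles(codes):
    # phi_i = d_i * pi/4 with d_i = c_{i+1} - c_i (mod 8), reduced to (-pi, pi]
    d = (np.roll(codes, -1) - codes) % 8
    d = np.where(d > 4, d - 8, d)
    return d * np.pi / 4.0

square = [(0, 0), (1, 0), (1, 1), (0, 1)]      # counter-clockwise unit square
phi = deviation_angles(chain_codes(square))
assert np.isclose(phi.sum(), 2 * np.pi)        # total turning of a simple closed curve
```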
Finally, the local weighted averaging method investigated in Sections II, III, and IV is
defined as:

p_i^{(k+1)} = Σ_{j=−n}^{n} w_j p_{i+j}^{(k)},

where p_i^{(k)} is the contour point i after k smoothing steps (p_i^{(0)} ≡ p_i). The window size is
2n + 1.
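A direct sketch of this operation (ours, not the paper's code), using cyclic indexing around the closed contour; any weight vector summing to 1 preserves the contour centroid:

```python
import numpy as np

def smooth_contour(points, weights, steps=1):
    # cyclic local weighted averaging: p_i <- sum_j w_j * p_{i+j}, window size 2n+1
    p = np.asarray(points, dtype=float)
    w = np.asarray(weights, dtype=float)
    n = (len(w) - 1) // 2
    for _ in range(steps):
        p = sum(w[j + n] * np.roll(p, -j, axis=0) for j in range(-n, n + 1))
    return p

square = np.array([(0, 0), (1, 0), (1, 1), (0, 1)], dtype=float)
out = smooth_contour(square, [0.25, 0.5, 0.25], steps=3)
assert out.shape == square.shape
assert np.allclose(out.mean(axis=0), square.mean(axis=0))  # centroid preserved
```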
A. Variety of Approaches Used in Applications
Due to limited computing power, early methods were quite simple and found justification
in their "good results". Thus we find schemes removing/filling one-pixel wide
protrusions/intrusions based on templates, or replacing certain pairs in the chain code
sequence by other pairs or by singletons ([4]). However, from early on, local weighted averaging
methods are the most frequently used. They are applied to differential chain codes
([5], [6], [7]), possibly with compensation for the anisotropy of the square grid ([8]); they
are applied to cartesian coordinates ([9]), possibly with weights depending on neighboring
pixel configuration ([10]) or varying with successive iterations ([11]); they are applied to
deviation angles ([12]).
With advances in computing power and insight into the smoothing problem, more complex
methods were developed with more solid theoretical foundations. In this process,
"Gaussian smoothing" has become very popular. One approach consists of applying local
weighted averaging with Gaussian weights. Dill et al. [13] use normalized Gaussian-
smoothed differential chain codes with fixed oe and window size. In Ansari & Huang [14],
the weights and window size may vary from point to point based on
e
region of support. See also Pei
Variable amounts of smoothing can be applied to the entire curve, taking the overall
behaviour of the smoothed curve across scale as its complete description. Witkin [17]
convolves a signal f(x) with Gaussian masks over a continuum of sizes:

F(x, σ) = ∫ f(u) (1/(σ√(2π))) exp(−(x − u)²/(2σ²)) du.

F(x, σ) is called the scale-space image of f(x) and it is analyzed in terms of its inflection
points. This concept of scale-space was also originally explored by Koenderink [18].
Asada & Brady [19] analyze the convolution of simple shape primitives with the first
and second derivatives of G(x; σ) and then use the knowledge gained to extract these
shape primitives from the contours of objects.

November 20, 1997

Mokhtarian & Mackworth [20] compute the Gaussian-smoothed curvature using

κ(t, σ) = (X_t Y_tt − X_tt Y_t) / (X_t² + Y_t²)^(3/2),

where subscripts denote derivatives with respect to t, and X(t, σ) and Y(t, σ) are the
coordinates (x(t), y(t)) convolved with a Gaussian filter G(t, σ). The locus of points where
κ(t, σ) = 0 is called the generalized scale-space image of the curve, which they use for
image matching purposes. Wuescher & Boyer [21] also use Gaussian-smoothed curvature,
but with a single σ, to extract regions of constant curvature from contours.
Multiscale shape representations are not necessarily associated with Gaussian smoothing.
Saint-Marc et al. [22] propose scale-space representations based on adaptive smoothing,
which is a local weighted averaging method where the smoothed signal S^(k+1)(x) after
k + 1 smoothing steps is obtained as:

S^(k+1)(x) = ( Σ_{i=−1}^{+1} S^(k)(x+i) w^(k)(x+i) ) / ( Σ_{i=−1}^{+1} w^(k)(x+i) ),

where the weights w^(k)(x) decrease with the magnitude of the local gradient of S^(k), so
that sharp discontinuities are smoothed less.
Several multiscale shape representations based on non-linear filters have also been used
successfully. Maragos [23] investigated morphological opening/closing filters depending
on a structuring element and a scale parameter; using successive applications of these
operators and removing some redundancy, a collection of skeleton components (Reduced
Skeleton Transform) can be generated which represents the original shape at various scales
more compactly than multiscale filtered versions. See also Chen & Yan [24].
Recently Bangham et al. [25] used scale-space representations based on M- and N-sieves.
For any 1D discrete signal f, denoting as C_r the set of all intervals of r
consecutive integers, they define:

(M_r f)(x) = max_{I ∈ C_r, x ∈ I} min_{u ∈ I} f(u),  (N_r f)(x) = min_{I ∈ C_r, x ∈ I} max_{u ∈ I} f(u).

The N-sieves of f are the sequence f_1 = N_1 f, f_{r+1} = N_{r+1} f_r, for r ≥
1. The M-sieves of f are defined similarly, using M. Applying sieves horizontally and
vertically to 2D images appears to preserve edges and reject impulsive noise better than
Gaussian smoothing.
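A minimal 1D sketch of these operators, under our reading of [25] (M_r as max-of-min, N_r as min-of-max over windows of length r; function names are ours):

```python
def M(f, r):
    """(M_r f)(x): max over length-r intervals containing x of the min over the interval."""
    n = len(f)
    out = []
    for x in range(n):
        best = None
        for start in range(max(0, x - r + 1), min(x, n - r) + 1):
            m = min(f[start:start + r])
            best = m if best is None else max(best, m)
        out.append(best)
    return out

def N(f, r):
    """(N_r f)(x): min over length-r intervals containing x of the max over the interval."""
    n = len(f)
    out = []
    for x in range(n):
        best = None
        for start in range(max(0, x - r + 1), min(x, n - r) + 1):
            m = max(f[start:start + r])
            best = m if best is None else min(best, m)
        out.append(best)
    return out

# One M pass at scale 2 removes a one-sample positive impulse but keeps a
# two-sample feature intact -- the edge-preserving behaviour noted in the text.
spike = M([0, 0, 1, 0, 0], 2)     # flattened to all zeros
plateau = M([0, 1, 1, 0], 2)      # unchanged
```

N behaves dually, removing negative impulses; combining the two at increasing scales gives the sieve sequences described above.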
B. Theoretical Foundations
Regularization theory and the study of scale-space kernels are the two main areas which
have provided insight into the special qualities of the Gaussian kernel for smoothing pur-
poses. As will be seen, they do not warrant unqualified statements about the 'optimality'
of Gaussian smoothing.
Consider a one-dimensional function g(x), corrupted by noise η(x). The observed signal
is then y(x) = g(x) + η(x). Assume that the information available is a sampling of this
signal obtained for x = x_1, ..., x_n. One approach to estimating g(x) is to find f(x) which
minimizes

(1/n) Σ_{i=1}^{n} (y(x_i) − f(x_i))² + λ ∫_{x_1}^{x_n} (f^(m)(x))² dx,    (8)

where λ is the regularization parameter. The solution is a smoothing spline of order 2m
(see [26], [27]).
For equally spaced data and m = 2, the authors of [28] have shown that the cubic spline
solution is very similar to a Gaussian. Canny's paper on edge detection [29] is also cited to
support the optimality of Gaussian filtering. But the Gaussian is only an approximation
to his theoretically obtained optimal filter.
Babaud et al. [30] have considered the class of infinitely differentiable kernels g(x; y)
vanishing at infinity faster than any inverse of polynomial, and one-dimensional signals
f(x) that are continuous linear functionals on the space of these kernels. In this class,
they have shown that only the Gaussian

g(x; y) = (1/(√(2π) y)) exp(−x²/(2y²))

can guarantee that all first-order maxima (or minima) of the convolution f ∗ g
will increase (or decrease) monotonically as y increases.
Yuille & Poggio [31] extended the previous result by showing that, in any dimension,
the Gaussian is the only linear filter that does not create generic zero crossings of the
Laplacian as the scale increases. In the literature, such demonstrations are often designated
as the scaling theorem. Mackworth & Mokhtarian [32] derived a similar result concerning
their curvature scale-space representation. Wu & Xie [33] developed an elementary proof
of the scaling theorem based on calculus, assuming signals represented by polynomials.
Very recently, Anh et al. [34] provided another proof applicable to the broader class of
band-limited signals and to a larger class of filtering kernels, by relaxing the smoothness
constraint.
This property of the Gaussian filter was established for continuous signals. Lindeberg [35]
has studied the similar problem for discrete signals. He postulated that scale-space should
be generated by convolution with a one-parameter family of kernels and that the number
of local extrema in the convolved signal K ∗ f should not exceed the number of local
extrema in the original signal. Imposing a semigroup requirement, he found the unique
one-parameter family of scale-space kernels T (n; t) with a continuous scale parameter t;
as t increases, it becomes less and less distinguishable from the discretized Gaussian.
Recently, Pauwels et al. [36] demonstrated that imposing recursivity and scale-invariance
on linear, isotropic, convolution filters narrows down the class of scale-space kernels to a
one-parameter family of filters, but is not restrictive enough to single out the Gaussian.
The Gaussian corresponds to a parameter value of 2; for higher parameter values, the kernels
have zero crossings. Pauwels et al. also derive Lindeberg's results as special cases of theirs.
Finally, Bangham et al. [25] showed that discrete 1D M-sieves and N-sieves do not
introduce new edges or create new extrema as the scale increases. In addition, Bangham
et al. [37] have proven that when differences are taken between sieving operations applied
recursively with increasing scale on 1D discrete signals, a set of granule functions are
obtained which map back to the original signal. Furthermore, the amplitudes of the
granules are, to a certain extent, independent of one another.
C. Practical Considerations
For practical applications, regardless of the smoothing method one decides to use, some
concrete questions must eventually be answered. For the regularization approach, what
value should be used for -? For Gaussian smoothing, what value of oe and what finite
window size? When scale-space representation is used, if we say that significant features
are those which survive over "a wide range of scale", we must eventually put some actual
figures on this 'wide' range. These decisions can be entirely data-driven or based on
prior experience, knowledge of particular applications etc. In the end, they may play a
significant role in both the performance of the selected method and its implementation
cost. We now briefly present some of these aspects.
In [38], [39], Shahraray & Anderson consider the regularization problem of equation 8
with m = 2, and they argue that finding the best value of λ is critical. For this purpose,
they propose a technique based on minimizing the cross-validation mean square error

CV(λ) = (1/n) Σ_{k=1}^{n} (g_{n,λ}^[k](x_k) − y_k)²,

where g_{n,λ}^[k] is the smoothing spline constructed using all samples except y_k, and is then
used to estimate y_k. The method is said to provide a very good estimate of the best λ,
for equally-spaced periodic data, assuming only a global minimum. Otherwise, a so-called
generalized cross-validation function must be used.
The presence of discontinuities to be preserved in the contours of interest brings more
complexity into the optimal smoothing problem. One possible solution was already men-
tioned: the adaptive smoothing of Saint-Marc et al. [22]. For one-dimensional regularization
which preserves discontinuities, see Lee & Pavlidis [40]. For two-dimensional regularization
which preserves discontinuities, see Chen & Chin [41]. For Gaussian smoothing
which preserves discontinuities, see the methods of Ansari & Huang [14] and of Brady et
al. [42].
Another problem is that repeated convolution of a closed curve with a kernel may not
yield a closed contour or may cause shrinkage. For simple, convex, closed curves, Horn &
Weldon [43] propose a new curve representation which guarantees closed curves. Mackworth
Mokhtarian [32] offer a different solution involving the reparametrization of the
Gaussian-smoothed curve by its normalized arc length parameter. Lowe [44] attenuates
shrinking by adding a correction to the Gaussian-smoothed curve. Oliensis [45] applies
FFT to the signal and resets amplitudes to 0 only for frequencies larger than some
threshold. Recently, Wheeler & Ikeuchi [46] present a new low-pass filter based on the idea
of iteratively adding smoothed residuals to the smoothed signals. Results are said to be
comparable to Oliensis' and somewhat better than Lowe's.
A possible solution to the high computation and storage
requirements of generating "continuous" scale-space has also been proposed: an optimal L_1
approximation of the Gaussian can be obtained with a finite number of basis Gaussian filters,
from which scale-space can be constructed efficiently.
The above discussion exemplifies the potential complexity involved in implementing
smoothing methods. Clearly, in practice, one should not lose track of the cost of these
operations and how much smoothing is really required by the application of interest. It
is not always necessary to attain the ultimate precision in every measurement. In many
situations, simple and fast methods such as local weighted averaging with fixed weights
and a small window size, will provide a very satisfactory solution in only 2 or 3 iterations
(see [11], [5], [12]). Moreover, there is often little difference in the results obtained via
different methods. Thus, Dill et al. [13] report similar results when a Gaussian filter and
a triangular (Gallus-Neurath) filter of the same width are applied to differential chain
codes; in Kasvand & Otsu [48], rectangular, triangular, and Gaussian kernels, with the
same standard deviation, yield comparable outcomes (especially the latter two) for the
smooth reconstruction of planar curves from piecewise linear approximations.
D. Present Work
Our interest in contour smoothing stems from practical work in handwriting recognition.
In this and other applications, the discrimination of meaningful information from 'noise'
is a complex problem which often plays a critical role in the overall success of the system.
This filtering process can be handled across several stages (preprocessing, feature extrac-
tion, even classification), with different methods. In the preprocessing stage of one of our
recognition schemes [49], a triangular filter was first applied to contours of
characters before deviation angles φ_i were computed. Satisfactory results were obtained.
Nevertheless, we were curious about the optimality of our choice of local weights.
Initial review revealed that, in many practical applications, the smoothing operation
is still performed by some local weighted averaging schemes² because they are simple,
fast, and effective (see for example [14], [50], [51], [52], [21]). However, little theoretical
investigation of these methods per se has been conducted. Some authors rapidly invoke the
'optimality' of Gaussian filtering and use Gaussian-derived weights. Their results may be
satisfactory as the Gaussian may be a good approximation to the 'optimum' filter, but its
discretization and truncation may cause it to further depart from 'optimum' behaviour.
In this paper, we assume local weighted averaging with constant weights as a starting
point and we investigate how these smoothing methods handle small random noise. To
this end, we propose a simple model of a noisy horizontal border. The simplicity of
the model allows a very pointed analysis of these smoothing methods. More precisely,
for specific computational goals such as estimating contour point positions, derivatives
(slopes of tangents), or deviation angles from the pixels of binary contours, we answer the
following questions: what are the optimum fixed weights for a given window size? and
what fraction of the noise is actually removed by these optimum weights?
After deriving these results, we offer experimental evidence that their validity is not
restricted to the limited case of noisy horizontal borders. This is done by considering digital
circles. For each particular computational task, we find very close agreement between the
optimum weights derived from our simple model and the ones derived numerically for
circles over a wide range of radii.
An important side-result concerns the great caution which should be exercised in speaking
of 'optimal' smoothing. Even for our simple idealized model, we find that the smoothing
coefficients which best restore the original noise-free pixel positions are not the same
which best restore the original local slope, or the original local deviation angles; further-
more, the best smoothing coefficients even depend on the specific difference method used
to numerically estimate the slope. Hence, in choosing smoothing methods, researchers
should probably first consider what it is they intend to measure after smoothing and in
what manner.
In relation to this, we point out the work of Worring & Smeulders [1]. They analyze
² Once the Gaussian is discretized and truncated, it also simply amounts to a local weighted averaging method
with particular weight values.
noise-free digitized circular arcs and exhaustively characterize all centers and radii which
yield a given digitization pattern; by averaging over all these, an optimum measure of
radius or curvature can be obtained. If radius or curvature is the measurement of interest
and if utmost precision is required (with the associated computing cost to be paid), then
their approach is most suitable.
Our work in contrast is not oriented towards measuring a single attribute. We focus
on measurements such as position, slope and deviation angles because they are often of
interest. But our model and approach can be used to investigate other quantities or other
numerical estimates of the same quantities. The methods may be less accurate but they
will be much less costly, and optimum in the category of local weighted averaging methods.
The requirements of specific applications should dictate what is the best trade-off.
The rest of the article is organized as follows. The next section briefly describes the
methods investigated and provides a geometric interpretation for them. In section III, the
simple model of a noisy horizontal border is used to derive optimal values of the smoothing
parameters, in view of the above-mentioned computational goals. Finally, the applicability
of our findings for varying curvature is explored experimentally in section IV.
II. Local Weighted Averaging
We begin our study of local weighted averaging, as defined by Eq. 1, with window size
w = 2n + 1. Of course, the smoothed contour points p_i^(k) after k
smoothing iterations can be obtained directly from the original points p_i as

p_i^(k) = Σ_{j=−n'}^{n'} β_j p_{i+j},

corresponding to a window size w' = 2n' + 1 with n' = kn, where the β's are
functions of the α's and of k. The form of Eq. 1 is often computationally more convenient.
However, as long as k and n are finite, the study of local weighted averaging need only
consider the case of a single iteration with finite width filters. When this is done, we will
use the simpler notation p'_i instead of p_i^(1).
We now impose a simple requirement to this large family of methods. Since our goal
is to smooth the small 'wiggles' along boundaries of binary images, it seems reasonable
to require that when p i and its neighbouring contour pixels are perfectly aligned, the
smoothing operation should leave p_i unchanged. In particular, consider the x-coordinates
of consecutive horizontally-aligned pixels from p_{i−n} to p_{i+n}. For these, x_{i+j} = x_i + j.
Our requirement that x'_i = x_i becomes

Σ_{j=−n}^{n} α_j (x_i + j) = x_i.

For this to hold whatever the value of x_i, we must have

Σ_{j=−n}^{n} α_j = 1, Σ_{j=−n}^{n} j α_j = 0,    (14)

the latter being satisfied by taking α_{−j} = α_j.
Thus our requirement is equivalent to a normalization condition and a symmetry constraint
on the α's.
A. Geometric Interpretation
It is a simple matter to find a geometric interpretation for local weighted averaging.
Using the above conditions, Eq. 1 can be rewritten as:

p_i^(k) = p_i^(k−1) + Σ_{j=1}^{n} 2α_j [ (p_{i−j}^(k−1) + p_{i+j}^(k−1))/2 − p_i^(k−1) ].    (15)

Fig. 3. Geometric interpretation for n = 1

For a single iteration of the simplest method (n = 1), the last Eq. reduces to
p'_i = p_i + 2α_1 (m_{i1} − p_i).
The points p_{i−1}, p_i, p_{i+1} are generally not aligned and the situation is illustrated in
Fig. 3, where m_{i1} is the middle of the base of the triangle. The Eq.
implies that the smoothed point p'_i is always on the median of the triangle from point p_i.
Furthermore, the effect of the unique coefficient α_1 is clear since |p_i p'_i| = 2α_1 |p_i m_{i1}|. As
α_1 varies continuously from 0 to 0.5, p'_i 'slides' from p_i to m_{i1}.
Similarly, in the more general situation, the vectors [p_i^(k−1) m_{ij}^(k−1)] are the medians
from p_i^(k−1) of the triangles Δ p_{i−j}^(k−1) p_i^(k−1) p_{i+j}^(k−1). Eq. 15 indicates that the smoothed
point p_i^(k) is obtained by adding to p_i^(k−1) a weighted sum of the medians of these triangles,
using 2α_j as weights. Thus, in a geometric sense, local weighted averaging as a contour
smoothing method could be renamed median smoothing.
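The equivalence between the weighted-average form and the median form for n = 1 can be checked numerically. A sketch (function names are ours):

```python
def avg_form(pm, p, pp, alpha):
    """Eq. 1 with n = 1: p' = alpha*p_{i-1} + (1 - 2*alpha)*p_i + alpha*p_{i+1}."""
    return tuple((1 - 2 * alpha) * c + alpha * (a + b)
                 for a, c, b in zip(pm, p, pp))

def median_form(pm, p, pp, alpha):
    """Median form: p' = p + 2*alpha*(m - p), m the midpoint of p_{i-1} p_{i+1}."""
    m = tuple((a + b) / 2 for a, b in zip(pm, pp))
    return tuple(c + 2 * alpha * (mc - c) for c, mc in zip(p, m))

# A non-degenerate triangle: the two forms agree for every alpha, and
# alpha = 0.5 carries p_i all the way to the midpoint of the base.
pm, p, pp = (0.0, 0.0), (1.0, 1.0), (2.0, 0.0)
```

At alpha = 0.5 both forms return (1.0, 0.0), the midpoint m, matching the 'sliding' behaviour described above.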
III. Optimum Results from a Simple Digitization Noise Model
This section addresses the question "If local weighted averaging is considered, what
constant coefficients ff j should be used for smoothing binary contours in view of specific
computational goals?". We develop an answer to this question, based on a simple model:
an infinite horizontal border with random 1-pixel noise.
Why use this model? Of course, we do not consider the horizontal line to be a very
general object. Nor do we think that noise on any particular binary contour is a random
phenomenon. We have noticed in our work that binary contours often bear small noise,
commonly "1-pixel wiggles". Our goal is to perform an analytical study of the ability of
local weighted averaging smoothing methods to remove such noise. Since the filters are
meant to be used with arbitrary binary contours, it seems reasonable to consider that over
a large set of images noise can be considered random.
Furthermore, we do not make the very frequent implicit assumption that a smoothing
filter can be optimal independently of the specific attributes one intends to measure or
even the specific numerical estimation method used. For specific measurements and computation
methods, we would like to find the best choice of smoothing coefficients for a
given window size and an estimate of how much noise these coefficients remove; if the window
size is increased³, what are then the best coefficients and how much more is gained
compared to the smaller window size?
These questions are very pointed and we have no workable expression for small random
³ Assuming the feature structure scale allows this.
noise on an arbitrary binary contour which would allow us to derive answers analytically. Thus
we choose to look at an ideal object for which we can easily model random 1-pixel noise
and our study can be carried out. Similar approaches are often followed. For example,
in studying optimal edge detectors, Canny [29] considers the ideal step edge. There is
no implication that this is a common object to detect in practice; simply it makes the
analytical investigation easier and can still allow one to gain insight into the edge detection
problem more generally. The practicality of our own findings concerning optimal local
weighted averaging will be verified in section IV. We now give a definition for our simple
model.
The infinite horizontal border with random 1-pixel noise consists of all points
p_i = (x_i, y_i), i ∈ Z, satisfying

x_i = i,  y_i = 1 with probability p,  y_i = 0 with probability 1 − p,    (18)

with the y_i independent of one another. An example of such a simple noisy boundary is
shown in Fig. 4.

Fig. 4. Noisy Horizontal Border

With this model, x_{i−j} + x_{i+j} = 2x_i. It then follows from Eq. 15 that the
smoothing operation will not change the x-coordinates, and the local weighted averaging
will only affect the y-coordinates. The best fitting straight line through this initial data is
easy to obtain since it must be of the form y = ỹ. It is obtained by minimizing the mean
square distance E((y_i − ỹ)²) with respect to ỹ. The best fitting line is simply y = p. We will
consider this to be the Eq. of the ideal border which has been corrupted by the digitization
process, yielding the situation of Eq. 18.
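The model, and the fact that a normalized symmetric filter leaves the x-coordinates untouched while reducing the spread of the y-coordinates, can be simulated directly. A sketch (names and the equal-weights filter are illustrative; the border is truncated to a finite open chain, so only interior points are smoothed):

```python
import random

def noisy_border(n, p, seed=0):
    """Finite sample of the noisy horizontal border model (Eq. 18)."""
    rng = random.Random(seed)
    return [(float(i), 1.0 if rng.random() < p else 0.0) for i in range(n)]

def smooth_open(points, alphas):
    """One pass of symmetric weighted averaging, interior points only."""
    n = len(alphas) - 1
    out = list(points)
    for i in range(n, len(points) - n):
        x = alphas[0] * points[i][0]
        y = alphas[0] * points[i][1]
        for j in range(1, n + 1):
            x += alphas[j] * (points[i - j][0] + points[i + j][0])
            y += alphas[j] * (points[i - j][1] + points[i + j][1])
        out[i] = (x, y)
    return out

pts = noisy_border(1000, 0.3)
sm = smooth_open(pts, [1 / 3, 1 / 3])   # equal weights, w = 3
```

After one pass the x-coordinates are (numerically) unchanged and the empirical variance of the y-coordinates drops, as the analysis in section III-A predicts.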
We now examine the problem of applying "optimal" local weighted averaging to the
data of our simple model. Our aim is to eliminate the 'wiggles' along the noisy horizontal
border as much as possible. An alternate formulation is that we would want the border,
after smoothing, to be "as straight as possible" and as close as possible to
Several criteria can be used to assess the straightness of the border and optimize the
smoothing process:
ffl Minimize the mean square distance to the best fitting line after the data has been
smoothed.
ffl Minimize the mean square slope along the smoothed data points (ideally, the border
is straight and its slope should be 0 everywhere).
ffl Minimize the mean square deviation angle OE i (see Fig. 2) along the smoothed data
(ideally, OE i should also be 0 everywhere).
Each of the above criteria is sound and none can be said to be the best without considering
the particular situation further. The first criterion is the most commonly used in the curve
fitting literature. In this paper however, we want to derive optimal smoothing methods
tailored for specific computational tasks; hence, we will consider each of the above criteria
in turn. If our interest is simply to obtain numerical estimates of the slopes at contour
points, the optimal ff j 's derived based on the second criterion should be preferred. And
for estimating deviation angles OE i , the optimal coefficients derived from the third criterion
would be better.
For the first criterion, we will use d_rms, the root mean square (r.m.s.) distance to the
best fitting line, as our measure of noise before and after the smoothing step; for the
second criterion, m_rms, the r.m.s. slope along the border; with the third criterion, φ_rms,
the r.m.s. deviation angle along the border. For the original unsmoothed data, these noise
measures can be computed using the probabilities of the possible configurations of 2 or 3
consecutive pixels. We obtain:

d_rms = √(p(1−p)),    (20)
m_rms = √(2p(1−p)),    (21)
φ_rms = π √(3p(1−p)/8),    (22)

where the slope in Eq. 21 is estimated by the forward difference y_{i+1} − y_i.
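These closed forms can be cross-checked by enumerating the pixel configurations and their probabilities under the model of Eq. 18 (independent pixels). A sketch; the function name is ours, and the closed forms asserted against it are our own reconstruction of Eqs. 20-22:

```python
import math
from itertools import product

def noise_measures(p):
    """E(d^2), E(m^2), E(phi^2) for the unsmoothed model, by enumeration."""
    q = 1 - p
    # d: deviation of a single pixel from the best fitting line y = p
    d2 = p * (1 - p) ** 2 + q * (0 - p) ** 2
    # m: forward difference over 2 consecutive pixels
    m2 = sum((y1 - y0) ** 2 * (p if y0 else q) * (p if y1 else q)
             for y0, y1 in product((0, 1), repeat=2))
    # phi: deviation angle over 3 consecutive pixels (unit x-steps, so the
    # tangent of each elevation angle is just the y-difference)
    phi2 = 0.0
    for y0, y1, y2 in product((0, 1), repeat=3):
        prob = 1.0
        for y in (y0, y1, y2):
            prob *= p if y else q
        phi = math.atan(y2 - y1) - math.atan(y1 - y0)
        phi2 += prob * phi ** 2
    return d2, m2, phi2
```

The enumeration confirms d²_rms = p(1−p), m²_rms = 2p(1−p), and φ²_rms = 3π²p(1−p)/8, all symmetric in p and 1−p as expected.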
Based on our simple model, we now derive the best smoothing parameters for each of
the three criteria mentioned above. Once obtained, we will compute the corresponding
noise measures for the smoothed data which we will denote by [w] d 0
rms
rms ; and [w] OE 0
rms
respectively. In this notation the 'prime' indicates a single smoothing step and w is the
window size used.
A. Best Parameters to Minimize d'_rms
The unsmoothed y-coordinate of our border points is a discrete random variable following
a simple Bernoulli distribution, for which the expected value is p and the variance
is p(1−p). Obviously, the best fitting line is simply the expected value, and d_rms in Eq.
20 is the standard deviation of y_i. After smoothing, the expected value of y'_i is unchanged,
for any (finite) window size w. Denoting the expected value by E, we have from
elementary probability theory:

E(y'_i) = E( Σ_{j=−n}^{n} α_j y_{i+j} ) = Σ_{j=−n}^{n} α_j E(y_{i+j}) = p Σ_{j=−n}^{n} α_j = p.

Now we find the best choice of smoothing parameters and the corresponding noise measure
[w]d'_rms. Denoting variance by σ²(·), we have:

σ²(y'_i) = σ²( Σ_{j=−n}^{n} α_j y_{i+j} ).

Each α_j y_{i+j} is a discrete random variable (with two possible values) and its variance is
α_j² p(1−p). Thus

σ²(y'_i) = p(1−p) Σ_{j=−n}^{n} α_j².    (25)

We must now minimize

Σ_{j=−n}^{n} α_j²

subject to the constraint Σ_{j=−n}^{n} α_j = 1. This
problem is typically solved using the Lagrange multipliers method (see [53], page 182),
from which we obtain the simple result that all coefficients are equal,
hence of value 1/(2n+1)⁴. Substituting this value into Eq. 25, we obtain the corresponding
[w]d'²_rms = p(1−p)/w.
⁴ As expected, straight averaging reduces the variance of a collection of random variables faster than any other
weighted average.
Our findings can be summarized as follows:
• For a single smoothing iteration with arbitrary window size w, for any value of p, the
best choice of parameters to minimize the mean square distance is to set all α_j's to
1/w, resulting in

[w]d'_rms = √(p(1−p)/w) = d_rms / √w.

The fraction of the noise which is removed by the smoothing operation is

1 − [w]d'_rms / d_rms = 1 − 1/√w.

Hence, for w = 3 the noise is reduced by 42.3%; for w = 5, by 55.3%; for w = 7, by
62.2%. Finally, contrary to what one might expect, we note that the optimum 5-point
smoothing operation is not to apply the optimum 3-point operation twice. The latter is
equivalent to a 5-point window with α_j = (1/9, 2/9, 3/9, 2/9, 1/9), which gives d'_rms = (√19/9) d_rms.
This would remove approximately 51.6% of the noise.
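Since the smoothed variance is p(1−p) Σ α_j², the fraction removed depends only on the filter coefficients. A quick numeric check of the figures above (a sketch; the function name is ours):

```python
def frac_removed_distance(alphas):
    """Fraction of d_rms removed by one pass of a normalized filter:
    1 - sqrt(sum of alpha_j^2), independent of p (variance of independent terms)."""
    s = sum(a * a for a in alphas)
    return 1 - s ** 0.5

equal3 = [1 / 3] * 3
equal5 = [1 / 5] * 5
equal7 = [1 / 7] * 7
twice3 = [1 / 9, 2 / 9, 3 / 9, 2 / 9, 1 / 9]   # 3-point equal weights applied twice
```

The values reproduce 42.3%, 55.3%, 62.2% for w = 3, 5, 7, and about 51.6% for the twice-applied 3-point filter, confirming that it is inferior to the direct 5-point optimum.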
B. Best Parameters to Minimize m'_rms
In this section, we apply our second criterion for straightness and minimize the root mean
square value of the slope after smoothing. We consider two different ways of computing
the slope from contour points.
B.1 Based on m'_i = y'_{i+1} − y'_i

The simplest numerical estimate of the slope is given by the forward difference formula
m'_i = y'_{i+1} − y'_i. This gives

m'_i = Σ_{j=−n}^{n} α_j (y_{i+1+j} − y_{i+j}) = α_n y_{i+n+1} + Σ_{j=−n+1}^{n} (α_{j−1} − α_j) y_{i+j} − α_{−n} y_{i−n}.

Expanding the squared summation and taking the mean (Eq. 27), we use the fact that,
for any s, t ∈ Z, E(y_s y_t) = p² if s ≠ t and E(y_s²) = p. Substituting these
results, Eq. 27 can be further simplified by making use of Eq. 14. We obtain:

E(m'²_i) = 2p(1−p) ( Σ_{j=−n}^{n} α_j² − Σ_{j=−n}^{n−1} α_j α_{j+1} ).

Using Eq. 14 again, some algebraic manipulation leads to an expression for E(m'²_i) involving
only the n independent parameters α_1, ..., α_n.
Our task now is to minimize the mean squared slope. Differentiating with respect to
each α_k, 1 ≤ k ≤ n, yields a system of n linear equations.
Solutions are given in Table I for 1 ≤ n ≤ 6. The column before last gives the fraction of
the noise which is removed by the optimum smoothing method. For comparison purposes,
the last column provides the equivalent result when all weights are set equal to 1/w. Finally,
we note that for window size 5, the triangular filter using α_j = (1/9, 2/9, 3/9, 2/9, 1/9) results in
a noise reduction of 80.8%, slightly better than the equal-weights method.

TABLE I: Optimum smoothing parameters and fraction of noise removed
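The fraction of slope noise removed follows directly from the expression for E(m'²_i): since the unsmoothed value is 2p(1−p), the p-dependence cancels. A sketch (the function name is ours):

```python
def frac_removed_slope_forward(alphas):
    """Fraction of m_rms removed for the forward-difference slope
    m'_i = y'_{i+1} - y'_i:  1 - sqrt(S2 - S1), where S2 = sum of alpha_j^2
    and S1 = sum of alpha_j * alpha_{j+1} (neighbouring products)."""
    s2 = sum(a * a for a in alphas)
    s1 = sum(alphas[j] * alphas[j + 1] for j in range(len(alphas) - 1))
    return 1 - (s2 - s1) ** 0.5

equal5 = [0.2] * 5                         # equal weights, w = 5
tri5 = [1 / 9, 2 / 9, 3 / 9, 2 / 9, 1 / 9]  # triangular filter, w = 5
```

The triangular filter yields about 80.8% versus exactly 80% for equal weights, matching the comparison stated above.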
B.2 Based on m'_i = (y'_{i+1} − y'_{i−1})/2

A more accurate⁵ estimate of the slope is given by the central difference m'_i = (y'_{i+1} − y'_{i−1})/2.
Expanding it in terms of the original coordinates and following the same approach as in the
preceding section, we arrive at:

E(m'²_i) = (p(1−p)/2) ( Σ_{j=−n}^{n} α_j² − Σ_{j=−n}^{n−2} α_j α_{j+2} ).

When expressed in terms of the n independent α's, this involves only α_1, ..., α_n.
Minimizing with respect to α_k, for 1 ≤ k ≤ n, we obtain another set of n linear equations
for which the solutions are listed in Table II for 1 ≤ n ≤ 6.

Note that the distribution of coefficients from α_{−n} to α_n is no longer unimodal for
these values of n.

⁵ Provided the data resolution is fine enough.

TABLE II: Optimum smoothing parameters and fraction of noise removed

Finally we note that for window size w = 5, the triangular filter using α_j = (1/9, 2/9, 3/9, 2/9, 1/9)
results in a noise reduction of 66.7%, notably less than the equal-weights method.
C. Best Parameters to Minimize Deviation Angles
In this section, we examine the smoothing problem based on minimizing the deviation
angles φ'_i. Here the problem is more complex and we will not obtain general expressions of
the optimum smoothing parameters which are independent of the probability p involved
in our model. We restrict our study to the cases w = 3 and w = 5.

Our definitions of section I imply that φ'_i = θ'_{i+1} − θ'_i. From trigonometry, we have

tan φ'_i = (tan θ'_{i+1} − tan θ'_i) / (1 + tan θ'_{i+1} tan θ'_i).    (33)

Now tan θ'_i = y'_i − y'_{i−1}, since the x-coordinates are unchanged and the x-steps are of
unit length. Thus we can obtain tan φ'_i from the
smoothed coordinates and then φ'_i from the value of the tangent.
C.1 For w = 3

For w = 3, φ'_i at p'_i will depend on the original contour points in a 5-point neighbourhood
around p_i. For our model of Eq. 18, there are 2⁵ = 32 possible configurations for such
a neighbourhood, which must be examined for their corresponding φ'_i. Of course, these
computations need not be performed manually; they can be carried out using a language
for symbolic mathematical calculation.
Adding together the contributions from the 32 possible configurations, weighted by
the respective probabilities of these configurations, results in an expression (Eq. 34) for
the mean squared tangent E(tan² φ'_i) in terms of p and α.
For simplicity we have dropped the subscript on the unique parameter α_1. Numerical
optimization was performed to find the value of α which minimizes Eq. 34. No single
value of α will minimize E(tan² φ'_i) for all values of p. The results are shown in Fig. 5(a).
The best value of α is a smooth function of p. However, we note that its range of
variation is very small.
We cannot compare the results obtained by minimizing the mean squared tangent of φ'_i
to the situation without smoothing, since E(tan² φ_i) is infinite. By taking the arc tangent
of Eq. 33, we can obtain the values of the angles φ'_i themselves, and we can derive
an expression for E(φ'²_i) in the same manner. Numerical optimization of this expression yields
the results shown in Fig. 5(b). As can be seen, they are almost the same as those of Fig.
5(a). In a similar fashion, we can generate expressions for E(|tan φ'_i|) and E(|φ'_i|), for which the
best smoothing parameters are shown in Fig. 5(c) and 5(d) respectively. Here there seems
to be one predominant best parameter over a wide range of values for p.
All the results shown in Fig. 5 were obtained numerically, for values of p ranging from
0.005 to 0.995, in steps of 0.005. As expected, all these curves are symmetric about p = 0.5,
so we will limit our discussion to p < 0.5. In Fig. 5(c), the best value of α for small p is
0.25; near p = 0.495, the best value is 0.2857.
Between these two intervals, the best α increases almost linearly. In Fig. 5(d), the same values of
α are found: 0.25 is the best choice for p ∈ (0.005, 0.150), and 0.2857 is the best
choice for p ∈ (0.150, 0.495).

In Fig. 5(a) and 5(b), the best value of α is a smoothly varying function of p. But we
notice that 0.2857 is an intermediate value of α in the narrow range of best values.
In fact, choosing α = 0.2857 for all values of p, the value of E(tan² φ'_i) is always within 0.2%
of the minimum possible.
Despite the differences in the actual curves of Fig. 5, the corresponding ranges of best
α's are always quite narrow and very similar, independently of the exact criterion chosen.
From now on, to maintain uniformity with the treatment of sections III-A and III-B, we
will restrict ourselves to minimizing the mean squared angle.

Eq. 22 provided a measure of the noise before smoothing: φ_rms = π √(3p(1−p)/8). The
fraction of this noise (1 − φ'_rms/φ_rms) which is removed by a single smoothing operation
with w = 3 was computed for different values of α. The results are displayed in Fig. 6.
The solid line represents the best case and we see that approximately 73% of the r.m.s.
noise is removed. The dashed line, representing the case where α = 0.2857 is used for all
values of p, is not distinguishable from the best case at this scale. The dash-dotted and the
dotted lines represent the fraction of noise removed for two other fixed values of α;
in this last case, the fraction is a constant equal to 0.6655.
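The w = 3 enumeration over the 32 configurations can also be carried out numerically rather than symbolically. The sketch below (function names and the grid scan are ours) evaluates the mean squared deviation angle after one smoothing pass and locates the best α for a given p:

```python
import math
from itertools import product

def mean_sq_angle(p, alpha):
    """E(phi'^2) after one w = 3 smoothing pass, enumerating the 32 configurations."""
    q = 1 - p
    total = 0.0
    for ys in product((0, 1), repeat=5):   # y_{i-2} .. y_{i+2}
        prob = 1.0
        for y in ys:
            prob *= p if y else q
        # smoothed y' at positions i-1, i, i+1 (x-steps stay 1,
        # so tan(theta') is just the difference of consecutive y')
        ysm = [alpha * ys[k - 1] + (1 - 2 * alpha) * ys[k] + alpha * ys[k + 1]
               for k in (1, 2, 3)]
        phi = math.atan(ysm[2] - ysm[1]) - math.atan(ysm[1] - ysm[0])
        total += prob * phi * phi
    return total

def best_alpha(p, grid=500):
    """Grid scan of alpha over [0, 0.5]; a crude stand-in for the paper's optimizer."""
    return min((k / (2 * grid) for k in range(grid + 1)),
               key=lambda a: mean_sq_angle(p, a))
```

Setting alpha = 0 recovers the unsmoothed value 3π²p(1−p)/8 of Eq. 22, which ties this enumeration back to the earlier derivation; the minimizing α falls in the narrow band reported in Fig. 5, and the fraction of φ_rms removed is on the order of the 73% quoted above.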
C.2 For
For possible configurations of a 7-point neighbourhood centered on p 0
must be considered to obtain the values of OE 0
after smoothing. In Eq. 34, there were 9
distinct terms involving p and ff. Now the computation of tan 2 OE 0
results in 35 distinct
terms in p, ff 1 , and ff 2 . We will not reproduce this lengthy expression here.
Fig. 7(a) presents the best choice of parameters to minimize the mean squared angle. We notice that there is
very little variation in their values over the range of values of p. The optimum parameters
are approximately constant; these values are close to 2/9 and 1/9, the
values for the triangular 5-point filter. Fig. 7(b) shows the fraction of φ_rms removed by the
smoothing operation. The solid line represents the case where the optimum parameters are
used for each value of p. In this situation, approximately 88.75% of the noise is removed.
The dashed line represents the case α_1 = 2/9 and α_2 = 1/9, for which approximately 86% of the noise is
removed. We see that these results are close to the ideal situation.
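The local weighted averaging filter discussed throughout can be sketched as follows; the function name and the endpoint handling are our own assumptions. Weights (α_2, α_1, 1 − 2α_1 − 2α_2, α_1, α_2) are applied to a coordinate sequence; with α_1 = 2/9 and α_2 = 1/9 this is the triangular 5-point filter.

```python
def smooth_contour(ys, a1, a2):
    """5-point local weighted average with weights (a2, a1, c, a1, a2),
    where c = 1 - 2*a1 - 2*a2, so the weights sum to 1.
    Endpoint handling (leave the two first/last samples unchanged)
    is an assumption, not taken from the paper."""
    c = 1.0 - 2.0 * a1 - 2.0 * a2
    out = list(ys)
    for i in range(2, len(ys) - 2):
        out[i] = (a2 * ys[i - 2] + a1 * ys[i - 1] + c * ys[i]
                  + a1 * ys[i + 1] + a2 * ys[i + 2])
    return out
```

A constant sequence passes through unchanged, while an isolated 1-pixel bump is attenuated by the center weight.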
IV. Verifying Results for Varying Curvature
In the preceding section, we have studied optimum local weighted averaging extensively,
based on a model of a horizontal border with random 1-pixel noise. Particular solutions
were derived based on error criteria chosen in light of specific computational tasks to be
performed after the smoothing operation. But can these results be relied upon to handle
digitization noise along arbitrary contours?
Our results were obtained for a straight, horizontal border, i.e. a line of curvature 0.
But for arbitrary contours, curvature may vary from point to point. Should optimum
smoothing parameters vary with curvature and, if so, in what manner? For a given
window size, can a fixed set of smoothing parameters be found which will give optimum
(or near optimum) results across a wide range of curvature values? If so, how does this
set of parameters compare with the one we have derived using our simple model?
In this section, we try to answer the above questions by performing some experiments
with digital circles. It should be clear that our interest is not with digital circles per se
but rather, as explained above, with the variation of optimum smoothing parameters with
curvature. The approach will be to examine, for digital circles of various radii, situations
which are equivalent to the ones studied for the horizontal straight border in sections III-A,
III-B, and III-C. Using numerical optimization, we will find the best choice of smoothing
parameters for each situation, over a wide range of curvature values, and compare them
with the values obtained previously. An example of a digital circle is shown in Fig. 8 for
a radius R = 7.
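To make the setup concrete, a minimal sketch of generating the first octant of a digital circle is given below. The function name is ours; the paper uses Horn's procedure [55] with Kulpa's correction [56], whereas this sketch simply rounds x = sqrt(R² − y²), which is close but not guaranteed identical to that generator.

```python
import math

def circle_first_octant(R):
    """Integer pixels (x, y) approximating the first octant (0 <= y <= x)
    of a circle of radius R, by rounding x = sqrt(R^2 - y^2) for each y.
    A simplified stand-in for Horn's generator, not a verbatim copy."""
    pts = []
    for y in range(R + 1):
        x = int(round(math.sqrt(R * R - y * y)))
        if y > x:
            break
        pts.append((x, y))
    return pts
```

For R = 7 (as in Fig. 8) this yields the run from (7, 0) down to the diagonal pixel (5, 5).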
A. Minimizing Error on Distances to Center
In this section, we consider the distances d_i from the center to each pixel P_i as approximations
to the radius R. See Fig. 9(a). After smoothing, pixel P_i is replaced by pixel P'_i,
which is at a distance d'_i from the center of the circle. Our aim is to find the values of the
smoothing parameters which minimize the r.m.s. value of (R − d'_i);
these parameters might vary depending on the radius of the circles.
For reasons of symmetry, it is only necessary to consider one quadrant; with special
attention to the main diagonal, we can restrict our attention to the first octant of each circle.
November 20, 1997
Let N_{1/8} be the number of pixels which are strictly within the first octant. The
mean value of (R − d'_i)² is obtained by adding twice the sum of (R − d'_i)² for these
points, plus the value for the pixel at coordinates (R, 0), plus the value for the pixel on
the main diagonal (if present). This sum is then divided by 2N_{1/8} + 1 (or 2N_{1/8} + 2 when
there is a pixel on the main diagonal).
We now give a simple example, for R = 4, where N_{1/8} = 2. First we consider the situation
before any smoothing is applied. For the point on the x-axis, the value of (R − d_0)² is
always 0. For the next point (4, 1), (R − d_1)² = (4 − √17)². For the next point (3, 2),
(R − d_2)² = (4 − √13)². Finally, for the point on the main diagonal (3, 3), (R − d_3)² =
(4 − √18)². The contributions for points (4, 1) and (3, 2) are counted twice and added to
the contributions for the diagonal point (3, 3). This sum is then divided by 6. Taking the
square root of the result, we obtain (R − d)_rms ≈ 0.258.
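This worked example can be checked mechanically; the sketch below (function name ours) reproduces the octant-weighted r.m.s. computation just described.

```python
import math

def radial_rms_error(R, octant_pts):
    """R.m.s. of (R - d_i) over a quadrant, computed from first-octant
    pixels: strictly interior octant points count twice, while the pixel
    at (R, 0) and a diagonal pixel (if any) count once."""
    total, count = 0.0, 0
    for (x, y) in octant_pts:
        w = 1 if (y == 0 or x == y) else 2  # octant symmetry weighting
        total += w * (R - math.hypot(x, y)) ** 2
        count += w
    return math.sqrt(total / count)
```

For R = 4 with octant pixels (4, 0), (4, 1), (3, 2), (3, 3), the divisor is 6 and the result matches the value above.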
After smoothing with the smallest window size, we have analogous values of (R − d'_i)² for
the four pixels, each now a function of α, and we must minimize the resulting expression
for the mean of (R − d'_i)². Thus the
best smoothing parameter for R = 4 can be found.
In the numerical computations it is possible to take advantage of the fact that, for small
window sizes, the smoothing rarely affects the y-coordinates in the first octant; exceptions
occur occasionally for the last pixel in the first octant (not on the diagonal) and for the
pixel on the diagonal when the preceding pixel has the same x-coordinate. This last
condition is found only for radii values of 1, 4, 11, 134, 373, 4552 etc. (see Kulpa [54]).
The coordinates of the pixels for the first octant of the digital circles were generated
using the simple procedure presented in Horn [55], with a small correction pointed out by
Kulpa [56] (see also Doros [57]). The best smoothing parameters were obtained for integer
radii values ranging from 2 pixels to 99 pixels, in steps of 1. The results are presented in
Fig. 9(b) for n = 3 and in Fig. 9(c) for n = 5. For comparison, the values derived in section
III-A from our model of a noisy horizontal edge are shown with dashed lines.
For n = 3, we see that for radii values larger than 20 pixels the best α_{±1} oscillates around 1/3,
as derived from our model. Similarly, for n = 5, the best values of α_{±1} and α_{±2} are
close to the predicted value of 0.2. For small radii values however, the optimum α_{±2}-values
are much lower than this value and the optimum α_{±1}-values are correspondingly higher.
This is easily understood since a 5-pixel neighbourhood covers a relatively large portion
of the circumference in these cases (as much as one eighth of the total circumference for a
radius of 6 pixels, one fourth for a radius of 3 pixels). In fact, for radii values of 2, 3, 4,
6, and 8 pixels, it is best to use α_{±2} = 0, using only the nearest neighbours.
Fig. 9(d) compares the r.m.s. values of the errors on the radii without smoothing (solid
line) to the best values possible after smoothing with window sizes of n = 3 (dashed line)
and n = 5 (dotted line) respectively.
For each value of the radius, we have also compared the noise reduction achieved using
the optimum parameters to that achieved with the constant values 1/3 and 1/5. The results
are presented in Fig. 10(a) and 10(b), for n = 3 and n = 5 respectively. For n = 3, the
2 curves are indistinguishable for larger radii, and they are very close for R ≤ 10. For
n = 5 and small radii, smoothing with the fixed parameters is actually
worse than no smoothing at all. But, for R > 18, the best curve and that obtained with
these fixed values are very close.
Finally, for 2 ≤ R ≤ 99, Table III compares the
mean noise reduction of 3 methods: the optimum method, corresponding to the variable
parameters of Fig. 9(b) and Fig. 9(c); the fixed parameter method derived from our model
of section III; and the best fixed parameter method obtained from numerical estimates.
We see that the results are very close and that the method derived from our simple noise
model compares very well with the numerically determined best fixed parameter method.
B. Minimizing Error on Tangent Directions
In this section, we compare the direction of the tangent to a circle at a given point
to the numerical estimate of that direction, obtained for digital circles. The situation is
Window   Method       α_{±1}     α_{±2}     Mean noise reduction
n = 3    optimum      variable      -        0.484317
n = 3    best fixed   0.3329        -        0.482673
n = 5    optimum      variable   variable    0.6184
n = 5    best fixed   0.2148     0.1703      0.6117

TABLE III
Mean noise reduction for 2 ≤ R ≤ 99
illustrated in Fig. 11(a).
Since the slope of the tangent is infinite at pixel (R, 0) of the digital circle, we will
consider instead the angle which the tangent line makes with the x-axis. The radius
from the center of the circle to pixel P_i makes an angle θ_i with the positive x-axis. Now
consider the point where this radius intersects the continuous circle. Theoretically, the
angle between the tangent to the circle at that point and the x-axis is θ_i + π/2. On the
other hand, the numerical estimate of this angle is given by θ̄_i + π/2, where θ̄_i is the angle
between the horizontal axis and the perpendicular bisector of the segment from P_{i−1} to
P_{i+1} (see Fig. 11(a)). The difference between these angles, (θ_i − θ̄_i), is the error on the
elevation of the tangent to the circle at the point of interest. Our goal is to minimize the
r.m.s. value of (θ'_i − θ̄'_i), where primes refer to the quantities after smoothing.
The values of θ_i and θ̄_i are readily computed in terms of the original coordinates of the
digital circle; the values of θ'_i and θ̄'_i
are obtained similarly, in terms of the coordinates after smoothing.
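As a sketch of this estimate (function name ours), the chord direction of P_{i−1}P_{i+1} is used below, which is equivalent to the perpendicular-bisector formulation up to a rotation by π/2.

```python
import math

def tangent_error(p_prev, p_i, p_next):
    """Angular error between the true tangent direction at the circle
    point along the radius through p_i and the chord-based estimate.
    The chord P_{i-1}P_{i+1} direction equals the perpendicular bisector
    direction rotated by pi/2, so the two formulations agree."""
    theta = math.atan2(p_i[1], p_i[0])       # elevation of the radius
    true_tangent = theta + math.pi / 2.0     # tangent is normal to radius
    dx = p_next[0] - p_prev[0]
    dy = p_next[1] - p_prev[1]
    est_tangent = math.atan2(dy, dx)         # chord direction
    err = true_tangent - est_tangent
    # wrap to (-pi/2, pi/2]: tangent directions are defined modulo pi
    while err <= -math.pi / 2.0:
        err += math.pi
    while err > math.pi / 2.0:
        err -= math.pi
    return err
```

For symmetric points on the continuous circle the chord estimate is exact, so the error vanishes.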
Once again, the r.m.s. error for the entire circle can be computed by considering only
the first octant; and the best smoothing parameters were obtained for integer radii values
ranging from 4 pixels to 99 pixels, in steps of 1. The results are presented in
Fig. 11(b) for n = 3 and in Fig. 11(c) for n = 5.
In section III-B.2, for n = 3 the optimum value derived for α_{±1} was 2/5; for n = 5, the
optimum values derived for α_{±1} and α_{±2} were 1/7 and 3/14 respectively. These values are shown
with horizontal dashed lines in Fig. 11(b) and 11(c). The best smoothing parameters vary
with the values of R. However, when we compute their means for 4 ≤ R ≤ 99, the results
obtained are very close to the predicted values: for n = 3, the mean best α_{±1} is close
to the predicted 0.4; for n = 5, the mean best values are close to the predicted 0.1429 and
0.2143.
Fig. 11(d) compares the r.m.s. values of the errors on the elevation of the tangents to
a circle without smoothing (solid line) to the best values possible after smoothing with
window sizes n = 3 (dashed line) and n = 5 (dotted line).
The noise reduction produced by smoothing is equal to 1.0 − (θ' − θ̄')_rms/(θ − θ̄)_rms. For each
value of the radius, we have compared the noise reduction achieved using the optimum
parameters to that achieved with the constant values α_{±1} = 2/5 for n = 3, and
α_{±1} = 1/7, α_{±2} = 3/14 for n = 5. The results are presented in Fig. 12(a) and 12(b) respectively.
As can be seen, the constant values predicted by our simple model yield noise reduction
results which are very close to optimum.
Finally, for 4 ≤ R ≤ 99 and window sizes n = 3 and n = 5, Table IV compares
the mean noise reduction of 3 methods as explained previously. The best smoothing
parameters derived from our simple noise model and the numerically determined best
fixed parameters are almost the same and their performance is nearly optimal.
C. Minimizing Error on Deviation Angles
In this section, we compare the deviation angles along the circumference of a circle to
the numerical estimates obtained for digital circles. The situation is illustrated in Fig.
13(a).
Consider 3 consecutive pixels P_{i−1}, P_i, P_{i+1} on the circumference of a digital circle.
The deviation angle at P_i is denoted by φ_i. Now the line segments from the center of the
circle to these 3 pixels (partly represented by dashed lines in the figure) have elevation
angles of θ_{i−1}, θ_i, θ_{i+1} respectively. The intersections of these line segments with the
circle are the true circle points Q_{i−1}, Q_i, Q_{i+1} at the same elevations. Connecting these
Window   Method        α_{±1}     α_{±2}     Mean noise reduction
n = 3    optimum       variable      -        0.58084
n = 3    best fixed    0.4026        -        0.57478
n = 5    optimum       variable   variable    0.7658
n = 5    model fixed   1/7        3/14        0.7572
n = 5    best fixed    0.1445     0.2088      0.7576

TABLE IV
Mean noise reduction for 4 ≤ R ≤ 99
points by line segments defines a deviation angle δ_i at Q_i, for which φ_i is a numerical
estimate.
The difference between δ_i and φ_i is the error on the deviation angle at the point of
interest. Our goal in this section is to find the optimum parameters which will minimize
the r.m.s. value of this error, after smoothing.
In terms of the pixel coordinates, the deviation angle φ_i is given by Eq. 37.
To compute δ_i, we first obtain the elevation angles as θ_i = arctan(y_i/x_i), then the
coordinates of the circle points Q_i as (x̃_i, ỹ_i) = (R cos θ_i, R sin θ_i).
Finally, δ_i is computed as in Eq. 37, using (x̃, ỹ) instead of (x, y).
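A sketch of the deviation-angle computation follows (function name ours): the deviation angle at the middle point is the turn between the two consecutive segments.

```python
import math

def deviation_angle(p0, p1, p2):
    """Deviation (turn) angle at p1 between segments p0->p1 and p1->p2:
    0 for collinear points, positive for a left turn."""
    ux, uy = p1[0] - p0[0], p1[1] - p0[1]
    vx, vy = p2[0] - p1[0], p2[1] - p1[1]
    cross = ux * vy - uy * vx
    dot = ux * vx + uy * vy
    return math.atan2(cross, dot)
```

Collinear points give 0, while a right-angle left turn gives π/2.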
The best smoothing parameters were obtained for integer radii ranging from 4 pixels to
99 pixels, in steps of 1. The results are presented in Fig. 13(b) for n = 3 and in Fig. 13(c)
for n = 5.
In section III-C, for n = 3 the optimum value derived for α_{±1} was 0.2857; for n = 5, the
optimum values derived for α_{±1} and α_{±2} were 0.2381 and 0.1189 respectively. These values
are shown with dashed lines in Fig. 13(b) and 13(c). Again, the best smoothing parameters
vary with the values of R. However, when we compute their means for 4 ≤ R ≤ 99, the
results obtained are very close to the predicted values.
Fig. 13(d) compares the r.m.s. values of the errors on the deviation angles without
smoothing (solid line) to the best values possible after smoothing with window sizes
n = 3 (dashed line) and n = 5 (dotted line).
The noise reduction produced by smoothing is equal to 1.0 − (δ' − φ')_rms/(δ − φ)_rms. For each
value of the radius, we have compared the noise reduction achieved using the optimum
parameters to that achieved with the constant values α_{±1} = 0.2857 for n = 3, and
α_{±1} = 0.2381, α_{±2} = 0.1189 for n = 5. The results are presented in Fig. 14(a) and 14(b)
respectively.
As can be seen, the constant values predicted by our simple model yield noise reduction
results which are very close to optimum.
For 4 ≤ R ≤ 99 and window sizes n = 3 and n = 5, Table V compares the mean noise
reduction of 3 methods as explained previously. The best smoothing parameters derived
from our simple noise model and the numerically determined best
fixed parameters are even closer than in the 2 previous cases and their performance is very nearly optimal.
Window   Method       α_{±1}     α_{±2}     Mean noise reduction
n = 3    optimum      variable      -        0.770903
n = 3    best fixed   0.2865        -        0.765732
n = 5    optimum      variable   variable    0.910833
n = 5    best fixed   0.2386     0.1190      0.906917

TABLE V
Mean noise reduction for 4 ≤ R ≤ 99
Finally, if one prefers to use computationally
convenient values of α for deviation angle measurements, the mean error reduction level is 0.8832.
This is very good but not quite as effective as the optimum methods previously discussed.
A comparison of the error reduction with the best possible solution, for every value of R,
appears in Fig. 15.
V. Conclusion
This paper has presented a different avenue to solve the problem of optimum smoothing
of 2-D binary contours. Several approaches were reviewed with particular emphasis on
their theoretical merits and implementation difficulties. It was argued that most methods
are eventually implemented as a local weighted average with particular weight values.
Hence we adopted this scheme as the starting point of our investigation into optimum
methods.
Furthermore, there are many applications where smoothing is performed to improve the
precision of specific measurements to be computed from the contour points. In such cases,
the smoothing parameters should be chosen based on the nature of the computations
intended, instead of relying on a single, general 'optimality' criterion. Thus our work was
focused on optimum local weighted averaging methods tailored for specific computational
goals. In the present article, we have considered three such goals: obtaining reliable
estimates of point positions, of slopes, and of deviation angles along the contours.
To study the problem, a simple model was defined to represent 1-pixel random noise
along a straight horizontal border. Based on this simple model, an in-depth analytical
investigation of the problem was carried out, from which precise answers were derived for
the 3 chosen criteria.
Despite its simplicity, this model captures well the kind of perturbations which digitization
noise causes in the numerical estimation of various quantities along 2-D binary
contours, even with arbitrary curvature. This was indeed verified, for window sizes of n = 3
and n = 5, by finding the best smoothing parameters, using equivalent criteria, for digital
circles over a wide range of radii.
In this general case, the best smoothing parameters were found to vary according to
the length of the radius. Thus, in order to take full advantage of these optimum filters,
it would be necessary to compute local estimates of the radius of curvature for groups of
consecutive pixels along the contour, and then apply the best parameters found for these
radii. This would significantly reduce the efficiency of the smoothing operation. However,
it is not really necessary to go to that extent since the performance of these varying-weight
optimum filters can be very nearly approached by methods with a fixed set of parameters.
The latter were derived by numerical computation, for a wide range of radii. And it turned
out that their values were very close to those predicted using our simple digitization noise
model.
These numerical computations with varying radius of curvature validate our proposed
model and confer added confidence to the results obtained from it. Researchers requiring
simple and effective local weighted averaging filters before making numerical estimates of
specific quantities can thus rely on this model to derive optimum methods tailored to their
particular needs.
VI. Acknowledgements
The authors would like to thank Dr. Louisa Lam for helpful comments and suggestions
made on an earlier draft of this paper.
This work was supported by the National Networks of Centres of Excellence research
program of Canada, as well as research grants awarded by the Natural Sciences and Engineering
Research Council of Canada and by an FCAR team research grant awarded by
the Ministry of Education of Quebec.
References
"Digitized circular arcs: Characterization and parameter estimation,"
"Curve fitting approach for tangent angle and curvature measurements,"
"On the encoding of arbitrary geometric configurations,"
"Extraction of invariant picture sub-structures by computer,"
"Analysis of the digitized boundaries of planar objects,"
"On the digital computer classification of geometric line patterns,"
"Improved computer chromosome analysis incorporating preprocessing and boundary analysis,"
"Computer recognition of partial views of curved objects,"
"The medial axis of a coarse binary image using boundary smoothing,"
"Performances of polar coding for visual localisation of planar objects,"
"Curve smoothing for improved feature extraction from digitized pictures,"
"Measurements of the lengths of digitized curved lines,"
"Multiple resolution skeletons,"
"Non-parametric dominant point detection,"
"On the detection of dominant points on digital curves,"
"Fitting digital curve using circular arcs,"
"Scale-space filtering,"
"The structure of images,"
"The curvature primal sketch,"
"Scale-based description and recognition of planar curves and two-dimensional shapes,"
"Robust contour decomposition using a constant curvature criterion,"
"Adaptive smoothing: A general tool for early vision,"
"Pattern spectrum and multiscale shape representation,"
"A multiscale approach based upon morphological filtering,"
"Scale-space from nonlinear filters,"
"Smoothing by spline functions,"
"Spline functions and the problem of graduation,"
"A regularized solution to edge detection,"
"A computational approach to edge detection,"
"Uniqueness of the gaussian kernel for scale-space filtering,"
"Scaling theorems for zero crossings,"
"The renormalized curvature scale space and the evolution properties of planar curves,"
"Scaling theorems for zero-crossings,"
"Scaling theorems for zero crossings of bandlimited signals,"
"Scale-space for discrete signals,"
"An extended class of scale-invariant and recursive scale space filters,"
"Multiscale nonlinear decomposition: The sieve decomposition theorem,"
"Optimal estimation of contour properties by cross-validated regularization,"
"Optimal smoothing of digitized contours,"
"One-dimensional regularization with discontinuities,"
"Partial smoothing splines for noisy boundaries with corners,"
"Describing
"Filtering closed curves,"
"Organization of smooth image curves at multiple scales,"
"Local reproducible smoothing without shrinkage,"
"Iterative smoothed residuals: A low-pass filter for smoothing with controlled shrinkage,"
"Optimal L1 approximation of the gaussian kernel with application to scale-space construction,"
"Regularization of digitized plane curves for shape analysis and recognition,"
"Refining curvature feature extraction to improve handwriting recognition,"
"Multistage digital filtering utilizing several criteria,"
"Computer recognition of unconstrained hand-written numerals,"
"Speed, accuracy, flexibility trade-offs in on-line character recognition,"
John Wiley
"On the properties of discrete circles, rings, and disks,"
"Circle generators for display devices,"
"A note on the paper by
"Algorithms for generation of discrete circles, rings, and disks,"
Cited by:
Ke Chen, Adaptive Smoothing via Contextual and Local Discontinuities, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.27 n.10, p.1552-1567, October 2005
Helena Cristina da Gama Leito, Jorge Stolfi, A Multiscale Method for the Reassembly of Two-Dimensional Fragmented Objects, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.24 n.9, p.1239-1251, September 2002

Keywords: optimal local weighted averaging; contour smoothing; gaussian smoothing; digitization noise modeling

Bias in Robust Estimation Caused by Discontinuities and Multiple Structures

Abstract: When fitting models to data containing multiple structures, such as when fitting surface patches to data taken from a neighborhood that includes a range discontinuity, robust estimators must tolerate both gross outliers and pseudo outliers. Pseudo outliers are outliers to the structure of interest, but inliers to a different structure. They differ from gross outliers because of their coherence. Such data occurs frequently in computer vision problems, including motion estimation, model fitting, and range data analysis. The focus in this paper is the problem of fitting surfaces near discontinuities in range data. To characterize the performance of least median of the squares, least trimmed squares, M-estimators, Hough transforms, RANSAC, and MINPRAN on this type of data, the "pseudo outlier bias" metric is developed using techniques from the robust statistics literature, and it is used to study the error in robust fits caused by distributions modeling various types of discontinuities. The results show each robust estimator to be biased at small, but substantial, discontinuities. They also show the circumstances under which different estimators are most effective. Most importantly, the results imply present estimators should be used with care, and new estimators should be developed.
263443 | Bias in Robust Estimation Caused by Discontinuities and Multiple Structures. | AbstractWhen fitting models to data containing multiple structures, such as when fitting surface patches to data taken from a neighborhood that includes a range discontinuity, robust estimators must tolerate both gross outliers and pseudo outliers. Pseudo outliers are outliers to the structure of interest, but inliers to a different structure. They differ from gross outliers because of their coherence. Such data occurs frequently in computer vision problems, including motion estimation, model fitting, and range data analysis. The focus in this paper is the problem of fitting surfaces near discontinuities in range data.To characterize the performance of least median of the squares, least trimmed squares, M-estimators, Hough transforms, RANSAC, and MINPRAN on this type of data, the "pseudo outlier bias" metric is developed using techniques from the robust statistics literature, and it is used to study the error in robust fits caused by distributions modeling various types of discontinuities. The results show each robust estimator to be biased at small, but substantial, discontinuities. They also show the circumstances under which different estimators are most effective. Most importantly, the results imply present estimators should be used with care, and new estimators should be developed. | Introduction
Robust estimation techniques have been used with increasing frequency in computer vision
applications because they have proven effective in tolerating the gross errors (outliers) characteristic
of both sensors and low-level vision algorithms. Most often, robust estimators are
used when fitting model parameters - e.g. the coefficients of either a polynomial surface, an
affine motion model, a pose estimate, or a fundamental matrix - to a data set. For these
applications, robust estimators work reliably when the data contain measurements from a
single structure, such as a single surface, plus gross errors.
Sometimes, however, the data are more complicated than this, presenting a challenge to
robust estimators not anticipated in the robust statistics literature. This complication occurs
when the data are measurements from multiple structures while still being corrupted by
gross outliers. These structures may be different surfaces in depth measurements or multiple
moving objects in motion estimation. Here, the difficulty arises because robust estimators
are designed to extract a single fit. Thus, to estimate accurate parameters modeling one of
the structures - which one is not important - they must treat the points from all other
structures as outliers. After successfully estimating the fit parameters of one structure, the
robust estimator may be re-applied, if desired, to estimate subsequent fits after removing
the first fit's inliers from the data.
An example using synthetic range data illustrates the potential problems caused by multiple
structures. Figure 1 shows (non-robust) linear least-squares fits to data from a single
surface and to data from a pair of surfaces forming a step discontinuity. In the single surface
example, the least-squares fit is skewed slightly by the gross outliers, but the points from
the surface are still generally closer to the fit than the outliers. Thus, the fit estimated by
a robust version of least-squares will not be significantly corrupted by these outliers. In the
multiple surface example, the least-squares fit is skewed so much that it crosses (or "bridges")
the point sets from both surfaces, placing the fit in close proximity to both point sets. Since
robust estimators use fit proximity to distinguish inliers and outliers and downgrade the
influence of outliers, this raises two concerns about the accuracy of robust fits. First, an
estimator that iteratively refines an initial least squares fit will have a local, and potentially
Figure 1: Examples demonstrating the effects of (a) gross outliers and (b) both gross outliers
and data from multiple structures on linear least-squares fits.
global, minimum fit that is not far from the initial, skewed fit. This is because points from
both surfaces will have both small and large residuals, making it difficult for the estimator to
"pull away" from one of the surfaces. Second, and more important, for the robust estimate
be the correct fit, thereby treating the points from one surface as inliers and points from the
other as outliers, the estimator's objective function must be lower for the smaller inlier set
of the correct fit than the larger inlier set of the bridging fit. By varying both the proximity
of the two surfaces and the relative sizes of their point sets, all robust estimators studied
here can be made to "fail" on this data, producing fits that are heavily skewed.
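The bridging effect is easy to reproduce. The following sketch (the synthetic data and all names are our own choices, not the paper's experiment) fits an ordinary least-squares line to points drawn from the two levels of a step and shows the fit landing between them.

```python
import random

def fit_line_1d(pts):
    """Ordinary least-squares fit z = a*x + b to (x, z) pairs."""
    n = len(pts)
    sx = sum(x for x, _ in pts)
    sz = sum(z for _, z in pts)
    sxx = sum(x * x for x, _ in pts)
    sxz = sum(x * z for x, z in pts)
    a = (n * sxz - sx * sz) / (n * sxx - sx * sx)
    b = (sz - a * sx) / n
    return a, b

random.seed(0)
# Step discontinuity: level 0 for x < 50, level 10 for x >= 50.
data = [(x, (0.0 if x < 50 else 10.0) + random.gauss(0, 0.1))
        for x in range(100)]
a, b = fit_line_1d(data)
mid = a * 50 + b   # fitted height at the step: near 5, between both levels
```

The fit has a clearly positive slope and crosses the gap at roughly half the step height, so it is close to both point sets at once, which is precisely the "bridging" configuration discussed above.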
Motivated by the foregoing discussion, the goal of this paper is to study how effectively
robust estimators can estimate fit parameters given a mixture of data from multiple struc-
tures. Stating this "pseudo outliers problem" abstractly, to obtain an accurate fit a robust
technique must tolerate two different types of outliers: gross outliers and pseudo outliers.
Gross outliers are bad measurements, which may arise from specularities, boundary effects,
physical imperfections in sensors, or errors in low-level vision computations such as edge
detection or matching algorithms. Pseudo outliers are measurements from one or more additional
structures. (Without losing generality, inliers and pseudo outliers are distinguished by
assuming the inliers are points from the structure contributing the most points and pseudo
outliers are points from the other structures.) The coherence of pseudo outliers distinguishes
them from gross outliers. Because data from multiple structures are common in vision ap-
plications, robust estimators' performance on this type of data must be understood to use
them effectively. Where they prove ineffective, new and perhaps more complicated robust
techniques will be needed. 1
To study the pseudo outliers problem, this paper develops a measure of "pseudo outlier
bias" using tools from the robust statistics literature [10, pages 81-95] [12, page 11]. Pseudo
outlier bias will measure the distance between a robust estimator's fit to a ``target'' distribution
and its fit to an outlier corrupted distribution. The target distribution will model the
distribution of points drawn from a single structure without outliers, and the outlier corrupted
mixture distribution [27] will combine distributions modeling the different structures
and a gross outlier distribution. The optimal fit is found by applying the functional form of
an estimator to these distributions, rather than by applying the estimator's standard form
to particular sets of points generated from these distributions. This gives a theoretical mea-
sure, avoids the need for extensive simulations, and, most importantly, shows the inherent
limitations of robust estimators by studying their objective functions independent of their
search techniques. The bias of a number of estimators - M-estimators [12, Chapter 7], least
median of squares (LMS) [16, 21], least trimmed squares (LTS) [21], Hough transforms [13],
RANSAC [7], and MINPRAN [26] - will be studied as the target and mixture distributions
vary.
The application for studying the pseudo outliers problem is fitting surfaces to range data
taken from the neighborhood of a surface discontinuity. While this is a simple application for
studying the pseudo outliers problem, the problem certainly arises in other applications as
well - essentially any application where the data could contain multiple structures - and
the results obtained here should be used as qualitative predictions of potential difficulties in
these applications. In the context of the range data application, three idealized discontinuity
models are used to develop mixture distributions: step edges, crease edges and parallel
surfaces. Step edges model depth discontinuities, where points from the upper surface of
the step are pseudo outliers to the lower surface. Crease edges model surface orientation
discontinuities, where points from one side of the crease are pseudo outliers to the other.
(Footnote 1: Versions of these techniques actually exist for fitting surfaces to range data. Their effectiveness, however, depends in part on the accuracy of an initial set of robust fits.)
Finally, parallel surfaces model transparent or semi-transparent surfaces, where a background
surface appears through breaks in the foreground surface, and data from the background are
pseudo outliers to the foreground.
A final introductory comment is important to assist in reading this paper. The paper defines
the notion of "pseudo outlier bias" using techniques common in mathematical statistics
but not in computer vision, most importantly, the "functional form" of a robust estimator.
The intuitive meaning of functional forms and their use in pseudo outlier bias are discussed
at the start of Section 4, which then proceeds with the main derivations. Readers uninterested
in the mathematical details should be able to skip Sections 4.2 through 4.6 and still
follow the analysis results.
2 Robust Estimators
This section defines the robust estimators studied. These definitions are converted to functional
forms suitable for analysis in Section 4. Because the goal of the paper is to expose
inherent limitations of robust estimators, the focus in defining the estimators is their objective
functions rather than their optimization techniques. Special cases of iterative optimization
techniques where local minima are potentially problematic will be discussed where
appropriate.
The data are (~x_i, z_i), i = 1, ..., N, where ~x_i is an image coordinate vector - the independent
variable(s) - and z_i is a range value - the dependent variable. Each fit is a function z = θ(~x), often
restricted to the class of linear or quadratic polynomials. The notation θ̂(~x) indicates the
fit that minimizes an estimator's objective function, with θ̂ called the "estimate". Each
estimator's objective function evaluates hypothesized fits, θ(~x), via the residuals, r_{i,θ} = z_i − θ(~x_i).
2.1 M-Estimators
A regression M-estimate [12, Chapter 7] is
θ̂ = arg min_θ Σ_i ρ(r_{i,θ} / σ̂),   (1)
where σ̂ is an estimate of the true scale (noise) term, σ, and ρ(u) is a robust "loss" function
which grows subquadratically for large juj to reduce the effect of outliers. (Often, as discussed
below, θ̂ and σ̂ are estimated jointly.)
the behavior of one estimator of each type is studied. Monotone M-estimators
(Figure 2a), such as Huber's [12, Chapter 7], have non-decreasing, bounded /(u) functions.
Hard redescenders (Figure 2b), such as Hampel's [9] [10, page 150], force
hence c is a rejection point, beyond which a residual has no influence. Soft redescenders
(Figure 2c), such as the maximum likelihood estimator of Student's t-distribution [5], do not
have a finite rejection point, but force ψ(u) → 0 as |u| → ∞. The three robust loss functions
are shown in Figure 2 and in order they are
(2)
ae h
and
ae s
The ae functions' constants are usually set to optimize asymptotic efficiency relative to a
given target distribution [11] (e.g. Gaussian residuals).
M-estimators typically minimize
using iterative techniques [11] [12, Chapter
7]. The objective functions of hard and soft redescending M-estimators are non-convex and
may have multiple local minima.
In general, σ̂ must be estimated from the data. Hard-redescending M-estimators often use the median absolute deviation (MAD) [11] computed from the residuals to an initial fit, scaled for consistency at the normal distribution and at Student's t-distribution (when ν = 1.5).

Figure 2: ρ(u) and ψ(u) functions for three M-estimators: (a) monotone, (b) hard redescending, (c) soft redescending.

Other M-estimators jointly estimate σ̂ and φ̂ as

  (φ̂, σ̂) = argmin_{φ,σ} Σ_i ρ(r_{i,φ}, σ).

In particular, Huber [12, Chapter 7] uses

  ρ(r, σ) = (ρ_m(r/σ) + a) σ,   (7)

where ρ_m(r/σ) is from equation 2 and a is a tuning parameter; Mirza and Boyer [5] use

  ρ(r, σ) = ρ_s(r/σ) + log σ,   (8)

where ρ_s(r/σ) is from equation 4.
When fitting surfaces to range data, a different option for obtaining σ̂ is often used [3]. If σ depends only on the properties of the sensor then σ̂ may be estimated once and fixed for all data sets. Theoretically, when σ̂ is fixed, the M-estimators described by equation 1 are no longer true M-estimators since they are not scale equivariant [10, page 259]. To reflect this, when σ̂ is fixed a priori, they are called "fixed-scale M-estimators." Both standard M-estimators and fixed-scale M-estimators are studied here.
2.2 Fixed-Band Techniques: Hough Transforms and RANSAC
Hough transforms [13], RANSAC [4, 7], and Roth's primitive extractor [20] are examples of "fixed-band" techniques [20]. For these techniques, φ̂ is the fit maximizing the number of points within φ ± r_b, where r_b is an inlier bound which generally depends on σ̂ (i.e. r_b = cσ̂ for some constant c). Equivalently, viewing fixed-band techniques as minimizing the number of outliers, they become a special case of fixed-scale M-estimators with a simple, discontinuous loss function

  ρ_f(u) = 0,  |u| ≤ c,
           1,  |u| > c.   (9)

Fixed-band techniques search for φ̂ using either random sampling or voting techniques.
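As a concrete illustration of the random-sampling search, the sketch below instantiates candidate lines from point pairs and keeps the one with the most points inside the band φ ± r_b. The two-point instantiation, trial count, and helper names are illustrative assumptions, not the cited algorithms [4, 7, 20].

```python
import random

def fixed_band_fit(points, r_b, trials=500, seed=0):
    """Random-sampling search for the line with the most points within +/- r_b."""
    rng = random.Random(seed)
    best, best_inliers = None, -1
    for _ in range(trials):
        (x1, z1), (x2, z2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # degenerate pair; cannot define a line z = m*x + b
        m = (z2 - z1) / (x2 - x1)
        b = z1 - m * x1
        inliers = sum(1 for x, z in points if abs(z - (m * x + b)) <= r_b)
        if inliers > best_inliers:
            best, best_inliers = (m, b), inliers
    return best, best_inliers

# Ten exact points on z = x plus five gross outliers far from any shared band.
pts = [(float(i), float(i)) for i in range(10)]
pts += [(0.0, 50.0), (1.0, -40.0), (2.0, 70.0), (3.0, -60.0), (4.0, 90.0)]
(m, b), count = fixed_band_fit(pts, r_b=0.5)
```

Any trial that happens to sample two inliers recovers the true line exactly, so with enough trials the band around z = x captures all ten inliers.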
2.3 LMS and LTS
Least median of squares (LMS), introduced by Rousseeuw [21], finds the fit minimizing the median of squared residuals. (See [16] for a review.) Specifically, the LMS estimate is

  φ̂ = argmin_φ { median_i r²_{i,φ} }.

Most implementations of LMS use random sampling techniques to find an approximate minimum.
Related to LMS and also introduced by Rousseeuw [21] is the least trimmed squares estimator (LTS). The LTS estimate is

  φ̂ = argmin_φ Σ_{i=1}^{n/2} (r²)_{i:n,φ},

where the (r²)_{i:n,φ} are the (non-decreasing) ordered squared residuals of fit φ. Usually LTS implementations also use random sampling.
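The random-sampling strategy for LMS can be sketched as follows; the two-point line instantiation, trial count, and names are illustrative assumptions, not Rousseeuw's published algorithm.

```python
import random

def lms_fit(points, trials=500, seed=0):
    """Approximate least-median-of-squares line fit by random sampling."""
    rng = random.Random(seed)
    best, best_med = None, float("inf")
    for _ in range(trials):
        (x1, z1), (x2, z2) = rng.sample(points, 2)
        if x1 == x2:
            continue
        m = (z2 - z1) / (x2 - x1)
        b = z1 - m * x1
        sq = sorted((z - (m * x + b)) ** 2 for x, z in points)
        med = sq[len(sq) // 2]  # median of the squared residuals
        if med < best_med:
            best, best_med = (m, b), med
    return best, best_med

# Eleven exact points on z = 3x - 2 plus four gross outliers.
pts = [(float(i), 3.0 * i - 2.0) for i in range(11)]
pts += [(1.0, 40.0), (3.0, -35.0), (6.0, 55.0), (8.0, -45.0)]
(m, b), med = lms_fit(pts)
```

Since more than half the points lie exactly on z = 3x − 2, any trial sampling two inliers drives the median squared residual to zero, which no outlier-contaminated line can match.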
2.4 MINPRAN
MINPRAN searches for the fit minimizing the probability that a fit and a collection of inliers to the fit could be due to gross outliers [24, 26]. It is derived by assuming that relative to any hypothesized fit φ(x) the residuals of gross outliers are uniformly distributed² in the range ±Z₀. Based on this assumption, the probability that a particular gross outlier could be within φ(x_i) ± r is r/Z₀. Furthermore, if all n points are gross outliers, the probability that k or more of them could be within φ(x) ± r is

  F(r, k, n) = Σ_{j=k}^{n} C(n, j) (r/Z₀)^j (1 − r/Z₀)^{n−j}.

Given n data points containing an unknown number of gross outliers, MINPRAN evaluates hypothesized fits φ(x) by finding the inlier bound, r, and the associated number of points (inliers), k_{r,φ}, within ±r of φ(x), minimizing the probability that the inliers could actually be gross outliers. Thus MINPRAN's objective function in evaluating a particular fit is

  min_r F(r, k_{r,φ}, n),

and MINPRAN's estimate is

  φ̂ = argmin_φ [ min_r F(r, k_{r,φ}, n) ].

MINPRAN is implemented using random sampling techniques (see [26]).
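The core probability computation can be sketched directly from the definitions above; here the binomial tail F(r, k, n) is evaluated exactly rather than through the incomplete beta function the paper uses later, and all names are illustrative.

```python
from math import comb

def outlier_tail(r, k, n, z0):
    """P(k or more of n uniform outliers on +/-Z0 fall within +/-r of a fit)."""
    p = r / z0
    return sum(comb(n, j) * p**j * (1 - p) ** (n - j) for j in range(k, n + 1))

def minpran_objective(residuals, z0):
    """Minimize the tail probability over candidate inlier bounds r."""
    abs_r = sorted(abs(r) for r in residuals)
    n = len(abs_r)
    # Each sorted absolute residual is a candidate bound r with k = rank + 1 inliers.
    return min(outlier_tail(r, k + 1, n, z0)
               for k, r in enumerate(abs_r) if r > 0)

# Tightly clustered residuals (a good fit) vs. uniformly spread ones (a bad fit).
good = [0.1 * i for i in range(1, 21)]
bad = [0.5 * i for i in range(1, 21)]
good_obj = minpran_objective(good, z0=10.0)
bad_obj = minpran_objective(bad, z0=10.0)
```

A fit whose residuals bunch near zero yields a tail probability many orders of magnitude smaller than a fit whose residuals look like the uniform outlier background, which is exactly the separation MINPRAN exploits.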
3 Modeling Discontinuities
The important first step in developing the pseudo outlier bias analysis technique is to model
the data taken from near a discontinuity as a probability distribution. Attention here is
restricted to discontinuities in one-dimensional structures, since this will be sufficient to
demonstrate the limitations of robust estimators.
3.1 Outlier Distributions
To set the context for developing the distributions modeling discontinuities, consider the one-dimensional, outlier-corrupted distributions used in the statistics literature to study robust location estimators [10, page 97] [12, page 11]:

  F = (1 − ε) F₁ + ε G.

2 MINPRAN has been generalized to any known outlier distribution [26].
Figure 3: Example data set for points near a step discontinuity.
Here, F₁ is an inlier distribution (also called a "target distribution"), such as a unit variance Gaussian, and G is an outlier distribution, such as a large variance Gaussian or a uniform distribution over a large interval. The parameter ε is the outlier proportion. A set A of N points sampled from this distribution will contain on average εN outliers. Robust location estimators are analyzed using distribution F rather than using a series of point sets sampled from F.
3.2 Mixture Distributions Modeling Discontinuities
The present paper analyzes robust regression estimators by examining their behavior on distributions modeling discontinuities. These mixture distributions [27] will be of the form

  H = ε_s (1 − ε_g) H₁ + (1 − ε_s)(1 − ε_g) H₂ + ε_g H₃,   (14)

where H₁, H₂ and H₃ will be inlier, pseudo outlier and gross outlier distributions, respectively, and ε_s and ε_g control the proportion of points drawn from the three distributions.
To define H₁, H₂ and H₃ and to set ε_s and ε_g, consider a set S of data points taken from the vicinity of a discontinuity. For example, S might be the points in Figure 3 whose x coordinate falls in the interval [x₀, x₁]. S is modeled as a two-dimensional distribution of points (x, z) with x values in the interval [x₀, x₁]. Assume, without losing generality, that more points are from the left side of the discontinuity location x_d than the right. (Using a two-dimensional distribution could be counterintuitive since the x values, which may be thought of as image positions at which depth measurements are made, are usually fixed.) Here, x is treated as uniform in the interval [x₀, x_d] for inliers, modeling the uniform spacing of image positions.³ The depth measurement for an inlier is z = β₁(x) + e, where e is independent noise controlled by the Gaussian density g(e; σ²) with mean 0 and variance σ². β₁(x) models the ideal curve from which the inliers are measured. The pseudo outlier distribution, H₂, can be defined similarly, with x values uniform in [x_d, x₁] and depth z = β₂(x) + e. Thus, for both distributions H₁ and H₂, the densities of x and z can be combined to give the joint density

  h_i(x, z) = g(z − β_i(x); σ²) / (b_i − a_i),  a_i ≤ x ≤ b_i;
              0, otherwise,   (15)

where a_i and b_i bound the uniform distribution on the x interval.
For the distribution of gross outliers in S, again x values are uniformly distributed, but this time over the entire interval [x₀, x₁], and z values are governed by density g_o(z), which will be uniform over a large range. This gives the joint density for a gross outlier:

  h₃(x, z) = g_o(z) / (x₁ − x₀),  x₀ ≤ x ≤ x₁;
             0, otherwise.
The mixture proportions ε_s and ε_g in (14) are easily specified. ε_g is just the fraction of gross outliers. ε_s is the "relative fraction" of inliers, i.e. the fraction of points that are not gross outliers and that are from the inlier side of the discontinuity. Assuming the density of x values does not change across the discontinuity, ε_s is determined by x_d:

  ε_s = (x_d − x₀) / (x₁ − x₀).

Equivalently, x_d marks the boundary between inliers and pseudo outliers. Notice that the "actual fraction" of inliers is ε_s (1 − ε_g). Depending on which estimator is being analyzed, either the relative or the actual fraction or both will be important.
3 For any point set sampled from this distribution, the x values will not be uniformly spaced, in general, but their expected values are. This expected behavior is captured when using the distribution itself in the analysis rather than point sets sampled from the distribution.
Using these mixture proportions, the above densities can be combined into a single, mixed, two-dimensional density:

  h(x, z) = ε_s (1 − ε_g) h₁(x, z) + (1 − ε_s)(1 − ε_g) h₂(x, z) + ε_g h₃(x, z).

Observe that the "target density" is just h₁(x, z) and the "target distribution" is H₁(x, z). The mixture distribution H(x, z) and the target distribution H₁(x, z) can be calculated from h(x, z) and h₁(x, z) respectively.
Using mixture density h(x, z), data can be generated to form step edges and crease edges. The appropriate model is determined by the two curve functions β₁ and β₂. For example, a step edge of height Δz is modeled by setting β₁(x) = 0 and β₂(x) = Δz. A crease edge is modeled when β₁ and β₂ are linear functions that meet at the discontinuity. Parallel lines with overlapping x domains can be created by using β₁ and β₂ from step edges, but setting the x intervals of H₁ and H₂ to coincide, with ε_s giving the proportion of points from the lower line. In this case, the mixture proportions are divorced from the location of the discontinuity, which has no meaning. Thus, all three desired discontinuities can be modeled.
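To make the mixture model concrete, the following sketch draws samples from h(x, z) for a step edge. The parameter names mirror the text (ε_s, ε_g, Δz, σ, z₀, x_d), but the sampler itself is an illustration; the paper's analysis uses the distribution directly rather than sampled point sets.

```python
import random

def sample_step_edge(n, eps_s, eps_g, dz, sigma, z0, x_d, seed=0):
    """Draw n points (x, z) from the step-edge mixture density h(x, z)."""
    rng = random.Random(seed)
    pts = []
    for _ in range(n):
        u = rng.random()
        if u < eps_g:
            # Gross outlier: x uniform on [0, 1], z uniform on dz/2 +/- z0/2.
            pts.append((rng.random(), dz / 2 + rng.uniform(-z0 / 2, z0 / 2)))
        elif u < eps_g + (1 - eps_g) * eps_s:
            # Inlier from beta1(x) = 0, x uniform on [0, x_d].
            pts.append((rng.uniform(0.0, x_d), rng.gauss(0.0, sigma)))
        else:
            # Pseudo outlier from beta2(x) = dz, x uniform on [x_d, 1].
            pts.append((rng.uniform(x_d, 1.0), dz + rng.gauss(0.0, sigma)))
    return pts

pts = sample_step_edge(2000, eps_s=0.7, eps_g=0.1, dz=5.0,
                       sigma=0.1, z0=20.0, x_d=0.7)
```

Roughly ε_s(1 − ε_g) = 63% of the sampled depths cluster near the lower surface and (1 − ε_s)(1 − ε_g) = 27% near the upper surface, with the remaining 10% spread as gross outliers.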
4 Functional Forms and Mixture Models
To analyze estimators on distributions H, each estimator must be rewritten as a functional,
a mapping from the space of probability distributions to the space of possible estimates.
This section derives functional forms of the robust estimators defined in Section 2. It
starts, in Section 4.1, by giving intuitive insight. Then, Section 4.2 introduces functional
forms and empirical distributions on a technical level, using univariate least-squares location
estimates as an example. Next, Section 4.3 derives several important distributions needed in
the functionals. The remaining sections derive the required functionals. Readers uninterested
in the technical details should read only Section 4.1 and then skip ahead to Section 5.
4.1 Intuition
To illustrate what it means for a functional T to be applied to a distribution H, consider least-squares regression. When applied to a set containing points {(x_i, z_i)}, the least-squares objective function is (1/n) Σ_i r²_{i,φ}, which is proportional to the second moment of the residuals conditioned on φ, and the least squares estimate is the fit φ̂ minimizing this conditional second moment. A similar second moment, conditioned on φ, may be calculated for distribution H(x, z), and the fit φ̂ minimizing this conditional second moment may be found. This is the least-squares regression functional. The functional form of an M-estimator, by analogy, returns the fit minimizing a robust version of the second moment of the conditional residual distribution calculated from H. Intuitions about the functional forms of other estimators are similar.
The estimate T (H) can be used to represent or characterize the estimator's performance
on point sets sampled from H. Although the robust fit to any particular point set may differ
from T (H), if T (H) is skewed by the pseudo and gross outliers, then the fit to the point
set will likely be skewed as well. Indeed, when an estimator's minimization technique is an
iterative search, the skew may be worse than that of T (H) because it may stop at a local
minimum.
4.2 One-Dimensional Location Estimators
To introduce functional forms on a more technical level, this section examines the least-squares location estimate for univariate data. For a finite sample {x₁, ..., x_n}, the location estimate is

  T_n = argmin_t (1/n) Σ_i (x_i − t)² = (1/n) Σ_i x_i,

which is the sample mean or expected value. The functional form of this is the location estimate of the distribution F from which the x_i's are drawn:

  T(F) = argmin_t ∫ (x − t)² dF(x) = ∫ x dF(x) = μ,   (20)

the population mean or expected value.
The functional form of the location estimate is derived from the sample location estimate by writing the latter in terms of the "empirical distribution" of the data, denoted by F_n, and then replacing F_n with F, the actual distribution. The empirical density of {x₁, ..., x_n} is

  f_n(x) = (1/n) Σ_i δ(x − x_i),

where δ(·) is the Dirac delta function, and the empirical distribution is

  F_n(x) = (1/n) Σ_i u(x − x_i),

where u(·) is the unit step function. When the x_i's are independent and identically distributed, F_n converges to F as n → ∞. The least squares location estimate is written in terms of the empirical density by using the sifting property of the delta function [8, page 56]:

  T_n = argmin_t (1/n) Σ_i (x_i − t)² = argmin_t ∫ (x − t)² f_n(x) dx.

Replacing f_n with the population density f yields the functional form of the location estimate as desired (20).
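A quick numerical check of this correspondence (an illustrative grid search, not part of the paper): minimizing the empirical least-squares objective over candidate locations recovers the sample mean.

```python
def least_squares_location(xs, grid):
    """Minimize the empirical objective (1/n) * sum_i (x_i - t)^2 over candidates t."""
    return min(grid, key=lambda t: sum((x - t) ** 2 for x in xs))

xs = [1.0, 2.0, 4.0, 9.0]
grid = [i / 100.0 for i in range(1001)]  # candidate locations t on [0, 10]
t_hat = least_squares_location(xs, grid)
```

The objective is convex in t with its unique minimum at the mean, so the grid search lands exactly on sum(xs)/len(xs) = 4.0.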
4.3 Residual Distributions and Empirical Distributions
Before deriving functional forms for the robust regression estimators, the mixture distribution H(x, z) must be rewritten in terms of the distribution of residuals relative to a hypothesized fit, φ. This is because the estimators' objective functions depend directly on residuals r_{i,φ} and only indirectly on points (x, z). In addition, several empirical versions of this "residual distribution" are needed.
Two different residual distributions are required: one for signed residuals and one for their absolute values. Let the distribution and density of signed residuals be F^s(r|φ, H) and f^s(r|φ, H) (including H in the notation to make explicit the dependence on the mixture distribution). These are easily seen to be (Figure 4a)

  F^s(r|φ, H) = ∫_{x₀}^{x₁} ∫_{−∞}^{φ(x)+r} h(x, z) dz dx

Figure 4: The cumulative distribution of residual r relative to fit φ(x) is the integral of the point densities, h₁ and h₂, from the curves and from the gross outlier density, h₃, over the region bounded above by φ(x) + r, bounded on the sides by the x interval, and (a) unbounded below for signed residuals or (b) bounded below by φ(x) − r for absolute residuals. Both figures show the region of integration for functions β_i and x boundaries modeling a step edge.

and

  f^s(r|φ, H) = d F^s(r|φ, H) / dr.

Let the distribution and density of absolute residuals be F^a(r|φ, H) and f^a(r|φ, H), where r ≥ 0. These are (Figure 4b)

  F^a(r|φ, H) = ∫_{x₀}^{x₁} ∫_{φ(x)−r}^{φ(x)+r} h(x, z) dz dx

and

  f^a(r|φ, H) = d F^a(r|φ, H) / dr.

Appendix A evaluates these integrals. Replacing h with h₁ in the above equations yields the residual distributions and densities for the target (inlier) distribution.
Several empirical distributions are needed below. First, given n points {(x_i, z_i)} sampled from h(x, z), the empirical density of the data is simply

  h_n(x, z) = (1/n) Σ_i δ(x − x_i) δ(z − z_i)

(h_n should not be confused with h_i from equation 15). Next, the empirical density of the signed residuals follows from h_n(x, z) using the sifting property of the δ function [8, page 56]:

  f^s_n(r|φ, H_n) = (1/n) Σ_i δ(r − r_{i,φ}).

Finally, the empirical distribution of the absolute residuals is

  F^a_n(r|φ, H_n) = ∫_{−r}^{r} f^s_n(r'|φ, H_n) dr'.   (26)
4.4 M-Estimators and Fixed-Band Techniques
The functionals for the robust regression estimators can now be derived, starting with that of fixed-scale M-estimators. The first step is to write equation 1 in a slightly modified form, which does not change the estimate:

  φ̂ = argmin_φ (1/n) Σ_i ρ(r_{i,φ} / σ̂).

Next, writing this in terms of the empirical distribution produces

  φ̂ = argmin_φ ∫∫ ρ((z − φ(x)) / σ̂) h_n(x, z) dz dx.

Replacing the empirical density h_n(x, z) with the mixture density h(x, z) yields

  T_ρ(H) = argmin_φ ∫∫ ρ((z − φ(x)) / σ̂) h(x, z) dz dx.

The change of variables r = z − φ(x) simplifies things further,

  T_ρ(H) = argmin_φ ∫ ρ(r / σ̂) f^s(r|φ, H) dr.   (27)

This is the fixed-scale M-estimator functional. Substituting equations 2, 3 and 4 gives T_{ρ_m}(H), T_{ρ_h}(H) and T_{ρ_s}(H) respectively for the M-estimators studied here.
For the M-estimators that jointly estimate φ̂ and σ̂ (see equations 7 and 8), the functional is obtained by replacing ρ(r/σ̂) with ρ(r, σ) in equation 27, producing

  T_{ρ,s}(H) = argmin_{φ,σ} ∫ ρ(r, σ) f^s(r|φ, H) dr.

Finally, recalling that fixed-band techniques are special cases of fixed-scale M-estimators, their functional is obtained by substituting equation 9 into equation 27, yielding

  T_b(H) = argmin_φ [1 − F^a(r_b|φ, H)].

Observe that [1 − F^a(r_b|φ, H)] is the expected fraction of outliers.
4.5 LMS and LTS
Deriving the functional equivalent to LMS requires first deriving the cumulative distribution of the squared residuals and then writing the median in terms of the inverse of this distribution. Defining the empirical distribution of squared residuals is straightforward:

  F^y_n(y|φ, H_n) = F^a_n(√y|φ, H_n),

since it is simply the percentage of points whose absolute residuals relative to fit φ are less than √y. Now,

  median_i r²_{i,φ} = (F^y_n)^{−1}(1/2|φ, H_n).   (30)

In other words, the median is the inverse of the cumulative, evaluated at 1/2.⁴ This is the standard functional form of the median [10, page 89]. Substituting equation 30 into the LMS objective and replacing the empirical distribution F^y_n with F^y produces the LMS functional

  T_L(H) = argmin_φ (F^y)^{−1}(1/2|φ, H).

4 When LMS is implemented using random sampling where p points are chosen to instantiate a fit, the median residual is taken from among the remaining points. To reflect this, the 1/2 in equation 30 could be adjusted to account for the p sampled points.

Turning now to LTS, normalizing its objective function and writing it in terms of the empirical density of residuals yields

  (1/n) Σ_{i=1}^{n/2} (r²)_{i:n,φ} = ∫_0^{r_m} y f^y_n(y|φ, H_n) dy,

where r_m = (F^y_n)^{−1}(1/2|φ, H_n) is the empirical median square residual. The functional form of LTS then is easily written as

  T_T(H) = argmin_φ ∫_0^{(F^y)^{−1}(1/2|φ,H)} y f^y(y|φ, H) dy.
4.6 MINPRAN
MINPRAN's functional is derived by first re-writing MINPRAN's objective function, replacing the binomial distribution with the incomplete beta function [19, page 229]:

  min_r F(r, k_{r,φ}, n) = min_r I(k_{r,φ}, n − k_{r,φ} + 1, r/Z₀),

where

  I(v, w, p) = [Γ(v + w) / (Γ(v)Γ(w))] ∫_0^p t^{v−1} (1 − t)^{w−1} dt

and Γ(·) is the gamma function. This is done because I(v, w, p) only requires real (not necessarily integer) v and w, while the binomial distribution requires integer values for k_{r,φ} and n. Now, since

  F^a_n(r|φ, H_n) = k_{r,φ} / n

is the empirical distribution of the absolute residuals (see equation 26), k_{r,φ} = n · F^a_n(r|φ, H_n), and MINPRAN's objective function can be re-written equivalently as

  min_r I(n · F^a_n(r|φ, H_n), n(1 − F^a_n(r|φ, H_n)) + 1, r/Z₀).

Replacing F^a_n by F^a and substituting equation 13 gives the functional

  T_M(H) = argmin_φ min_r I(n · F^a(r|φ, H), n(1 − F^a(r|φ, H)) + 1, r/Z₀).

Observe that n, the number of points, is still required here, but T_M(H) is considered a functional [10, page 40].
5 Pseudo Outlier Bias
Now that the functional forms of the robust estimators have been derived, the pseudo outlier bias metric can be defined. Given a particular mixture distribution H(x, z) and target distribution H₁(x, z), let T(H) and T(H₁) be an estimator's fits to the two distributions. These fits are assumed to minimize the estimator's objective functional globally. Then, pseudo outlier bias is defined as the normalized L₂ distance between the fits:

  [ (1/(x₁ − x₀)) ∫_{x₀}^{x₁} (T(H)(x) − T(H₁)(x))² / σ² dx ]^{1/2}.   (34)

As is easily shown, this metric is invariant to translation and independent scaling of both x and z. (For fixed-scale M-estimators, σ̂, which is provided a priori, must be scaled as well. For MINPRAN, the outlier distribution must be scaled appropriately.)
When the set of the possible curves φ(x) includes β₁(x), it can be shown that for each of the functionals derived in Section 4, T(H₁) = β₁. In other words, the estimator's objective function is minimized by β₁.⁵ When T(H₁) = β₁, the pseudo outlier bias metric becomes

  [ (1/(x₁ − x₀)) ∫_{x₀}^{x₁} (T(H)(x) − β₁(x))² / σ² dx ]^{1/2}.   (35)

Intuitively, pseudo outlier bias measures the L₂ norm distance between the two estimates, T(H) and T(H₁), normalized by the length of the x interval over which H(x, z) is non-zero and by the standard deviation of the noise in the z values. Since T(H₁) = β₁ for the cases studied here, a metric value of 0 implies that T is not at all corrupted by the presence of either gross or pseudo outliers, and a metric value of 1 implies that on average over the x domain T(H) is one standard deviation away from β₁.

5 In the analysis results given in Section 6, the set of curves will be linear functions, and β₁ and β₂ will also be linear. These curves are continuous and have infinite extent in x, unlike the densities modeling data drawn from them.
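For linear fits, the metric of equation 35 can be computed by direct numerical integration. The sketch below (illustrative names, midpoint rule) makes the normalization explicit.

```python
def pseudo_outlier_bias(fit, beta1, sigma, x0=0.0, x1=1.0, steps=1000):
    """Equation 35 for lines given as (slope, intercept): the RMS gap between
    T(H) = fit and the target fit beta1 over [x0, x1], in units of sigma."""
    dx = (x1 - x0) / steps
    total = 0.0
    for i in range(steps):
        x = x0 + (i + 0.5) * dx  # midpoint rule
        d = (fit[0] * x + fit[1]) - (beta1[0] * x + beta1[1])
        total += d * d * dx
    return (total / ((x1 - x0) * sigma ** 2)) ** 0.5
```

Identical fits give a bias of 0; a fit offset from β₁ by a constant c gives a bias of exactly c/σ, matching the interpretation that a value of 1 means the fit sits one noise standard deviation away on average.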
Figure 5: Parameters controlling the curve models for step edges (a), crease edges (b), and parallel lines (c). In each case, β₁(x) is the desired correct fit and points from β₂(x) are pseudo outliers. Δz/σ is the scaled discontinuity magnitude, and ε_s controls the percentage of points from β₁(x).
6 Bias Caused by Surface Discontinuities
Pseudo outlier bias (or "bias" for short) can now be used to analyze robust estimators' accuracy in fitting surfaces to data from three different types of discontinuities: step edges, crease edges, and parallel lines with overlapping x domains. To do this, Section 6.1 parameterizes the mixture density, outlines the technique to find T(H), and discusses the relationship between results presented here and results for higher dimensions. Then, analysis results for specific estimators are presented: fixed-scale M-estimators and fixed-band techniques (Section 6.2), which require a prior estimate of σ̂; standard M-estimators (Section 6.3), which estimate σ̂; and LMS, LTS and MINPRAN (Section 6.4), which are independent of σ̂. In each case, the bias is examined as both the discontinuity magnitude and the mixture of inliers, pseudo outliers and gross outliers vary.
Figure 6: Surface plot of the objective functional of T_{ρ_h}(H), i.e. ∫ ρ_h(r/σ̂) f^s(r|φ, H) dr, for the hard redescending, fixed-scale M-estimator on a step edge, when fits have the form φ(x) = mx + b. (The plot shows the negation of the objective functional, so local minima in the functional appear as local maxima in the plot.) There are three local optima: one at the fit to the lower surface, the second at the fit to the upper surface, and the third at a heavily biased fit. The biased fit is the global optimum.
6.1 Discontinuity Models and Search
Figure 5 shows the models of step edges, crease edges, and parallel lines. The translation and scale invariance of both the estimators and pseudo outlier bias, along with several realistic assumptions, allow these discontinuities to be described with just a few parameters. (Refer back to Section 3 for the exact parameter definitions.) For all models, the x interval is [0, 1]. For step edges, β₁(x) = 0 and β₂(x) = Δz (retaining the σ parameter in the height Δz/σ to make clear the scale invariance), and x_d determines ε_s. To move from step to crease edges, only the curves β₁(x) and β₂(x) must be changed. Referring to Figure 5b, these functions are linear and meet at the discontinuity; the crease angle plays no explicit role because it is not scale invariant. For parallel lines (Figure 5c), β₁(x) and β₂(x) are the same as for step edges, the x intervals of H₁ and H₂ coincide, and the parameter x_d plays no role. Finally, the outlier distribution g_o(z) is uniform for z within ±z₀/2 of Δz/2 and 0 otherwise.
The foregoing shows that the parameters ε_s, ε_g, Δz/σ, and z₀ completely specify a two-surface discontinuity model, the resulting mixture density, h(x, z), and therefore, the distribution, H(x, z). Hence, after specifying the class of functions (linear, here) for hypothesized fits, a given robust estimator's pseudo outlier bias can be calculated as a function of these parameters. This calculation requires an iterative, numerical search to find T(H), and may require several starting points to avoid local minima. (See Figure 6 for an example plot of T_{ρ_h}'s objective functional.) Thus, for a particular type of discontinuity and for a particular robust estimator, the parameters may be varied to study their effect on the estimator's pseudo outlier bias, thereby characterizing how accurately the estimator can fit surfaces near discontinuities.
As a final observation, although the results are presented for one-dimensional image domains, they have immediate extension to two dimensions. For example, a two-dimensional analog of the step edge presented here is β₁(x, y) = 0 and β₂(x, y) = Δz, with the discontinuity along a line in the image domain. It is straightforward to show that this model results in exactly the same pseudo outlier bias as a one-dimensional step model having the same mixture parameters and gross outlier distribution. Similar results are obtained for natural extensions of the crease edge and parallel lines models. Thus, one-dimensional discontinuities are sufficient to establish limitations in the effectiveness of robust estimators.
6.2 Fixed-Scale M-Estimators and Fixed-Band Techniques
The first analysis results are for fixed-band techniques and fixed-scale M-estimators. These techniques represent an ideal case where the noise parameter σ̂ = σ is known and fixed in advance. Figure 7 shows the bias of fixed-band techniques (T_b) and three fixed-scale M-estimators (T_{ρ_m}, T_{ρ_h} and T_{ρ_s}) as a function of Δz/σ when ε_s = 0.8. The bias of the least-squares estimator, calculated by substituting ρ(u) = u² into equation 27, is included for comparison. The ρ function tuning parameter values are taken directly from the literature [10, page 167]. Interestingly, the proportion of gross outliers, ε_g, has no effect on the results. This is because the fraction of the outlier distribution within r of a fit is the same for all fits φ and for all r, except when φ(x) ± r is extreme enough to cross outside the bounds of the gross outlier distribution.
The sharp drops in bias shown in Figure 7 (a) and (b) for fixed-band techniques and the hard redescending M-estimator (and to some extent for the soft redescending M-estimator in (b)) correspond to T(H) shifting from the local minimum associated with a heavily biased fit to the local minimum near β₁(x), the optimum fit to the target distribution. Plotting the step height at which this drop occurs as a function of ε_s gives a good summary of these estimators' bias on step edges. Figure 8 does this, referring to this height as the "small bias height" and quantifying it as the step height at which the bias drops below 1.0.
The plots in Figures 7 and 8(a) show that fixed-band techniques and fixed-scale M-estimators are biased nearly as much as least-squares for significant step edge and crease edge discontinuity magnitudes. The estimators fare much better on parallel lines (Figure 7(e) and (f)); apparently, asymmetric positioning of pseudo outliers causes the most bias. To give an intuitive feel for the significance of the bias, Figure 9 shows step edge data generated using model parameters for which the robust estimators are strongly biased.
Overall, the hard redescending, fixed-scale M-estimator is the least biased of the techniques
studied thus far. Compared to other fixed-scale M-estimators, its finite rejection
point - the point at which outliers no longer influence the fit - makes it less biased by
pseudo outliers than monotone and soft redescending fixed-scale M-estimators. On the other
hand, it is less biased than fixed-band techniques because it retains the statistical efficiency
of least-squares for small residuals.
The hard redescending, fixed-scale M-estimator can be made less biased by reducing the values of its tuning parameters, as shown in Figure 8(b), effectively narrowing ρ_h and reducing its finite rejection point. (The parameter set a, b, c = 1.0, 1.0, 2.0 comes from [2]; the set a, b, c = 1.0, 2.0, 3.0 was chosen as an intermediate set of values.) Using small parameter values has two disadvantages, however: the optimum statistical efficiency of the standard parameters is lost, giving less accurate fits to the target distribution, and some good data may be rejected as outliers. Despite these disadvantages, lower tuning parameters should be used since avoiding heavily biased fits is the most important objective.
Finally, in practice, the non-convex objective functions of hard and soft redescending
Figure 7: Bias of fixed-band techniques, fixed-scale M-estimators (monotone, hard, soft) and least-squares on step edges, (a) and (b), crease edges, (c) and (d), and parallel lines, (e) and (f), as a function of height when ε_s = 0.8. The horizontal axis is the relative discontinuity magnitude (height), Δz/σ, and the vertical axis is the bias (see equation 35). Plots not shown in (a) are essentially equivalent to the least-squares plots.
Figure 8: Small bias cut-off heights as a function of ε_s, the relative fraction of points on the lower half of the step. Plots in (a) show the heights for fixed-band techniques and two fixed-scale M-estimators. Plots in (b) show the heights for different tuning parameters (a, b, c = 1.31, 2.04, 4.0; 1.0, 2.0, 3.0; and 1.0, 1.0, 2.0) of the hard redescending fixed-scale M-estimator. Heights not plotted for small ε_s are above the plotted range. When height is not plotted for large ε_s, bias is never greater than 1.0.
fixed-scale M-estimators can lead to more biased results than indicated here. Iterative search
techniques, especially when started from a non-robust fit, may stop at a local minimum
corresponding to a biased fit when the fit to the target distribution is the global minimum
of the objective function. Therefore, to avoid local minima, fixed-scale M-estimators should
use either a random sampling search technique or a Hough transform.
6.3 M-Estimators
Next, consider standard M-estimators, which estimate σ̂ from the data. To calculate T(H) for the monotone and soft redescending M-estimators, simply calculate φ̂ and σ̂ jointly for any mixture distribution using equation 7 or 8 as the objective functional. For the hard redescending M-estimator, which estimates σ̂ from an initial fit, the optimum fit to the mixture distribution is found in three stages: first find the optimum LMS fit, then calculate the median absolute deviation (MAD) [10, page 107] to this fit, scaling it to estimate σ̂, and finally calculate T_{ρ_h}(H) with σ̂ fixed. Two different scale factors for estimating σ̂ are considered: the first, 1.4826, ensures consistency at the normal distribution; the second, 1.14601, ensures consistency at Student's t-distribution (with ν = 1.5). Using the latter allows accurate comparison between the hard and soft redescending M-estimators since the
Figure 9: Example step edge data generated with model parameters for which the objective function of each robust estimator (except LTS) is minimized by a biased fit. The example fit shown is φ̂(x) for the hard redescending, fixed-scale M-estimator.
latter is the maximum likelihood estimate for Student's t distribution [5].
Figure 10 shows bias plots for the soft redescending M-estimator and for the hard redescending M-estimator using the two different scale factors (plot "Hard-N" for the normal distribution and plot "Hard-t" for the t-distribution). Results for the monotone M-estimator are not shown since its bias matches that of least-squares almost exactly. Overall, the results are substantially worse than for fixed-scale M-estimators, especially for larger gross outlier proportions. This is a direct result of σ̂ being a substantial over-estimate of σ: for example, σ̂/σ ≈ 2.4 for all estimates in some of the mixtures studied. (See [22] for analysis of bias in estimating σ̂.) These over-estimates allow a large portion of the residual distribution to fall in the region where ρ is quadratic, causing the estimator to act more like least-squares. Because of this, M-estimators are heavily biased by discontinuities when they must estimate σ̂ from the data.
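The MAD scale estimate used in the three-stage procedure above can be sketched as follows; the function is an illustration of the standard definition, with the 1.4826 normal-consistency factor from the text.

```python
def mad_scale(residuals, factor=1.4826):
    """Scale estimate: factor * median(|r_i|) of the residuals to an initial fit.
    factor = 1.4826 gives consistency at the normal distribution."""
    abs_r = sorted(abs(r) for r in residuals)
    n = len(abs_r)
    med = abs_r[n // 2] if n % 2 else 0.5 * (abs_r[n // 2 - 1] + abs_r[n // 2])
    return factor * med

s1 = mad_scale([-3.0, -1.0, 0.0, 1.0, 3.0])           # median |r| = 1.0
s2 = mad_scale([-3.0, -1.0, 0.0, 1.0, 3.0, 1000.0])   # gross outlier barely matters
```

Unlike the sample standard deviation, the MAD changes only slightly when a gross outlier is added, which is why it serves as the initial robust scale; its over-estimation on discontinuity mixtures, however, is exactly the effect analyzed above.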
6.4 LMS, LTS and MINPRAN
The last estimators examined are LMS, LTS, and MINPRAN, methods which neither require σ̂ a priori nor need to estimate it while finding φ̂(x). Figure 11 shows bias plots for these estimators on step edges, crease edges and parallel lines. Figure 12 shows small bias cut-off heights on step edges for LMS, LTS and MINPRAN, and it demonstrates the effects of changes in the mixture proportions on LMS and LTS.
LMS and LTS work as well as any technique studied as long as the actual fraction of
Figure 10: Bias of M-estimators (Hard-N, Hard-t, Soft) and least-squares on step edges, (a) and (b), crease edges, (c) and (d), and parallel lines, (e) and (f).
Figure 11: Bias of MINPRAN, LMS, LTS and least-squares on step edges, (a) and (b), crease edges, (c) and (d), and parallel lines, (e) and (f).
Figure 12: Small bias cut-off heights. Plot (a) shows these for LMS, LTS, MINPRAN, and the modified MINPRAN optimization criteria (MINPRAN2) as a function of ε_s, the relative fraction of inliers. Plot (b) shows these for LTS as a function of ε_s for different gross outlier percentages ε_g. Plots (c) and (d) show these for LTS and LMS respectively as a function of ε_s(1 − ε_g), the actual fraction of inliers. Heights not plotted for small ε_s or ε_s(1 − ε_g) are above the plotted range. When height is not plotted for large ε_s or ε_s(1 − ε_g), bias is never greater than σ.
inliers (data from β₁(x)) is above 0.5. Since this fraction is ε_s(1 − ε_g), the bias of LMS and LTS, unlike that of M-estimators, depends heavily on both ε_s and ε_g. (In random sampling implementations of LMS and LTS, where p points instantiate a hypothesized fit and the objective function is evaluated on the remaining points, the bias curves in Figure 11 and the steep drop in cut-off heights in Figure 12 will shift to the right, but only marginally since usually n ≫ p.) Figures 12b and c demonstrate this dependence in two ways for LTS. Figure 12b shows small bias cutoffs as a function of ε_s, the relative fraction of inliers, i.e. points on the lower half of the step. The bias cutoffs are lower for lower ε_g simply because fewer gross outliers imply more actual inliers when ε_s remains fixed. Figure 12c shows small bias cutoffs as a function of the actual fraction of inliers. In this context, varying ε_g while ε_s(1 − ε_g) is fixed changes the fraction of gross outliers versus pseudo outliers. As the plot shows, the coherent structure of the pseudo outliers causes more bias than the random structure of gross outliers. This same effect is shown for LMS in Figure 12d. Finally, the magnitude of z₀, which controls the gross outlier distribution, has little effect on the bias results, except in the unrealistic case where it approaches the discontinuity magnitude.
LTS is less biased than LMS, especially when the actual fraction of inliers is only slightly
above 0.5. This can be seen most easily by comparing the low bias cutoff plots in Figure 12c
and d. Like the advantage of hard redescending M-estimators over fixed-band techniques
(Section 6.2), this occurs because LTS is more statistically efficient than LMS [21] - its
objective function depends on the smallest 50% of the residuals rather than just on the
median residual. It is important to note that although LMS's efficiency can be improved by applying a one-step M-estimator starting from the LMS estimate, this will not substantially improve a heavily biased fit, since a local minimum of the M-estimator objective function will be near this fit.
With a minor modification to its optimization criterion, MINPRAN can be made much less sensitive to pseudo outliers, improving dramatically on the poor performance shown in Figures 11 and 12. The idea is to find two disjoint fits (no shared inliers), θ_a and θ_b, with inlier bounds r_a and r_b and inlier counts k_a and k_b, minimizing F(r_a + r_b, k_a + k_b, 2p) [23, 26]. If θ is the single fit minimizing the criterion function, with inlier bound r and inlier count k, then the two fits θ_a and θ_b are chosen instead of the single fit θ when F(r_a + r_b, k_a + k_b, 2p) < F(r, k, p). Thus, the modified optimization criterion tests whether one or two inlier distributions are more likely in the data [27]. Figure 12 shows the step edge small bias cut-off heights for this new objective function, denoted by MINPRAN2. These are substantially lower than those of the other techniques, including LTS. Further, these results, unlike those of MINPRAN, are only marginally affected by the mixture parameters. Unfortunately, the search for θ_a and θ_b is computationally expensive, and so the present implementation of MINPRAN2 uses a simple search heuristic that yields [23, 26] more biased results than the optimum shown here. It is, however, as effective as the fixed-scale, hard redescending M-estimator and, unlike LMS and LTS, it does not fail dramatically when there are too few inliers.
6.5 Discussion and Recommendations
Overall, the results show that all the robust estimators studied estimate biased fits at small but substantial discontinuity magnitudes. This bias, which relative to the bias of least-squares is greater for crease and step edges and less for parallel lines, occurs even if σ or the distribution of gross outliers or both are known a priori. Further, it must be emphasized that this bias is not an artifact of the search process: the functional form of each estimator returns the fit corresponding to the global minimum of the estimator's objective function.

The reason for the bias can be seen by examining the cumulative distribution functions (cdfs) of absolute residuals. Figure 13 plots this cdf, F_a(r | θ, H), when θ is the target fit and when θ is the least-squares fit to H, for H modeling crease and step discontinuities. For small Δz/σ, the cdf of the biased fit is almost always greater than that of the target fit, meaning that in a discrete set of samples the biased fit, which crosses through both point sets, will on average yield smaller magnitude residuals than the target fit, which is close to only the target point set. (The situation is somewhat better when Δz/σ = 9.0.) Therefore, robust estimators, such as the ones studied, whose objective functions are based solely on residuals, are unlikely to estimate unbiased fits at small magnitude discontinuities.
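This residual argument can be checked numerically. The following sketch (an author-added illustration, not code from the paper; it assumes σ = 1, Δz = 3σ, equal mixture fractions, and no gross outliers) fits a least-squares line to a sampled step edge and compares median absolute residuals against the target fit y = 0:

```python
# Why residual-based robust objectives prefer a biased fit at a small step.
# Inliers lie on y = 0 for x <= 0; pseudo outliers lie on y = dz for x > 0.
import random

def demo(dz=3.0, n=20000, seed=7):
    rng = random.Random(seed)
    pts = []
    for _ in range(n):
        x = rng.uniform(-1.0, 1.0)
        y = (0.0 if x <= 0.0 else dz) + rng.gauss(0.0, 1.0)
        pts.append((x, y))
    # Least-squares line y = a + b*x via the normal equations.
    sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    def med_abs_res(a0, b0):
        r = sorted(abs(y - a0 - b0 * x) for x, y in pts)
        return r[n // 2]
    # Compare the biased least-squares fit against the target fit y = 0.
    return med_abs_res(a, b), med_abs_res(0.0, 0.0)

ls_med, target_med = demo()
# The biased fit yields smaller median absolute residuals than the target
# fit, so a median-of-residuals objective (e.g. LMS) prefers the biased fit.
print(ls_med, target_med)
```

With these settings the median absolute residual of the biased fit is well below that of the target fit, matching the cdf ordering described above.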
[Figure 13 panels (a)-(d): cdf of absolute residuals for the biased fit and the target fit; (a) and (b) for a step discontinuity, (c) and (d) for a crease discontinuity.]
Figure 13: Each figure plots the cumulative distribution functions (cdf) of absolute residuals for the target fit and for a biased (least-squares) fit: (a) and (b) are relative to a step discontinuity, and (c) and (d) are relative to a crease discontinuity. For all plots the mixture fractions are fixed; the plots show why robust estimators are substantially biased at both step and crease discontinuities.
While none of the estimators works as well as desired, the following recommendations for choosing among them are based on the results presented above:

• When σ is known a priori, one should use a hard redescending M-estimator objective function such as Hampel's with reduced tuning parameter values and either a random-sampling search technique or a weighted Hough transform. To ensure all inliers are found and to obtain greater statistical efficiency, a one-step M-estimator with larger tuning parameters should be run from the initial optimum fit. This technique is preferable to LTS and LMS because it is less sensitive to the number of gross outliers.

• When σ is not known a priori, but the distribution of gross outliers is known, one should use the modified MINPRAN algorithm, MINPRAN2 [23, 26].

• When neither σ nor the distribution of gross outliers is known, LTS should be used, although its performance degrades quickly when there are too few inliers. LTS is preferable to LMS because of its statistical efficiency.
7 Summary and Conclusions
This paper has developed the pseudo outlier bias metric using techniques from mathematical
statistics to study the fitting accuracy of robust estimators on data taken from multiple
structures - surface discontinuities, in particular. Pseudo outlier bias measures the distance
between a robust estimator's optimum fit to a target distribution and its optimum fit to an
outlier corrupted mixture distribution. Here, the target distribution models the points from
a single surface and the mixture distribution models points from multiple surfaces plus
gross outliers. Each estimator's optimum fit is found by applying its functional form to
one of these model distributions. Thus, like other analysis tools from the robust statistics
literature, pseudo outlier bias depends on point distributions rather than on particular point
sets drawn from these distributions. While this has some limitations - the actual fitting
error for particular point sets may be more or less than the pseudo outlier bias, and it ignores
problems that may arise from multiple local minima in an objective function - it represents
a simple, efficient, and elegant method of analyzing robust estimators.
Pseudo outlier bias was used to analyze the performance of M-estimators, fixed-band
techniques (Hough transforms and RANSAC), least median of squares (LMS), least trimmed
squares (LTS) and MINPRAN in fitting surfaces to three different discontinuity models: step
edges, crease edges and parallel lines. For each of these discontinuities, two surfaces generate
data, with the larger set of surface data forming the inliers and the smaller set forming
the pseudo outliers. By characterizing these discontinuity models using a small number of
parameters, formulating the models as mixture distributions, and studying the bias of the
robust estimators as the parameters varied, it was shown that each robust estimator is biased
for substantial discontinuity magnitudes. This effect, which relative to that of least-squares
is strongest for step edges and crease edges, persists even when the noise in the data or the
gross outlier distribution or both are known in advance. It is disappointing because in vision
data - not just in range data - multiple structures (pseudo outliers) are more prevalent
than gross outliers. In spite of the disappointment, however, specific recommendations,
which depend on what is known about the data, were made for choosing between current
techniques. 6
These negative results indicate that care should be used when robustly estimating surface
parameters in range data, either to obtain local low-order surface approximations or to
initialize fits for surface growing algorithms [3, 5, 6, 15]. (Similar problems may occur
for the "layers" techniques that have been applied to motion analysis [1, 6, 28].) Robust
estimates will be accurate for large scale depth discontinuities and sharp corners, but will
be skewed at small magnitude discontinuities, such as near the boundary of a slightly raised
or depressed area of a surface. Obtaining accurate estimates near these discontinuities will
require new and perhaps more sophisticated robust estimators.
Acknowledgements
The author would like to acknowledge the financial support of the National Science Foundation
under grants IRI-9217195 and IRI-9408700, the assistance of James Miller in various
aspects of this work, and the insight offered by the anonymous reviewers which led to substantial improvements in the presentation.

6 See [14, 17] for new, related techniques.
Appendix A: Evaluating F_s and f_s

This appendix shows how to evaluate the conditional cumulative distribution and conditional density of signed residuals, F_s(r | θ, H) and f_s(r | θ, H), defined in equations (21) and (22). The distribution and density of the absolute residuals are obtained easily from these.
Expanding the expression in equation 21 for F_s(r | θ, H), using the mixture form of h, gives F_s as a sum of component integrals. The gross outlier component,

    ∫_x ∫_{-∞}^{θ(x)+r} g_o(z) dz dx,

is the cumulative distribution of the gross outliers, and each curve component contributes

    ∫_x ∫_{-∞}^{θ(x)+r} g(z - β_i(x); σ²) dz dx.

To simplify evaluating F_s, first change variables and then change the order of integration. Starting with the change of variables, substitute v = z - β_i(x) (intuitively, v is the residual at x from the curve β_i) and define φ(x) = θ(x) - β_i(x). Then, the integral becomes

    ∫_x ∫_{-∞}^{φ(x)+r} g(v; σ²) dv dx.
Since the integrand is now independent of x, rewriting the integral to integrate over strips parallel to the x axis will produce a single integral. Consider a strip bounded by v and v + Δv (Figure 14). The integral over this strip is approximately g(v; σ²) w(v) Δv, where w(v) is the width of the integration region at v. In the limit as Δv → 0, this becomes exact and the integral over the entire region becomes

    ∫_{-∞}^{v_max} g(v; σ²) w(v) dv,

where v_max is the maximum of φ(x) + r.
Figure 14: Calculating F_s(r | θ, H) requires integrating the point density for curve β_i over strips of width Δv parallel to the x axis. The density g(v; σ²) is constant over these strips.
Evaluating w(v) depends on φ(x). This paper studies linear fits and linear curve models, so φ(x) is linear. In this case, let m and b be its slope and intercept (Figure 14). Then, using G to denote the cdf of the gaussian, the integral can be evaluated in closed form in terms of G when m > 0. A similar result is obtained when m < 0, and when m = 0.
To compute the density f s (rj'; H), start from the mixture density in equation 22 and
integrate each component density separately. This is straightforward when the density g o is
uniform and, as above, '(x) and fi i (x) are linear.
References
Layered representation of motion video using robust maximum likelihood estimation of mixture models and MDL encoding.
Robust window operators.
Segmentation through variable-order surface fitting
A Ransac-based approach to model fitting and its application to finding cylinders in range data
The Robust Sequential Estimator: A general approach and its application to surface organization in range data.
Cooperative robust estimation using layers of support.
Random Sample Consensus: A paradigm for model fitting with applications to image analysis and automated cartography.
Linear Systems
The change-of-variance curve and optimal redescending M-estimators
Robust Statistics: The Approach Based on Influence Functions.
Robust regression using iteratively reweighted least- squares
Robust Statistics.
A survey of the Hough transform.
Robust adaptive segmentation of range images.
Segmentation of range images as the search for geometric parametric models.
Robust regression methods for computer vision: A review.
MUSE: Robust surface fitting using unbiased scale esti- mates
Performance evaluation of a class of M-estimators for surface parameter estimation in noisy range data
Numerical Recipes in C: The Art of Scientific Computing.
Extracting geometric primitives.
Least median of squares regression.
Alternatives to the median absolute deviation.
A new robust operator for computer vision: Application to range images.
A new robust operator for computer vision: Theoretical analysis.
Expected performance of robust estimators near discontinuities.
MINPRAN: A new robust estimator for computer vision.
Statistical Analysis of Finite Mixture Distri- butions
Layered representation for motion analysis.
Keywords: parameter estimation; multiple structures; outliers; discontinuities; robust estimation
Affine Structure from Line Correspondences With Uncalibrated Affine Cameras

Abstract - This paper presents a linear algorithm for recovering 3D affine shape and motion from line correspondences with uncalibrated affine cameras. The algorithm requires a minimum of seven line correspondences over three views. The key idea is the introduction of a one-dimensional projective camera. This converts 3D affine reconstruction of "line directions" into 2D projective reconstruction of "points." In addition, a line-based factorization method is also proposed to handle redundant views. Experimental results both on simulated and real image sequences validate the robustness and the accuracy of the algorithm.

I. Introduction
Using line segments instead of points as features has attracted
the attention of many researchers [1], [2], [3], [4],
[5], [6], [7], [8], [9] for various tasks such as pose estima-
tion, stereo and structure from motion. In this paper, we
are interested in structure from motion using line correspondences
across multiple images. Line-based algorithms are generally more difficult than point-based ones for the following two reasons: the parameter space of lines is nonlinear, though lines themselves are linear subspaces; and a line-to-line correspondence contains less information than a point-to-point one, as it provides only one component of the image plane displacement instead of two for a point correspondence. A minimum of three views is essential for
line correspondences, whereas two views suffice for point
ones. In the case of calibrated perspective cameras, the
main results on structure from line correspondences were
established in [4], [10], [5]: With at least six line correspondences
over three views, nonlinear algorithms are possible.
With at least thirteen lines over three views, a linear algorithm
is possible. The basic idea of the thirteen-line linear
algorithm is similar to the "eight-point" one [11] in that
it is based on the introduction of a redundant set of intermediate
parameters. This significant over-parametrization
of the problem leads to the instability of the algorithm reported
in [4]. The thirteen-line algorithm was extended to
uncalibrated camera case in [12], [9]. The situation here
might be expected to be better, as more free parameters
are introduced. However, the 27 tensor components that
are introduced as intermediate parameters are still subject
to 8 complicated algebraic constraints. The algorithm can hardly be stable. A subsequent nonlinear optimization step is almost unavoidable to refine the solution [5], [4].

Long QUAN is with CNRS-GRAVIR-INRIA, ZIRST 655, avenue de l'Europe, 38330 Montbonnot, France. E-mail: Long.Quan@imag.fr
Takeo KANADE is with The Robotics Institute, Carnegie Mellon University, Pittsburgh, PA 15213, U.S.A. E-mail: tk@cs.cmu.edu
In parallel, there has been a lot of work [13], [14], [15], [16],
[17], [18], [19], [20], [21], [22], [14], [16], [23], [17], [24], [25]
on structure from motion with simplified camera models
varying from orthographic projections via weak and paraperspective
to affine cameras, almost exclusively for point
features. These simplified camera models provide a good
approximation to perspective projection when the width
and depth of the object are small compared to the viewing
distance. More importantly, they expose the ambiguities
that arise when perspective effects diminish. In such cases,
it is not only easier to use these simplified models but also
advisable to do so, as by explicitly eliminating the ambiguities
from the algorithm, one avoids computing parameters
that are inherently ill-conditioned. Another important advantage
of working with uncalibrated affine cameras is that
the reconstruction is affine, rather than projective as with
uncalibrated projective cameras.
Motivated on the one hand by the lack of satisfactory line-based
algorithms for projective cameras and on the other
by the fact that the affine camera is a good model for many
practical cases, we investigate the properties of projection
of lines by affine cameras and propose a linear algorithm for
affine structure from line correspondences. The key idea is
the introduction of a one-dimensional projective camera.
This converts the 3D affine reconstruction of "line direc-
tions" into 2D projective reconstruction of "points". The
linear algorithm requires a minimum of seven lines over
three images. We also prove that seven lines over three images
is the strict minimum data needed for affine structure
from uncalibrated affine cameras and that there are always
two possible solutions. This result extends the previous results
of Koenderink and Van Doorn [14] for affine structure
with a minimum of two views and five points. To deal with
redundant views, we also present a line-based factorisation
algorithm which extends the previous point-based factorisation
methods [18], [21], [22]. A preliminary version of
this work was presented in [26].
The paper is organized as follows. In Section II, the affine
camera model is briefly reviewed. Then, we investigate
the properties of projection of lines with the affine camera
and introduce the one-dimensional projective camera
in Section III. Section IV is focused on the study of the uncalibrated
one-dimensional camera, and in this section we
present also a linear algorithm for 2D projective reconstruction
which is equivalent to the 3D affine reconstruction of
IEEE-PAMI, VOL. *, NO. *, 199*
line directions. Later, the linear estimation of the translational
component of the uncalibrated affine camera is given
in Section V and the affine shape recovery is described in
Section VI. To handle redundant views, a line-based factorisation
method is proposed in Section IX. The passage
to metric structure from the affine structure using known
camera parameters will be described in Section XI. Finally
in Section XIII, discussions and some concluding remarks
are given.
Throughout the paper, tensors and matrices are denoted
in upper case boldface, vectors in lower case boldface and
scalars in either plain letters or lower case Greek.
II. Review of the affine camera model
For a projective (pin-hole) camera, the projection of a point x of P^3 to a point w of P^2 can be described by a 3 × 4 homogeneous projection matrix P_{3×4}:

    λ w = P_{3×4} x.    (1)

For a restricted class of camera models, by setting the third row of the perspective camera P_{3×4} to (0, 0, 0, t_3), we obtain the affine camera initially introduced by Mundy and Zisserman [27],

    A_{3×4} = ( M_{2×3}  t_{2×1} )
              (  0 0 0     t_3  ).    (2)

The affine camera A_{3×4} encompasses the uncalibrated versions of the orthographic, weak perspective and paraperspective projection models. These reduced camera models provide a good approximation to the perspective projection model when the depth of the object is small compared to the viewing distance. For more detailed relations and applications, one can refer to [20], [22], [28], [29], [13].

Points of the affine spaces IR^3 and IR^2 are naturally embedded into P^3 and P^2 by the mappings x_a ↦ (x_a^T, 1)^T and w_a ↦ (w_a^T, 1)^T. We have thus w_a = M_{2×3} x_a + t_0, where the 2-vector t_0 collects the translation components of A_{3×4}. If we further use relative coordinates of the points with respect to a given reference point (for instance, the centroid of the set of points), the vector t_0 is cancelled and we obtain the following linear mapping between space points and image points:

    Δw_a = M_{2×3} Δx_a.    (3)

This is the basic equation of the affine camera for points.
III. The affine camera for lines
Now consider a line in IR^3 through a point x_0, with direction d_x. The affine camera A_{3×4} projects this to an image line passing through the image point w_0 = M_{2×3} x_0 + t_0, with direction

    d_w = M_{2×3} d_x.    (4)

This equation describes a linear mapping between direction vectors of 3D lines and those of 2D lines, and reflects a key property of the affine camera: lines parallel in 3D remain parallel in the image. It can be derived even more directly using projective geometry by considering that the line direction d_x is the point at infinity (d_x^T, 0)^T of the projective line in P^3 and the line direction d_w is the point at infinity (d_w^T, 0)^T of the projective line in P^2. Equation (4) immediately follows as the affine camera preserves the points at infinity by its very definition.
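A minimal numerical check of this parallelism property (an illustration added here, not code from the paper; the camera matrix M, the translation t and the two parallel 3D lines are arbitrary made-up values):

```python
# Check that an affine camera maps parallel 3D lines to parallel image
# lines, since image directions obey Eq. (4): d_w = M d_x.
M = [[1.0, 0.2, -0.5],
     [0.3, 1.1,  0.4]]
t = [2.0, -1.0]

def project(p):
    # Affine point projection: w = M p + t.
    return [sum(M[i][j] * p[j] for j in range(3)) + t[i] for i in range(2)]

d = [1.0, 2.0, 3.0]                           # common 3D line direction
p0, q0 = [0.0, 0.0, 0.0], [4.0, -2.0, 1.0]    # base points of two parallel lines
p1 = [p0[k] + d[k] for k in range(3)]
q1 = [q0[k] + 2.0 * d[k] for k in range(3)]

dw1 = [project(p1)[i] - project(p0)[i] for i in range(2)]
dw2 = [project(q1)[i] - project(q0)[i] for i in range(2)]
cross = dw1[0] * dw2[1] - dw1[1] * dw2[0]     # zero iff image lines parallel
print(dw1, dw2, cross)
```

The translation t cancels in the image direction, so both image directions equal M d up to scale and their 2D cross product vanishes.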
Comparing Equation (4) with Equation (1)-a projection
from P 3 to P 2 , we see that Equation (4) is nothing but a
projective projection from P 2 to P 1 if we consider the 3D
and 2D "line directions" as 2D and 1D projective "points".
This key observation allows us to establish the following.
The affine reconstruction of line directions with a two-dimensional
affine camera is equivalent to the projective
reconstruction of points with a one-dimensional projective
camera.
One of the major remaining efforts will be concerned with
projective reconstruction from the points in P 1 . There
have been many recent works [30], [31], [32], [33], [34], [35],
[36], [37], [38], [39], [40], [41], [10], [42], [43] on projective
reconstruction and the geometry of multi-views of two dimensional
uncalibrated projective cameras. Particularly,
the tensorial formalism developed by Triggs [36] is very interesting
and powerful. We now extend this study to the
case of the one-dimensional camera. It turns out that there
are some nice properties which were absent in the 2D case.
IV. Uncalibrated one-dimensional camera
A. Trilinear tensor of the three views
First, rewrite Equation (4) in the following form:

    λ u = M_{2×3} x,    (5)

in which we use the 2-vector u and the 3-vector x instead of d_w and d_x to stress that we are dealing with "points" in the projective spaces P^2 and P^1 rather than "line directions" in the vector spaces IR^3 and IR^2.

QUAN: AFFINE STRUCTURE FROM LINE CORRESPONDENCES 3
We now examine the matching constraints between multiple views of the same point. Since two viewing lines in the projective plane always intersect in a point, no constraint is possible for less than three views. There is one constraint only for the case of 3 views. Let the three views of the same point x be given as follows:

    λ u = M x,   λ' u' = M' x,   λ'' u'' = M'' x.

These can be rewritten in matrix form as

    ( M    u   0   0   ) (  x   )
    ( M'   0   u'  0   ) ( -λ   )  =  0,
    ( M''  0   0   u'' ) ( -λ'  )
                         ( -λ'' )

which is the basic reconstruction equation for a one-dimensional camera. The vector (x^T, -λ, -λ', -λ'')^T cannot be zero, so

    det ( M    u   0   0   )
        ( M'   0   u'  0   )  =  0.
        ( M''  0   0   u'' )
The expansion of this determinant produces a trilinear constraint of three views,

    Σ_{i,j,k=1,2} T_ijk u_i u'_j u''_k = 0,

or, in short, T(u, u', u'') = 0, where T = (T_ijk) is a 2 × 2 × 2 homogeneous tensor whose components T_ijk are 3 × 3 minors of the following 6 × 3 joint projection matrix:

    ( M   )
    ( M'  )
    ( M'' ).
The components of the tensor can be made explicit as

    T_ijk = [\bar{i} \bar{j}' \bar{k}''],   i, j, k = 1, 2,    (11)

where the bracket [i j' k''] denotes the 3 × 3 minor built from the i-th, (j')-th and (k'')-th row vectors of the above joint projection matrix, and the bar in \bar{i}, \bar{j} and \bar{k} denotes the dualization

    \bar{i} = 3 - i.    (12)
It can easily be seen that any constraint obtained by
adding further views reduces to a trilinearity. This proves
the uniqueness of the trilinear constraint. Moreover, the homogeneous tensor T_{2×2×2} has 2 × 2 × 2 - 1 = 7 d.o.f., so it is a minimal parametrization of three views, since three views have exactly 3 × (2 × 3 - 1) - (3 × 3 - 1) = 7 d.o.f. up to a projective transformation in P^2.
Each point correspondence over three views gives one linear
constraint on the tensor components T ijk . We can establish
the following.
The tensor components T ijk can be estimated linearly with
at least 7 points in P 1 .
At this point, we have obtained a remarkable result that for
a one-dimensional projective camera, the trilinear tensor
encapsulates exactly the information needed for projective
reconstruction in P 2 . Namely, it is the unique matching
constraint, it minimally parametrizes the three views and
it can be estimated linearly. Contrast this to the 2D projective
camera case in which the multilinear constraints are
algebraically redundant and the linear estimation is only an
approximation based on over-parametrization.
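The linear estimation can be sketched as follows (an illustration with synthetic data, not the paper's implementation; here the tensor scale is fixed by setting T_222 = 1, which assumes the true T_222 is nonzero, and the resulting 7 × 7 system is solved by Gaussian elimination rather than SVD):

```python
# Linear estimation of the 2x2x2 trilinear tensor of three 1D projective
# cameras from seven point correspondences.  Each correspondence (u, u', u'')
# gives one linear equation  sum_{ijk} T_ijk u_i u'_j u''_k = 0.
import random

rng = random.Random(3)

def rand_cam():
    return [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(2)]

def proj(M, x):
    # 1D projective camera: u ~ M_{2x3} x.
    return [sum(M[i][j] * x[j] for j in range(3)) for i in range(2)]

cams = [rand_cam() for _ in range(3)]
pts = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(8)]  # 7 + 1 held out
rows = []
for x in pts:
    u, v, w = (proj(M, x) for M in cams)
    rows.append([u[i] * v[j] * w[k]
                 for i in range(2) for j in range(2) for k in range(2)])

# Solve the 7x8 homogeneous system from the first 7 points with T_222 = 1.
A = [r[:7] for r in rows[:7]]
b = [-r[7] for r in rows[:7]]
n = 7
for c in range(n):                       # elimination with partial pivoting
    p = max(range(c, n), key=lambda r_: abs(A[r_][c]))
    A[c], A[p] = A[p], A[c]
    b[c], b[p] = b[p], b[c]
    for r_ in range(c + 1, n):
        f = A[r_][c] / A[c][c]
        for k in range(c, n):
            A[r_][k] -= f * A[c][k]
        b[r_] -= f * b[c]
T = [0.0] * n
for c in reversed(range(n)):             # back substitution
    T[c] = (b[c] - sum(A[c][k] * T[k] for k in range(c + 1, n))) / A[c][c]
T.append(1.0)

# The recovered tensor annihilates the held-out eighth correspondence.
residual = sum(rows[7][l] * T[l] for l in range(8))
print(abs(residual))
```

With generic data the seven constraints determine the tensor up to scale, so the trilinear form evaluated on an unused correspondence is zero up to roundoff.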
B. Retrieving normal forms for projection matrices
The geometry of the three views is most conveniently, and
completely represented by the projection matrices associated
with each view. In the previous section, the trilinear
tensor was expressed in terms of the projection matrices.
Now we seek a map from the trilinear tensor representation
back to the projection matrix representation of the three
views.
Without loss of generality, we can always take the following normal forms for the three projection matrices:

    M = ( I_{2×2}  0 ),   M' = ( A  c ),   M'' = ( D  e ),    (13)

where A and D are 2 × 2 matrices and c and e are 2-vectors. Actually, the set of projection matrices {M, M', M''} parametrized this way has more d.o.f. than the minimum of 7. Further constraints can be imposed. We can observe that any projective transformation in P^2 of the form

    ( I_{2×2}  0 )
    (  v^T     1 )

for an arbitrary 2-vector v leaves M invariant and transforms M' = ( A  c ) into ( Ã  c ) with Ã = A + c v^T. As c cannot be a zero vector, it can be normalized such that c^T c = 1. If we further choose the arbitrary vector v to be -A^T c, then Ã = (I - c c^T) A. It can now be easily verified that c^T Ã = 0, so Ã has rank at most one. This amounts to saying that Ã in the transformed M' can be taken to be a rank 1 matrix up to a projective transformation, i.e.

    Ã = ( a_1  ρ a_1 )
        ( a_2  ρ a_2 )

for a non-zero scalar ρ. The 2-vector c is then (-a_2, a_1)^T. Hence M' can be represented as

    M' = ( a_1  ρ a_1  -a_2 )
         ( a_2  ρ a_2   a_1 )

by two parameters, the ratio a_1 : a_2 and ρ. Therefore, a minimal 7 parameter representation for the set of projection matrices has been obtained.

With the projection matrices given by (13), the trilinear tensor (T_ijk) defined by (11) becomes a set of eight expressions (15) in a_1, a_2, ρ and the entries of M'', where the bar represents the dualization (12). If we consider the tensor (T_ijk) as an 8-vector (t_1, ..., t_8)^T, the eight homogeneous equations of (15) can be rearranged into 7 non-homogeneous ones by taking the ratios t_l : t_8 for l = 1, ..., 7. By separating the entries of M' from those of M'', these can be written as

    G_{7×6} ( d )  =  0,    (16)
            ( e )

where d is the 4-vector of entries of D, e is the 2-vector above, and the matrix G_{7×6} is given in terms of a_1, a_2, ρ and the measured tensor components. Since the parameter vector (d, e) cannot be zero, the 7 × 6 matrix in Equation (16) has at most rank 5. Thus all of its 6 × 6 minors must vanish. There are seven such minors, of which only a few are algebraically independent, and each of them gives a quadratic polynomial in a_1, a_2 and ρ of the form

    ρ² p_2(a_1, a_2) + ρ p_1(a_1, a_2) + p_0(a_1, a_2) = 0.

By eliminating ρ, we obtain a homogeneous quadratic equation in a_1 and a_2,

    λ_2 a_1² + λ_1 a_1 a_2 + λ_0 a_2² = 0,

where the coefficients λ_i are polynomials in the tensor components. This quadratic equation may be easily solved for a_1/a_2. Then ρ is given by a linear equation for each of the two solutions of a_1/a_2. Thus, we obtain two possible solutions for the projection matrix M'. Finally, the 6-vector (d, e) for the projection matrix M'' is linearly solved from Equation (16) (for instance, using SVD) in terms of M'.
C. 2D projective reconstruction-3D affine line direction
reconstruction
With the complete determination of the projection matrices of the three views, the projective reconstruction of "points" in P^2, which is equivalent to the affine reconstruction of "line directions" in IR^3, can be performed. From the projection equation of the one-dimensional camera, each point u = (u_1, u_2)^T of a view gives one homogeneous linear equation in the unknown point x in P^2,

    u_2 (m_1^T x) - u_1 (m_2^T x) = 0,

where m_1^T and m_2^T are the first and second row vectors of the matrix M. With one point correspondence in three views we have the following 3 × 3 homogeneous linear equation system,

    ( u_2 m_1^T - u_1 m_2^T         )
    ( u'_2 m'_1^T - u'_1 m'_2^T     ) x = 0,
    ( u''_2 m''_1^T - u''_1 m''_2^T )

whose entries are all known constants. This equation system can be easily solved for x, either considered as a point in P^2 or as an affine line direction in IR^3.
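A minimal sketch of this triangulation step (an added illustration with hypothetical cameras, not the paper's code; with noise-free data the rank-2 system can be solved by a cross product of two rows instead of a full SVD):

```python
# Recover the point x in P^2 (equivalently the affine 3D line direction)
# from three 1D views.  Each view contributes the row u_2 m_1^T - u_1 m_2^T;
# noise-free, the 3x3 system has rank 2 and x spans its null space.
def row(M, u):
    return [u[1] * M[0][j] - u[0] * M[1][j] for j in range(3)]

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

# Hypothetical cameras and a ground-truth homogeneous point x_true.
Ms = [[[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]],
      [[0.9, 0.1, 0.3], [-0.2, 1.0, 0.5]],
      [[0.5, -0.4, 1.0], [0.7, 0.6, -0.1]]]
x_true = [0.4, -1.2, 2.0]
us = [[sum(M[i][j] * x_true[j] for j in range(3)) for i in range(2)]
      for M in Ms]

r = [row(M, u) for M, u in zip(Ms, us)]
x_est = cross(r[0], r[1])          # null vector of the first two rows
res3 = sum(r[2][j] * x_est[j] for j in range(3))  # third view agrees too
print(x_est, res3)
```

The estimate x_est is proportional to x_true, and the residual of the third row is zero up to roundoff, confirming the rank-2 structure.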
V. Uncalibrated translations
To recover the full affine structure of the lines, we still
need to find the vectors t 3\Theta1 of the affine cameras defined
in (2). These represent the image translations and magnification
components of the camera. Recall that line correspondences
from two views-now a 2D view instead of 1D
view-do not impose any constraints on camera motion:
The minimum number of views required is three. If the
interpretation plane of an image line for a given view is defined
as the plane going through the line and the projection
center, the well-known geometric interpretation of the constraint
available for each line correspondence across three
views (cf. [3], [5]) is that the interpretation planes from
different views must intersect in a common line in space.
If the equation of a line in the image is given by

    l^T w = 0,   l = (l_1, l_2, l_3)^T,

then substituting λ w = A_{3×4} x produces the equation of the interpretation plane of l in space:

    l^T A_{3×4} x = 0.

The plane is therefore given by the 4-vector p = A_{3×4}^T l, which can also be expressed as p = (n_x^T, l^T t)^T, where n_x = M_{2×3}^T (l_1, l_2)^T is the normal vector of the plane. An image line with normal direction n_w can thus be written with its interpretation plane being (n_x^T, l^T t)^T, where n_x = M_{2×3}^T n_w.

The 2 × 3 submatrices M_{2×3} representing uncalibrated camera orientations have already been obtained from the two-dimensional projective reconstruction. Now we proceed to recover the uncalibrated translations. For each interpretation plane (n_x^T, p)^T of each image line, its direction component n_x is completely determined by the previously computed {M, M', M''}, as above. Only its fourth component p = l^T t remains undetermined, and it depends linearly on t. Notice that as the direction vector can still be arbitrarily and individually rescaled, the interpretation plane is only defined up to scale. Hence only the ratio of n_x to l^T t is significant, and this justifies the homogenization of the vector t.
So far we have made explicit the equation of the interpretation
plane of a line in terms of the image line and the projection
matrix. The geometric constraint of a line correspondence across
three views on the camera motion is that the 3 × 4 matrix whose
rows are the three interpretation planes has rank at most two.
Hence all of its 3 × 3 minors vanish. Only two of the total of
four minors are algebraically independent, as they are connected
by the quadratic identities [44].
The vanishing of any two such minors provides the two
constraints on camera motion for a given line correspondence
of three views. The minor formed by the first three
columns contains only known quantities. It provides the
constraint on the directions. It is easy to show that it is
equivalent to the tensor by using suitable one-dimensional
projective transformations.
By taking any two of the first three columns, say the first
two, and the last column, we obtain the following vanishing
determinant:

    | .  .  l^T t    |
    | .  .  l'^T t'  |  =  0,
    | .  .  l''^T t'' |

where the "." designates a constant entry.
Expanding this minor by cofactors in the last column gives
a homogeneous linear equation in t, t' and t'':

    ( ×  ×  × ) (t^T, t'^T, t''^T)^T = 0,

where the "×" designates a constant 3-vector in a row.
Collecting all these vanishing minors together, we obtain

    ( ×  ×  × )
    ( .  .  . )   (t^T, t'^T, t''^T)^T = 0
    ( ×  ×  × )_{n×9}

for n line correspondences in three views.
At this stage, since the origin of the coordinate frame in
space is not yet fixed, we may take t = 0 up to a scaling
factor, so the final homogeneous linear equations to solve
for t' and t'' are

    ( ×  × )
    ( .  . )   (t'^T, t''^T, 1)^T = 0.
    ( ×  × )_{n×7}
This system of homogeneous linear equations can be nicely
solved by Svd factorisation. The least squares solution for
(t'^T, t''^T, 1)^T, subject to the unit norm constraint, is the
right singular vector corresponding to the smallest singular value.
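This smallest-singular-vector computation is a one-liner with a standard Svd routine. The sketch below (NumPy; the helper name is ours) is an assumed implementation of the step, not code from the paper:

```python
import numpy as np

def homogeneous_lstsq(W):
    """Return the unit-norm x minimizing ||W x|| subject to ||x|| = 1.

    The minimizer is the right singular vector associated with the
    smallest singular value of W (rows of Vt are ordered by
    decreasing singular value).
    """
    _, _, Vt = np.linalg.svd(W)
    return Vt[-1]
```

Any nonzero multiple of the returned vector is an equally valid solution of the homogeneous system, which is why only the ratio of the components matters.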
VI. Affine shape
The projection matrices of the three views are now completely
determined up to a common scaling factor. From
now on, it is a relatively easy task to compute the affine
shape. Two methods to obtain the shape will be described,
one based on the projective representation of lines and another
on the minimal representation of lines, inspired by
[5].
A. Method 1: projective representation
A projective line in space can be defined either by a pencil
of planes (a pencil of planes is defined by two projective
planes) or by any two of its points.
The matrix stacking the three interpretation planes,

    W_P = ( n^T    l^T t    )
          ( n'^T   l'^T t'  )
          ( n''^T  l''^T t'' ),

should have rank 2, so its kernel must also have dimension
2. The range of W_P defines the pencil of planes and the
null space defines the projective line in space.
Once again, using Svd to factorize W_P gives us everything
we want. Let

    W_P = U Σ V^T

be the Svd of W_P with ordered singular values. Two points
of the line may be taken to be the right singular vectors
v_3 and v_4, so the line is given by their span.
One advantage of this method is that, using subset selection
[45], near singular views can be detected and discarded.
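A minimal sketch of this step (NumPy; the function name is ours): the two right singular vectors attached to the two smallest singular values of the stacked planes give two points spanning the reconstructed 3D line.

```python
import numpy as np

def line_from_planes(WP):
    """Given an n x 4 stack of interpretation planes of (approximate)
    rank 2, return two homogeneous 4-vectors spanning the null space,
    i.e. two points defining the 3D line."""
    _, _, Vt = np.linalg.svd(WP)
    return Vt[2], Vt[3]   # v3 and v4 for ordered singular values
```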
B. Method 2: Minimal representation
As a space line has 4 d.o.f., it can be minimally represented
by four parameters. One such possibility is suggested by [5],
which uses a 4-vector (a, b, x_0, y_0)^T such that the line
is defined as the intersection of the two planes (1, 0, -a, -x_0)^T
and (0, 1, -b, -y_0)^T, with equations

    x = a z + x_0,    y = b z + y_0.

Geometrically this minimal representation gives a 3D line
with direction (a, b, 1)^T, passing through the point
(x_0, y_0, 0)^T. This representation therefore excludes the
lines of direction (a, b, 0)^T parallel to the xy plane. Two
other representations are needed, each excluding either the
directions (0, b, c)^T or (a, 0, c)^T. These three representations
together completely describe any line in space.
In our case, we have no problem in automatically selecting
one of the three representations, as the directions of lines
have been obtained in the first step of factorisation, allowing
us to switch to one of the three representations. There
remain only two unknown parameters x 0 and y 0 for each
line.
To get a solution for x_0 and y_0, observe that the two planes
defining the line belong to the pencil of planes defined by W_P;
we can therefore stack these two planes on top of W_P.
Since the resulting matrix still has rank 2, all its 3 × 3 minors vanish.
Each minor involving x_0 and y_0 gives a linear equation in x_0
and y_0. With n views, a linear equation system

    A_{n×2} (x_0, y_0)^T = b

is obtained.
This can be nicely solved using least squares for each line.
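The same recovery can be sketched numerically in an equivalent form (our own hypothetical helper, assuming the direction (a, b, 1)^T is already known from the first step): instead of expanding the 3 × 3 minors, we require the two representation planes to contain the null-space points of W_P, which yields the same kind of small least squares problem in x_0 and y_0.

```python
import numpy as np

def solve_x0_y0(WP, a, b):
    """Recover (x0, y0) of the minimal line representation from the
    stacked interpretation planes WP (n x 4, rank ~2), given the line
    direction (a, b, 1).  Every plane of the pencil contains every
    point of the line, so the two representation planes must
    annihilate the null-space points of WP."""
    _, _, Vt = np.linalg.svd(WP)
    pts = Vt[2:4]                        # two points spanning the line
    # (1, 0, -a, -x0) . p = 0  =>  x0 * p3 = p0 - a * p2  (same for y0)
    A = pts[:, 3:4]
    x0 = np.linalg.lstsq(A, pts[:, 0] - a * pts[:, 2], rcond=None)[0][0]
    y0 = np.linalg.lstsq(A, pts[:, 1] - b * pts[:, 2], rcond=None)[0][0]
    return x0, y0
```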
VII. Affine-structure-from-lines theorem
Summarizing the results obtained above, we have established
the following.
For the recovery of affine shape and affine motion from
line correspondences with an uncalibrated affine camera,
the minimum number of views needed is three and the minimum
number of lines required is seven for a linear solution.
There are always two solutions for the recovered affine
structure.
This result can be compared with that of Koenderink and
Van Doorn [14] for affine structure with a minimum of two
views and five points.
We should also note the difference with the well-known results
established for both calibrated and uncalibrated projective
cameras [3], [4], [5], [39]: A minimum of 13 lines in
three views is required to have a linear solution. It is important
to note that with the affine camera and the method
presented in this paper, the number of line correspondences
for achieving a linear solution is reduced from 13 to 7, which
is of great practical importance.
VIII. Outline of the 7-line \Theta 3-view algorithm
The linear algorithm to recover 3D affine shape/motion
from at least 7 line correspondences over three views with
uncalibrated affine cameras may be outlined as follows:
1. If an image line segment is represented by its end-points,
compute the direction vector of the line from them and treat
this as the homogeneous coordinates of a point in P^1.
2. Compute the tensor components (T ijk ) defined by
Equation linearly with at least 7 lines in 3 views.
3. Retrieve the projection matrices {M, M', M''} of the
one-dimensional camera from the estimated tensor using
Equations (17), (18) and (16). There are always
two solutions.
4. Perform 2D projective reconstruction using equation
which recovers the directions of the affine lines
in space and the uncalibrated rotations of the camera
motion.
5. Solve for the uncalibrated translation vectors t' and t''
using Equation (20) by linear least squares.
6. Compute the final affine lines in space using Equation
(21) or (22).
IX. Line-based factorisation method from an
image stream
The linear affine reconstruction algorithm described above
deals with redundant lines, but is limited to three views.
In this section we discuss redundant views, extending the
algorithm from the minimum of three to any number N > 3
of views.
In the past few years, a family of algorithms for structure
from motion using highly redundant image sequences
called factorisation methods have been extensively studied
[18], [19], [20], [21], [22] for point correspondences for affine
cameras. Algorithms of this family directly decompose the
feature points of the image stream into object shape and
camera motion. More recently, a factorisation based algorithm
has been proposed by Triggs and Sturm [36], [37] for
3D projective reconstruction. We will accommodate our line-based
algorithm to this projective factorisation scheme to
handle redundant views.
A. 2D projective reconstruction by rescaling
According to [36], [37], 3D projective reconstruction is
equivalent to the rescaling of the 2D image points. We
have already proven that recovering the directions of affine
lines in space is equivalent to 2D projective reconstruction
from one-dimensional projective images. Therefore, a reconstruction
of the line directions in 3D can be obtained
by rescaling the direction vectors, viewed as points of P^1.
For each 1D image point in three views (cf. Equation (6)),
the scale factors λ, λ' and λ'', taken individually, are arbitrary.
However, taken as a whole, (λ, λ', λ'') encodes
the projective structure of the points x in P^2.
One way to recover the scale factors (λ, λ', λ'') is to use
the basic reconstruction equation (7) directly, or alternatively
to observe the following matrix identity:

    ( M    λ u    )
    ( M'   λ' u'  )
    ( M''  λ'' u'' ).

The rank of this matrix is therefore at most 3. All 4 × 4
minors vanish, and three of them are algebraically independent.
Each of them can be expanded by cofactors in the last column
to obtain a linear homogeneous equation in λ, λ' and λ''.
Therefore (λ, λ', λ'') can be solved linearly using

    ( .  .  . )
    ( .  .  . )  (λ, λ', λ'')^T = 0,    (23)
    ( .  .  . )

where "." designates a known constant entry in the matrix.
For each triplet of views, the image points can be consistently
rescaled according to Equation (23). For the case
of n > 3 views, we can take appropriate triplets among
n views such that each view is contained in at least two
triplets. Then, the rescaling equations of all triplets of
views for any given point can be chained together over the n
views to give a consistent set of scale factors (λ^(1), ..., λ^(n)).
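One simple way to perform this chaining, sketched below for the special case of consecutive triplets sharing two views (a hypothetical scheme; the text only requires each view to lie in at least two triplets), is to propagate a per-triplet correction factor through the shared views:

```python
def chain_scales(triplet_scales):
    """Chain per-triplet scale factors into one consistent sequence.

    triplet_scales[t] = (lam_t, lam_{t+1}, lam_{t+2}) for the views
    (t, t+1, t+2); each triplet is known only up to a common factor,
    and the views shared by consecutive triplets fix those factors.
    """
    T = len(triplet_scales)
    factors = [1.0]
    for t in range(1, T):
        # view t+1 appears at index 1 of triplet t-1 and index 0 of triplet t
        factors.append(factors[-1] * triplet_scales[t - 1][1] / triplet_scales[t][0])
    lam = [None] * (T + 2)
    for t in range(T):
        for k in range(3):
            if lam[t + k] is None:
                lam[t + k] = factors[t] * triplet_scales[t][k]
    return lam
```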
B. Direction factorisation-step 1
Suppose we are given m line correspondences in n views.
The view number is indexed by a superscript and the line
number by a subscript. We can now create the 2n × m
measurement matrix W_D of all lines in all views by stacking
all the direction vectors d_i^(j) of the m lines in the n views,
properly rescaled by λ_i^(j), as follows:

    W_D = ( λ_1^(1) d_1^(1)  ...  λ_m^(1) d_m^(1) )
          (        .                    .         )
          ( λ_1^(n) d_1^(n)  ...  λ_m^(n) d_m^(n) ).
Since the measurement matrix factors into the stacked camera
matrices times the matrix of line directions, the rank of W_D
is at most three. The factorisation method can then be applied to W_D.
Let

    W_D = U Σ V^T

be the Svd factorisation (cf. [45], [46]) of W_D. The 3 × 3
diagonal matrix Σ_D3 is obtained by keeping the first three
singular values (assuming that singular values are ordered)
of Σ, and U_D3 (V_D3) consists of the first 3 columns (rows) of U
(V^T). Then, the product U_D3 Σ_D3 V_D3^T gives the best rank 3
approximation to W_D.
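With a standard Svd routine, this truncation reads as follows (NumPy sketch; the function name is ours):

```python
import numpy as np

def rank3_approx(W):
    """Best rank-3 approximation of a measurement matrix: keep the
    three largest singular values and their singular vectors."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U[:, :3] @ np.diag(s[:3]) @ Vt[:3, :]
```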
One possible solution for M̂_D and D̂ may be taken to be

    M̂_D = U_D3 Σ_D3^{1/2},    D̂ = Σ_D3^{1/2} V_D3^T.

For any nonsingular 3 × 3 matrix A_{3×3}, either considered
as a projective transformation in P^2 or as an affine transformation
in space, M̂_D A_{3×3} and A_{3×3}^{-1} D̂ are also
valid solutions, as we have

    M̂_D D̂ = (M̂_D A_{3×3}) (A_{3×3}^{-1} D̂).

This means that the recovered direction matrix D̂ and the
rotation matrix M̂_D are only defined up to an affine transformation.
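This ambiguity is easy to check numerically on toy data (the matrices below are random placeholders, not quantities from the paper):

```python
import numpy as np

# Inserting any nonsingular A between the two factors leaves their
# product, i.e. the measurement matrix, unchanged.
rng = np.random.default_rng(0)
M_hat = rng.standard_normal((8, 3))    # hypothetical stacked camera rows
D_hat = rng.standard_normal((3, 5))    # hypothetical line directions
A = np.array([[2.0, 0.0, 1.0],
              [0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0]])        # nonsingular (det = 1)
W1 = M_hat @ D_hat
W2 = (M_hat @ A) @ (np.linalg.inv(A) @ D_hat)
```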
C. Translation factorisation-Step 2
We can stack all of the interpretation planes from different
views of a given line to form the following n × 4 measurement
matrix of planes:

    W_P = ( n^(1)T   l^(1)T t^(1) )
          (    .           .      )
          ( n^(n)T   l^(n)T t^(n) ).
This matrix W_P geometrically represents a pencil of
planes, so it still has rank at most 2. For any three rows
i, j, k of W_P, taking any minor involving t^(i), t^(j) and t^(k),
we obtain

    | .  .  l^(i)T t^(i) |
    | .  .  l^(j)T t^(j) |  =  0.
    | .  .  l^(k)T t^(k) |

Expanding this minor by cofactors in the last column gives
a homogeneous linear equation in t^(i), t^(j) and t^(k):

    ( ×  ×  × ) (t^(i)T, t^(j)T, t^(k)T)^T = 0,

where each "×" designates a constant 3-vector in a row.
Collecting all these minors together, we obtain

    ( ×  ×  ×  0  ...  0 ) (t^(1)T, ..., t^(n)T)^T = 0.

We may take t^(1) = 0 up to a scaling factor, so the final
homogeneous linear equations to solve for the remaining
translations are

    ( ×  ×  0  ...  0 ) (t^(2)T, ..., t^(n)T, 1)^T = 0.

Once again, this system of equations can be nicely solved
by Svd factorisation of W_T. The least squares solution,
subject to the unit norm constraint, is
the singular vector corresponding to the smallest singular
value of W_T.
Note that the efficiency of the computation can be further
improved if the block diagonal structure of W_T is
exploited.
D. Shape factorisation-Step 3
The shape reconstruction method developed for three views
extends directly to more than 3 views. Given n views, for
each line across the n views, we just augment the matrix W_P
from a 3 × 4 to an n × 4 matrix, then apply exactly the same
method.
X. Outline of the line-based factorisation
algorithm
The line-based factorisation algorithm can be outlined as
follows:
1. For triplets of views, compute the tensor (T ijk ) associated
with each triplet, then rescale the directions of
lines of the triplet using Equation (23).
2. Chain together all the rescaling factors (λ^(1), ..., λ^(n))
for each line across the sequence.
3. Factorise the rescaled measurement matrix of directions
to get the uncalibrated rotations and the directions of
the affine lines.
4. Factorise the measurement matrix built from the constraints
on the motion to get the uncalibrated translation vectors.
5. Factorise the measurement matrix of the interpretation
planes for each line correspondence over all views
to get two points of the line.
XI. Euclidean structure from the calibrated
affine camera
So far we have worked with an uncalibrated affine camera;
the recovered shape and motion are defined up to an affine
transformation in space. If the cameras are calibrated, then
the affine structure can be converted into a Euclidean one
up to an unknown global scale factor.
Following the decomposition of the submatrix M_{2×3} of
the affine camera A_{3×4} as introduced in [22],
the metric information from the calibrated affine camera
is completely contained in the affine intrinsic parameters
KK^T. Each view, with its associated uncalibrated rotation M̂,
is subject to

    M̂ X X^T M̂^T = K K^T

for the unknown affine transformation X which upgrades
the affine structure to a Euclidean one. A linear solution
may be expected as soon as we have three views if we
solve for the entries of XX^T. However, it may happen that
the linear estimate of XX^T is not positive-definite due to
noise. An alternative non-linear solution using Cholesky
parametrization that ensures the positive-definiteness can
be found in [22].
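A sketch of such a metric upgrade (hypothetical; it assumes each calibrated view contributes the constraint M̂ Q M̂^T = K K^T on the symmetric matrix Q = X X^T, solved linearly and followed by a Cholesky factor; the exact parametrization in [22] may differ):

```python
import numpy as np

def metric_upgrade(M_hats, KKts):
    """Solve linearly for the symmetric Q = X X^T from the per-view
    constraints M_i Q M_i^T = K_i K_i^T, then extract X by Cholesky.

    M_hats : list of 2x3 uncalibrated camera submatrices
    KKts   : list of 2x2 intrinsic products K K^T
    (Sketch only; the global scale is assumed fixed by the data.)
    """
    # Q has six unknowns: q11, q12, q13, q22, q23, q33
    idx = [(0, 0), (0, 1), (0, 2), (1, 1), (1, 2), (2, 2)]
    rows, rhs = [], []
    for M, KKt in zip(M_hats, KKts):
        for a in range(2):
            for b in range(a, 2):
                row = []
                for (i, j) in idx:
                    c = M[a, i] * M[b, j]
                    if i != j:
                        c += M[a, j] * M[b, i]   # symmetric off-diagonal term
                    row.append(c)
                rows.append(row)
                rhs.append(KKt[a, b])
    q = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)[0]
    Q = np.array([[q[0], q[1], q[2]],
                  [q[1], q[3], q[4]],
                  [q[2], q[4], q[5]]])
    return np.linalg.cholesky(Q)   # X such that X X^T = Q
```

With noisy data, the linear estimate of Q may fail to be positive-definite, in which case the Cholesky step fails; this is exactly the situation the non-linear Cholesky parametrization mentioned above is designed to avoid.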
Once we obtain the appropriate X, the matrices M̂X and X^{-1}D̂
carry the rotations of the camera and the directions of lines.
The remaining steps are the same as the uncalibrated affine
camera case.
If we take weak perspective as a particular affine camera
model, with only the aspect ratio of the camera known,
Euclidean structure is obtained this way.
XII. Experimental results
A. Simulation setup
We first use simulated images to validate the theoretical development
of the algorithm. To preserve realism, the simulation
is set up as follows. First, a real camera is calibrated
by placing a known object of about 50 cm^3 in front of the
camera. The camera is moved around the object through
different positions. A calibration procedure gives the projection
matrices at different positions, and these projection
matrices are rounded to affine projection matrices. Three
different positions which cover roughly 45° of the field of
view are selected. A set of 3D line segments within a cube
is generated synthetically and projected onto the
different image planes by the affine projection matrices. All
simulated images are of size 512 × 512. Both 3D and 2D
line segments are represented by their endpoints.
The noise-free line segments are then perturbed as follows.
To take advantage of the relatively higher accuracy of line
position obtained by the line fitting process in practice,
each 2D line segment is first re-sampled into a list of evenly
spaced points of the line segment. The position of each
point is perturbed by varying levels of noise of uniform
distribution. The final perturbed line is obtained by a least
squares fit to the perturbed point data.
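The perturbation procedure can be sketched as follows (our own minimal version; the least squares line fit is implemented here as a total least squares fit via the Svd):

```python
import numpy as np

def perturb_segment(p0, p1, noise, n_pts, rng):
    """Re-sample a 2D segment into n_pts evenly spaced points, perturb
    each coordinate with uniform noise in [-noise, +noise], and refit
    a line to the perturbed points.  Returns a point on the fitted
    line (the centroid) and the unit direction of the fit."""
    t = np.linspace(0.0, 1.0, n_pts)[:, None]
    pts = (1.0 - t) * np.asarray(p0, float) + t * np.asarray(p1, float)
    pts = pts + rng.uniform(-noise, noise, size=pts.shape)
    centroid = pts.mean(axis=0)
    _, _, Vt = np.linalg.svd(pts - centroid)
    return centroid, Vt[0]   # direction = dominant singular vector
```

Using more sample points per segment averages out the per-point noise, which is why the higher re-sample rate in Table I yields lower residuals than the quarter-rate fit in Table II.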
Reconstruction is performed with 21 line segments and two
different re-sample rates. The average residual error is defined
to be the average distance of the midpoint of the
image line segment to the reprojected line in the image
plane from the 3D reconstructed line. In Table I, the average
residual errors of reconstruction are given with various
noise levels. The number of points used to fit the line
is the length of the line segment in pixels; this re-sample
rate corresponds roughly to the digitization process. Table
II shows the results with the number of points used
to fit the line equal to only one fourth the length of the
line segment. We notice that the degradation with
increasing noise level is very graceful and that the reconstruction
results remain acceptable with up to ±5.5 pixels of noise.
These good results show that the reconstruction algorithm
is numerically stable. Comparing Tables I and II shows that
a higher re-sample rate gives better results; this
confirms the importance of the line fitting procedure, the
key advantage of line features over point features.
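For reference, the residual measure used in these tables can be computed as an average point-to-line distance in the image (sketch; a homogeneous line representation (a, b, c) with a x + b y + c = 0 is assumed):

```python
import numpy as np

def average_residual(midpoints, lines):
    """Average distance from segment midpoints to reprojected image
    lines, each line given in homogeneous form (a, b, c)."""
    total = 0.0
    for (x, y), (a, b, c) in zip(midpoints, lines):
        total += abs(a * x + b * y + c) / np.hypot(a, b)
    return total / len(midpoints)
```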
Another influential factor for the stability of the algorithm
is the number of lines used. Table III confirms that the
more lines used, the better the results obtained. In this
test, the pixel error is set to ±1.5.
    Lines #                 8     13    17    21
    Average residual error  1.9   1.6   0.59  0.26

TABLE III
Average residual errors of reconstruction with ±1.5 pixel
noise and various numbers of lines.
B. The experiment with real images
A Fujinon/Photometrics CCD camera is used to acquire a
sequence of images of a box of size 12 × 12 × 12.65 cm. The
image resolution is 576 × 384. Three of the frames in the
sequence used by the experiments are shown in Figure 1.
A Canny-like edge detector is first applied to each image.
The contour points are then linked and fitted to line segments
by least squares. The line correspondences across
three views are selected by hand. There are a total of 46
lines selected, as shown in Figure 2.
Fig. 2. Line segments selected across the image sequence.
The reconstruction algorithm generates infinite 3D lines,
each defined by two arbitrary points on it. 3D line segments
are obtained as follows. We reproject 3D lines into
one image plane. In the image plane selected, the corresponding
original image line segments are orthogonally projected
onto the reprojected lines to obtain the reprojected
line segments. Finally by back-projecting the reprojected
line segments to space, we obtain the 3D line segments,
each defined by its two endpoints.
Excellent reconstruction results are obtained. An average
residual error of one tenth of a pixel is achieved. Figure 3
shows two views of the reconstructed 3D line segments.
We notice that the affine structure of the box is almost
perfectly recovered.
    Average residual error  0.045  0.061  0.10  0.15  0.20  0.25

TABLE I
Average residual errors with various noise levels for the reconstruction with 21 lines over three views. The number of
points used to fit the line is the length of the line segment in pixels.

    Average residual error  0.077  0.26  0.31  0.44  0.65  1.1

TABLE II
Average residual errors of reconstruction with various noise levels. The number of points used to fit the line segment is one
fourth the length of the line segment.

Fig. 1. Three original images of the box used for the experiments.

Table IV shows the influence of the number of line segments
used by the algorithm. The reconstruction results degrade
gracefully with decreasing number of lines.
    Average residual error  1.3  0.88  0.28  0.12

TABLE IV
Residual errors of reconstruction with different
numbers of line segments.
Table V shows the influence of the distribution of line segments
in space. For instance, one degenerate case for structure
from motion occurs when all line segments in space lie
on the same plane. Actually, in our images, line segments
lie on three different planes of the box: the pentagon face,
the star-shape face and the rectangle face. We also performed experiments
with line segments lying on only two planes. Table V
shows the results with various two-plane configurations.
Compared with the three-plane configuration, the
reconstruction algorithm does almost equally well.
To illustrate the effect of using the affine camera model as an
approximation to the perspective camera, we used a bigger
cube of size 30 × 30 × 30 cm, which is two and a half times
the size of the first, smaller cube. The affine approximation
to the perspective camera thus becomes less accurate
than it was with the smaller cube. A sequence of images
of this cube is acquired in almost the same conditions as for
the smaller cube. The perspective effect of the big cube
is slightly more pronounced as shown in Figure 4. The
configuration of line segments is preserved. A total of 39
line segments of three views is used to perform the reconstruction.
Figure 5 illustrates two reprojected views of the
reconstructed 3D line segments. Compared with Figure 3,
the reconstruction is slightly degraded: in the top view of
Figure 5, we notice that one segment falls a little apart
from the pentagon face of the cube. Globally, the degradation
is quite graceful as the average residual error is only
0.3 pixels, compared with 0.12 pixels for the smaller cube.
The affine structures obtained can be converted to Euclidean
ones (up to a global scaling factor) as soon as we
know the aspect ratio [22], which is actually 1 for the camera
used. Figure 6 shows the rectified affine shape illustrated
in Figure 3. The two sides of the box are accurately
orthogonal to each other.
XIII. Discussion
A linear line-based structure from motion algorithm for
uncalibrated affine cameras has been presented. The algorithm
requires a minimum of seven line correspondences
over three views. It has also been proven that seven lines
over three views are the strict minimum data needed to recover
affine structure with uncalibrated affine cameras. In
other words, in contrast to projective cameras, the linear
algorithm is not based on the over-parametrization. This
gives the algorithm intrinsic stability. The previous results
of Koenderink and Van Doorn [14] on affine structure from
motion using point correspondences are therefore extended
to line correspondences. To handle the case of redundant
views, a factorisation method was also developed. The experimental
results based on real and simulated image sequences
demonstrate the accuracy and the stability of the
algorithms.
As the algorithms presented in this paper are developed
within the same framework as suggested in [22] for points,
it is straightforward to integrate both points and lines into
the same framework.
    Line configuration      star+rect.+pent.  star+rect.  pent.+rect.  star+pent.
    Average residual error  0.12              0.078       0.14         0.28

TABLE V
Residual errors of reconstruction with different data.
Fig. 3. Reconstructed 3D line segments: a general view and a top
view.
Fig. 4. One original image of the big cube image sequence.
Fig. 5. Two views of the reconstructed line segments for the big box:
a general view and a top view.
Fig. 6. A side view of the Euclidean shape obtained by using the
known aspect ratio of the camera.
Acknowledgement
This work was supported by CNRS and the French Ministère
de l'Éducation, which is gratefully acknowledged. I
would like to thank D. Morris, N. Chiba and B. Triggs for
their help during the development of this work.
References
"Deter- mination of the attitude of 3D objects from sigle perspective view"
Stereovision and Sensor Fusion
"Estimation of rigid body motion using straight line correspondences"
"A linear algorithm for motion estimation using straight line correspondences"
"Motion and structure from point and line matches"
3D Dynamic Scene Analysis
"Motion and structure from line correspondences: Closed-form solution, uniqueness, and op- timization"
"Optimal estimation of object pose from a single perspective view"
"Motion of points and lines in the uncalibrated case"
"A unified theory of structure from motion"
"A computer program for reconstructing a scene from two projections"
"Projective reconstruction from line correspon- dences"
The Interpretation of Visual Motion
"Affine structure from mo- tion"
"Affine shape representation from motion through reference points"
"Finding point correspondences and determining motion of a rigid object from two weak perspective views"
"Recursive affine structure and motion from image sequences"
"Shape and motion from image streams under orthography: A factorization method"
"Linear and incremental acquisition of invariant shape models from image sequences"
"3D motion recovery via affine epipolar geometry"
"A paraperspective factorization method for shape and motion recovery"
"Self-calibration of an affine camera from multiple views"
"Object pose: the link between weak perspective, para perspective, and full perspec- tive"
PhD thesis
"Recognition by linear combinations of models"
"A factorization method for affine structure from line correspondences"
Geometric Invariance in Computer Vision
"Obtaining surface orientation from texels under perspective projection"
"Perspective approximations"
"What can be seen in three dimensions with an un-calibrated stereo rig?"
"Stereo from uncalibrated cameras"
Matrice fondamentale et autocalibration en vision par ordinateur
"Canonic representations for the geometries of multiple projective views"
"Relative 3D reconstruction using multiple uncalibrated images"
"Invariants of six points and projective reconstruction from three uncalibrated images"
"Matching constraints and the joint image"
"A factorization based algorithm for multi-image projective structure and motion"
"On the geometry and algebra of the point and line correspondences between n images"
"Lines and points in three views - an integrated approach"
"Algebraic functions for recognition"
"Structure from motion using line correspondences"
"Dual computation of projective shape and camera positions from multiple images"
"Ac- tive visual navigation using non-metric structure"
Algorithms in Invariant Theory
Matrix Computation
Numerical Recipes in C
A Sequential Factorization Method for Recovering Shape and Motion From Image Streams

Abstract: We present a sequential factorization method for recovering the three-dimensional shape of an object and the motion of the camera from a sequence of images, using tracked features. The factorization method originally proposed by Tomasi and Kanade produces robust and accurate results incorporating the singular value decomposition. However, it is still difficult to apply the method to real-time applications, since it is based on a batch-type operation and the cost of the singular value decomposition is large. We develop the factorization method into a sequential method by regarding the feature positions as a vector time series. The new method produces estimates of shape and motion at each frame. The singular value decomposition is replaced with an updating computation of only three dominant eigenvectors, which can be performed in O(P^2) time, while the complete singular value decomposition requires O(FP^2) operations for an F × P matrix. Also, the method is able to handle infinite sequences, since it does not store any increasingly large matrices. Experiments using synthetic and real images illustrate that the method has nearly the same accuracy and robustness as the original method.

1. Introduction
Recovering both the 3D shape of an object and the motion of
the camera simultaneously from a stream of images is an
important task and has wide applicability in many tasks such
as navigation and robot manipulation. Tomasi and Kanade[1]
first developed a factorization method to recover shape and
motion under an orthographic projection model, and obtained
robust and accurate results. Poelman and Kanade[2] have
extended the factorization method to scaled-orthographic projection
and paraperspective projection. This method closely
approximates perspective projection in most practical situations
so that it can deal with image sequences which contain
perspective distortions.
Although the factorization method is a useful technique, its
applicability is so far limited to off-line computations for the
following reasons. First, the method is based on a batch-type
computation; that is, it recovers shape and motion after all the
input images are given. Second, the singular value decomposition,
which is the most important procedure in the method,
requires O(FP^2) operations for P features in F frames.
Finally, it needs to store a large measurement matrix whose
size increases with the number of frames. These drawbacks
make it difficult to apply the factorization method to real-time
applications.
This report presents a sequential factorization method that
considers the input to the system as a vector time series of
feature positions. The method produces estimates of shape
and motion at each input frame. A covariance-like matrix is
stored instead of feature positions, and its size remains constant
as the number of frames increases. The singular value
decomposition is replaced with a computation, updating only
three dominant eigenvectors, which can be performed in
O(P^2) time. Consequently, the method becomes recursive.
We first briefly review the factorization method by Tomasi
and Kanade. We then present our sequential factorization
method in Section 3. The algorithm's performance is tested
using synthetic data and real images in Section 4.
2. Theory of the Factorization Method: Review
2.1 Formalization
The input to the factorization method is a measurement
matrix W, representing image positions of tracked features
over multiple frames. Assuming that there are P features tracked over
F frames, and letting (x_fp, y_fp) be the image position of feature p
at frame f, W is a 2F × P matrix such that
Proceedings of 1994 ARPA Image Understanding Workshop, November, 1994, Monterey CA Vol II pp. 1177-1188
    W = ( x_11  ...  x_1P )
        (  .          .   )
        ( x_F1  ...  x_FP )
        ( y_11  ...  y_1P )
        (  .          .   )
        ( y_F1  ...  y_FP ).    (1)
Each column of W contains all the observations for a single
point, while each row contains all the observed x-coordinates
or y-coordinates for a single frame.
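In code, assembling W from tracked features is a single stacking step (NumPy sketch; array names are ours):

```python
import numpy as np

def build_measurement_matrix(x, y):
    """Stack the F x P arrays of tracked x- and y-coordinates into the
    2F x P measurement matrix W: one column per feature, one row per
    frame coordinate."""
    return np.vstack([np.asarray(x, float), np.asarray(y, float)])
```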
Suppose that the camera orientation at frame f is represented
by orthonormal vectors i_f, j_f, and k_f, where i_f corresponds to
the x-axis of the image plane and j_f to the y-axis. The vectors
i_f and j_f are collected over the F frames into a motion matrix
M such that

    M = ( i_1  ...  i_F  j_1  ...  j_F )^T.    (2)
Let s_p be the location of feature p in a fixed world coordinate
system, whose origin is set at the center-of-mass of all the
feature points. These vectors are then collected into a shape
matrix S such that

    S = ( s_1  ...  s_P ).    (3)

Note that

    Σ_p s_p = 0.    (4)
With this notation, the following equation holds by assuming
an orthographic projection:

    W = M S.    (5)

Tomasi and Kanade[1] pointed out the simple fact that the
rank of W is at most 3, since it is the product of the 2F × 3
motion matrix M and the 3 × P shape matrix S. Based on
this rank theory, they developed a factorization method that
robustly recovers the matrices M and S from W.
2.2 Subspace Computation
The actual procedure of the factorization method consists of
two steps. First, the measurement matrix $W$ is factorized into
two matrices of rank 3 using the singular value decomposition.
Assume, without loss of generality, that $2F \geq P$. By
computing the singular value decomposition of $W$,
we can obtain orthogonal matrices $U$ and $V$
such that

$$W = U \Sigma V^T . \quad (6)$$

In reality,
the rank of $W$ is not exactly 3, but approximately 3. $U'$ is
made from the first three columns of the left singular matrix
of $W$. Likewise, $\Sigma'$ consists of the first three singular values,
and $V'^T$ is made from the first three rows of the right singular
matrix. By setting

$$\hat{M} = U' \Sigma'^{1/2} \quad \text{and} \quad \hat{S} = \Sigma'^{1/2} V'^T , \quad (7)$$

we can factorize $W$ into

$$W \approx \hat{M} \hat{S} , \quad (8)$$

where the product $\hat{M}\hat{S}$ is the best possible rank-three approximation
to $W$.
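As a concrete illustration (not from the paper), the rank-3 factorization of equations (7) and (8) can be sketched with NumPy. The variable names `W`, `M_hat`, `S_hat` mirror the text's $W$, $\hat{M}$, $\hat{S}$; the synthetic motion and shape matrices are stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)
F, P = 30, 20                         # frames and feature points
M = rng.standard_normal((2 * F, 3))   # stand-in motion matrix (2F x 3)
S = rng.standard_normal((3, P))       # stand-in shape matrix (3 x P)
W = M @ S                             # noise-free measurement matrix, rank <= 3

# SVD of W; keep the three dominant singular triplets.
U, sigma, Vt = np.linalg.svd(W, full_matrices=False)
U3, s3, Vt3 = U[:, :3], sigma[:3], Vt[:3, :]

# Split the singular values evenly between the two factors.
M_hat = U3 * np.sqrt(s3)              # U' Sigma'^(1/2)
S_hat = np.sqrt(s3)[:, None] * Vt3    # Sigma'^(1/2) V'^T

# M_hat @ S_hat is the best rank-3 approximation of W (here exact,
# because the synthetic W is noise-free and so exactly rank 3).
err = np.linalg.norm(W - M_hat @ S_hat)
```

With noisy tracking data the residual `err` would be nonzero but small whenever the rank theory holds.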
It is well known that the left singular vectors span the column
space of $W$ while the right singular vectors span its
row space. The span of $\hat{M}$, namely the motion space, determines
the motion, and the span of $\hat{S}$, namely the shape space, determines
the shape. The rank theory claims that the dimension of
each subspace is at most three, and the first step of the factorization
method finds those subspaces in the high dimensional
input spaces. Both spaces are said to be dual in the sense that
one of them can be computed from the other. This observation
helps us to further develop the sequential factorization
method.
2.3 Metric Transformation
The decomposition of equation (8) is not completely unique:
it is unique only up to an affine transformation. The second
step of the method is necessary to find a non-singular $3 \times 3$
matrix $A$, which transforms $\hat{M}$ and $\hat{S}$ into the true solutions
$M$ and $S$ as follows:

$$M = \hat{M} A , \quad (9)$$
$$S = A^{-1} \hat{S} . \quad (10)$$

Observing that the rows $i_f$ and $j_f$ of $M$ must satisfy the normalization
constraints

$$|i_f|^2 = |j_f|^2 = 1 \quad \text{and} \quad i_f \cdot j_f = 0 , \quad (11)$$

we obtain a system of overdetermined equations such
that
$$\hat{i}_f^T L \,\hat{i}_f = 1 , \quad \hat{j}_f^T L \,\hat{j}_f = 1 , \quad \hat{i}_f^T L \,\hat{j}_f = 0 , \qquad f = 1, \ldots, F , \quad (12)$$
where $L = A A^T$ is a $3 \times 3$ symmetric matrix

$$L = \begin{bmatrix} l_1 & l_2 & l_3 \\ l_2 & l_4 & l_5 \\ l_3 & l_5 & l_6 \end{bmatrix} ,$$

and $\hat{i}_f$ and $\hat{j}_f$ are the rows of $\hat{M}$. By denoting the six distinct
entries of $L$ by $l = (l_1, \ldots, l_6)^T \in R^6$,
the system (12) can be rewritten as

$$G\,l = c , \quad (13)$$

where $G \in R^{3F \times 6}$ and $c \in R^{3F}$ are defined by collecting the
coefficients of the constraints (12) over all frames.
The simplest solution of the system is given by the pseudo-inverse
method such that

$$l = (G^T G)^{-1} G^T c . \quad (18)$$
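The pseudo-inverse solution of equation (18) is an ordinary linear least-squares problem. A minimal sketch follows; the matrices `G` and `c` here are random stand-ins for the ones built from the rows of $\hat{M}$:

```python
import numpy as np

rng = np.random.default_rng(1)
F = 25
G = rng.standard_normal((3 * F, 6))   # stand-in coefficient matrix, 3F x 6
l_true = rng.standard_normal(6)       # stand-in ground-truth entries of L
c = G @ l_true                        # consistent right-hand side

# Equation (18): l = (G^T G)^{-1} G^T c. np.linalg.lstsq evaluates the
# same pseudo-inverse solution in a numerically preferred way.
l = np.linalg.lstsq(G, c, rcond=None)[0]

# Rebuild the symmetric matrix L from its six distinct entries.
l1, l2, l3, l4, l5, l6 = l
L = np.array([[l1, l2, l3],
              [l2, l4, l5],
              [l3, l5, l6]])
```

In the actual method the eigendecomposition of `L` would then yield the affine transform.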
The vector $l$ determines the symmetric matrix $L$, whose
eigendecomposition gives $A$. As a result, the motion and
the shape are derived according to equations (9) and (10).
The matrix $A$ is an affine transform which transforms $\hat{M}$ into
$M$ in the motion space, while the matrix $A^{-1}$ transforms $\hat{S}$ into $S$
in the shape space. Obtaining this transform is the main
purpose of the second step of the factorization method, which
we call metric transformation.
3. A Sequential Factorization Method
3.1
Overview
In the original factorization method, there was one measurement
matrix containing tracked feature positions throughout
the image sequence. After all the input images are given
and the feature positions are collected into the matrix $W$, the
motion and shape are then computed. In real-time applications,
however, it is not feasible to use this batch-type
scheme. It is more desirable to obtain an estimate at each
moment sequentially. The input to the system must be viewed
as a vector time series. At frame $f$, two vectors containing
feature positions such that

$$x_f = (x_{f1}, \ldots, x_{fP})^T \quad \text{and} \quad y_f = (y_{f1}, \ldots, y_{fP})^T \quad (19)$$

are given. Immediately after receiving these vectors, the system
must compute the estimates of the camera coordinates $i_f$,
$j_f$ and the shape at that frame. At the next frame, new samples
$x_{f+1}$ and $y_{f+1}$ arrive, and new camera coordinates $i_{f+1}$
and $j_{f+1}$ are to be computed as well as an updated
shape estimate $S_{f+1}$.
The key to developing such a sequential method is to observe
that the shape does not change over time. The shape space is
stationary, and thus, it should be possible to derive the estimate at
frame $f+1$ from that at frame $f$ without performing expensive computations.
More specifically, we store the feature vectors $x_f$ and $y_f$ in a
covariance-type matrix $Z_f$ defined recursively by

$$Z_f = Z_{f-1} + x_f x_f^T + y_f y_f^T . \quad (20)$$
As shown later, the rank of $Z_f$ is at most three and its three
dominant eigenvectors span the shape space. Once the matrix $Q_f$ of
dominant eigenvectors is obtained, the camera coordinates at frame $f$ can be computed
simply by multiplying the feature vectors and the eigenvectors
as follows:

$$\hat{i}_f^T = x_f^T Q_f \quad \text{and} \quad \hat{j}_f^T = y_f^T Q_f . \quad (21)$$

This framework makes it possible to estimate camera coordinates
immediately after receiving feature vectors at each
frame. All information obtained up to the frame is accumulated
in $Z_f$ and used to produce the estimates at that frame.
In equation (20), the size of $Z_f$ is fixed to $P \times P$, which only
depends on the number of feature points. Therefore, the algorithm
does not need to store any matrices whose sizes
increase over time.
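The fixed-size bookkeeping of equation (20) is easy to see in code. This sketch (all variable names are mine) feeds noise-free synthetic frames into the recursion and checks that the accumulated matrix stays $P \times P$ with rank at most three:

```python
import numpy as np

rng = np.random.default_rng(2)
P = 15
S = rng.standard_normal((3, P))       # stand-in shape matrix

Z = np.zeros((P, P))                  # Z_0 = 0; the size never changes
for f in range(50):                   # 50 frames
    i_f = rng.standard_normal(3)      # stand-in camera axes
    j_f = rng.standard_normal(3)
    x_f = S.T @ i_f                   # noise-free feature vectors
    y_f = S.T @ j_f
    Z += np.outer(x_f, x_f) + np.outer(y_f, y_f)   # equation (20)

# Every x_f and y_f lies in the 3-D row space of S, so rank(Z) <= 3.
rank = np.linalg.matrix_rank(Z)
```

However many frames arrive, only the $P \times P$ matrix `Z` is stored.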
The computational effort in the original factorization method
is dominated by the cost of the singular value decomposition.
In the framework above, we need to compute eigenvectors of
$Z_f$. Note, however, that we only need the first three dominant
eigenvectors. Fortunately, several methods exist to compute
only the dominant eigenvectors with much less computation
necessary to compute all the eigenvectors. Before describing
the details of our algorithm, we briefly review these techniques
in the following section.
3.2 Iterative Eigenvector Computation
Among the existing methods which can compute dominant
eigenvectors of a square matrix, we introduce two methods,
the power method and orthogonal iteration[3]. The power
method is the simplest, which computes the most dominant
eigenvector, i.e., an eigenvector associated with the largest
eigenvalue. It provides the starting point for most other techniques,
and is easy to understand. The method of orthogonal
iteration, which we adopt in our method, is able to compute
several dominant eigenvectors.
3.2.1 Power Method
Assume that we want to compute the most dominant eigenvector
of an $n \times n$ matrix $A$. Given a unit 2-norm vector
$q_0 \in R^n$, the power method iteratively computes a
sequence of vectors $q_k$:

for $k = 1, 2, \ldots$
  $z_k = A q_{k-1}$
  $q_k = z_k / \|z_k\|_2$

The second step of the iteration is simply a normalization that
prevents $q_k$ from becoming very large or very small. The
vectors $q_k$ generated by the iteration converge to the most
dominant eigenvector of $A$. To examine the convergence
property of the power method, suppose that $A$ is diagonalizable.
That is, $X^{-1} A X = \mathrm{diag}(\lambda_1, \ldots, \lambda_n)$ with an orthogonal
matrix $X = [x_1, \ldots, x_n]$, and $|\lambda_1| > |\lambda_2| \geq \cdots \geq |\lambda_n|$. If
$q_0 = a_1 x_1 + \cdots + a_n x_n$ and $a_1 \neq 0$, then $q_k$ approaches
$c\,x_1$ up to terms of order $|\lambda_2/\lambda_1|^k$,
where $c$ is a constant. Since $|\lambda_j / \lambda_1| < 1$ for $j \geq 2$, this
shows that the vectors $q_k$ point more and more accurately
toward the direction of the dominant eigenvector $x_1$,
and the convergence factor is the ratio $|\lambda_2 / \lambda_1|$.
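A direct transcription of the power iteration above, as an illustrative sketch (the test matrix and iteration count are mine):

```python
import numpy as np

def power_method(A, iterations=200, seed=0):
    """Return an estimate of the dominant eigenvector of A."""
    rng = np.random.default_rng(seed)
    q = rng.standard_normal(A.shape[0])
    q /= np.linalg.norm(q)            # unit 2-norm start vector q_0
    for _ in range(iterations):
        z = A @ q                     # z_k = A q_{k-1}
        q = z / np.linalg.norm(z)     # normalization step
    return q

# Symmetric test matrix with known dominant eigenvector e_1.
A = np.diag([5.0, 2.0, 1.0])
q = power_method(A)
# q should align with e_1 = (1, 0, 0) up to sign; the convergence
# factor here is |lambda_2 / lambda_1| = 2/5 per iteration.
alignment = abs(q[0])
```

If the start vector happened to have no component along the dominant eigenvector ($a_1 = 0$), the iteration would not converge to it; a random start makes that event vanishingly unlikely.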
3.2.2 Orthogonal Iteration
A straightforward generalization of the power method can be
used to compute several dominant eigenvectors of a symmetric
matrix. Assume that we want to compute the $p$ dominant
eigenvectors of a symmetric $n \times n$ matrix $A$, where
$p \leq n$. Starting with an $n \times p$ matrix $Q_0$ with orthonormal
columns, the method of orthogonal iteration generates a
sequence of matrices $Q_k$:

for $k = 1, 2, \ldots$
  $Z_k = A Q_{k-1}$
  $Q_k R_k = Z_k$ (QR factorization)

The second step of the above iteration is the QR factorization
of $Z_k$, where $Q_k$ is a matrix with orthonormal columns and $R_k$ is an upper
triangular matrix. The QR factorization can be achieved by
the Gram-Schmidt process. This step is viewed as a normalization
process that is similar to the normalization used in the
power method.

Suppose that $X^T A X = \mathrm{diag}(\lambda_1, \ldots, \lambda_n)$ is the eigendecomposition
of $A$ with an orthogonal matrix $X = [x_1, \ldots, x_n]$,
and $|\lambda_1| \geq \cdots \geq |\lambda_n|$. It has been shown in [3] that the
subspace $\mathrm{range}(Q_k)$ generated by the iteration converges
to $\mathrm{span}\{x_1, \ldots, x_p\}$ at a rate proportional to $|\lambda_{p+1} / \lambda_p|^k$,
i.e.,

$$\mathrm{dist}(\mathrm{range}(Q_k), \mathrm{span}\{x_1, \ldots, x_p\}) \leq c\,|\lambda_{p+1}/\lambda_p|^k ,$$

where $c$ is a constant. The function
dist represents the subspace distance defined by

$$\mathrm{dist}(S_1, S_2) = \| P_1 - P_2 \|_2 ,$$

where $P_i$ denotes the orthogonal projection onto $S_i$.
The method offers an attractive alternative to the singular
value decomposition in situations where $A$ is a large matrix
and only a few of its largest eigenvalues are needed. In our case,
these conditions clearly hold. In addition, the rank theory of
the factorization method [1] guarantees that the ratio $|\lambda_4 / \lambda_3|$
is very small, and as a result, we should achieve fast convergence
for computing the first three eigenvectors.
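Orthogonal iteration can be sketched the same way; here `np.linalg.qr` plays the role of the Gram-Schmidt normalization step (the test matrix and subspace-distance check are mine):

```python
import numpy as np

def orthogonal_iteration(A, p, iterations=200, seed=0):
    """Return an orthonormal basis whose range approximates the
    dominant p-dimensional invariant subspace of a symmetric A."""
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.standard_normal((A.shape[0], p)))  # Q_0
    for _ in range(iterations):
        Z = A @ Q                     # Z_k = A Q_{k-1}
        Q, _ = np.linalg.qr(Z)        # Q_k R_k = Z_k (QR factorization)
    return Q

A = np.diag([9.0, 7.0, 5.0, 0.1, 0.05])
Q = orthogonal_iteration(A, p=3)

# range(Q) should match span{e1, e2, e3}; compare the two orthogonal
# projection matrices, i.e. the subspace distance dist(S1, S2).
P_est = Q @ Q.T
P_true = np.diag([1.0, 1.0, 1.0, 0.0, 0.0])
subspace_dist = np.linalg.norm(P_est - P_true, 2)
```

The convergence factor here is $|\lambda_4/\lambda_3| = 0.1/5$, so the subspace distance shrinks very quickly, mirroring the fast convergence argued for in the text.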
3.3 Sequential Factorization Algorithm
As in the original method, the sequential factorization method
consists of two steps, sequential shape space computation and
sequential metric transformation.
3.3.1 A Sequential Shape Space Computation
In the sequential factorization method, we consider the feature
vectors $x_f$ and $y_f$ as a vector time series. Let us denote
the measurement matrix in the original factorization method
at frame $f$ by $W_f$. Then, it grows in the following manner:

$$W_1 = \begin{bmatrix} x_1^T \\ y_1^T \end{bmatrix} , \quad W_2 = \begin{bmatrix} x_1^T \\ x_2^T \\ y_1^T \\ y_2^T \end{bmatrix} , \quad \ldots , \quad W_f = \begin{bmatrix} x_1^T \\ \vdots \\ x_f^T \\ y_1^T \\ \vdots \\ y_f^T \end{bmatrix} . \quad (26)$$
Now let us define a matrix time series $Z_f$ by

$$Z_f = Z_{f-1} + x_f x_f^T + y_f y_f^T , \quad Z_0 = 0 . \quad (27)$$

From the definition, it follows that

$$Z_f = W_f^T W_f . \quad (28)$$

Since the rank of $W_f$ is at most three, the rank of $Z_f$ is also at
most three. If

$$W_f = U_f \Sigma_f V_f^T \quad (29)$$

is the singular value decomposition of $W_f$, where $U_f$
and $V_f$ are orthogonal matrices and $\Sigma_f$ is the diagonal matrix of
singular values, then

$$Z_f = V_f \Sigma_f^2 V_f^T . \quad (30)$$

This means the eigenvectors of $Z_f$ are equivalent to the right
singular vectors of $W_f$. Hence, it is possible to obtain the
shape space by computing the eigenvectors of $Z_f$.
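The relation between equations (27) and (30) can be checked numerically. This sketch (variable names are mine) builds the recursion and verifies that it equals $W_f^T W_f$, whose eigenvectors are the right singular vectors of $W_f$:

```python
import numpy as np

rng = np.random.default_rng(3)
P, frames = 12, 20
S = rng.standard_normal((3, P))       # stand-in shape

rows = []
Z = np.zeros((P, P))
for _ in range(frames):
    x = S.T @ rng.standard_normal(3)  # noise-free feature vectors
    y = S.T @ rng.standard_normal(3)
    rows.extend([x, y])
    Z += np.outer(x, x) + np.outer(y, y)   # recursion of equation (27)

# Row order does not matter for W^T W, so stacking x's and y's
# interleaved still gives the same Gram matrix.
W = np.array(rows)                    # 2f x P measurement matrix

# Z_f = W_f^T W_f, as in equation (28).
check = np.linalg.norm(Z - W.T @ W)
```

Because every row of `W` lies in the 3-D row space of `S`, both `Z` and `W` have rank at most three, as the text claims.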
To compute the eigenvectors, we combine orthogonal iteration with updating
by equation (27). Given a $P \times 3$ matrix $Q_0$ with orthonormal
columns and a $P \times P$ null matrix $Z_0$, the following
algorithm generates a sequence of matrices $Q_f$:

[Algorithm (1)] for $f = 1, 2, \ldots$
(1) $Z_f = Z_{f-1} + x_f x_f^T + y_f y_f^T$
(2) $Q_f R_f = Z_f Q_{f-1}$ (QR factorization)
The index $f$ corresponds to the frame number and each iteration
is performed frame by frame. The matrix $Q_f$ generated
by the algorithm is expected to converge to the dominant eigenvectors
of $Z_f$. While the original orthogonal iteration works with a
fixed matrix, the above algorithm works with the matrix $Z_f$,
which varies from iteration to iteration incorporating new features.
In other words, the sequential factorization method
folds the update of $Z_f$ into the orthogonal iteration. If the
range of $Z_f$ randomly changes over time, no convergence is
expected to appear. However, it can be shown that

$$\mathrm{range}(Z_f) = \mathrm{range}(S^T) , \quad \text{for all } f . \quad (31)$$

Therefore, $\mathrm{range}(Z_f)$ is stationary and $\mathrm{range}(Q_f)$ converges
to $\mathrm{range}(S^T)$ as in the orthogonal iteration. Even
when noise exists, if the noise is uncorrelated or the noise
space is orthogonal to the signal space $\mathrm{range}(S^T)$, then
$\mathrm{range}(Z_f)$ is still equal to $\mathrm{range}(S^T)$ and the convergence
can be shown. The following convergence rate of the algorithm
is deduced from the convergence rate of the orthogonal
iteration.
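Algorithm (1) folds the update and the orthogonal-iteration step into one per-frame loop. A sketch under the same noise-free synthetic setup (all names are mine, and the subspace comparison at the end is my own check, not part of the algorithm):

```python
import numpy as np

rng = np.random.default_rng(4)
P = 10
S = rng.standard_normal((3, P))       # stand-in shape; range(S^T) is the shape space

Q, _ = np.linalg.qr(rng.standard_normal((P, 3)))   # Q_0, orthonormal columns
Z = np.zeros((P, P))                               # Z_0 = 0

for f in range(100):                  # one iteration per frame
    x = S.T @ rng.standard_normal(3)  # noise-free feature vectors
    y = S.T @ rng.standard_normal(3)
    Z += np.outer(x, x) + np.outer(y, y)           # step (1)
    Q, _ = np.linalg.qr(Z @ Q)                     # step (2): Q_f R_f = Z_f Q_{f-1}

# Compare range(Q) with the true shape space range(S^T) via the
# subspace distance between the two projection matrices.
Qs, _ = np.linalg.qr(S.T)
dist = np.linalg.norm(Q @ Q.T - Qs @ Qs.T, 2)
```

Because the noise-free `Z` has exactly the shape space as its range, `range(Q)` locks onto it after only a few frames.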
3.3.2 Stationary Basis for the Shape Space
Algorithm (1) presented in the previous section produces the
matrix $Q_f$, which converges to a matrix that spans the
shape space. The true shape and motion are determined from
the shape space by a metric transformation. It is not straightforward
at this point, however, to apply the metric transformation
sequentially. The problem is that, even though $\mathrm{range}(Q_f)$
is stationary, the matrix $Q_f$ itself changes as the
number of frames increases. This is due to the nature of singular
vectors. They are the basis for the row and column subspaces
of a matrix, and the singular value decomposition
chooses them in a special way. They are more than just
orthonormal. As a result, they rotate in the 3D subspace
$\mathrm{range}(S^T)$. Recall that the matrix $A$ obtained in the metric
transformation (9) is a transform from $\hat{M}$ (or $\hat{S}$) to $M$ (or $S$) in
the subspace. Since $Q_f$ changes at each frame, $\hat{S}_f$
also changes. Consequently, the matrix $A$ also changes
frame by frame.

For clarity, let us denote the matrix $A$ at frame $f$ as $A_f$. The
fact that $A_f$ changes at each frame makes it difficult to estimate
$A_f$ iteratively and efficiently. Thus we need to add an
additional process that obtains a stationary basis for the shape
space in order to update the matrix $A_f$.
Let us define a projection matrix onto $\mathrm{range}(Q_f)$
by

$$P_f = Q_f Q_f^T , \quad (32)$$

where $Q_f$ is the output from Algorithm (1). Needless to say,
the rank of $P_f$ is at most three. Since $\mathrm{range}(Q_f)$
(= $\mathrm{range}(S^T)$) is stationary, the projection matrix must
be stationary. It is thus possible to obtain the stationary basis
for the shape space by replacing $Q_f$ with the eigenvectors of
$P_f$.

An iterative method similar to Algorithm (1) can be used to
reduce the amount of computation. Given a $P \times 3$ matrix $\bar{Q}_0$
with orthonormal columns, the iterative method below generates
a matrix $\bar{Q}_f$, which provides the stationary basis
for the shape space.
[Algorithm (2)] for $f = 1, 2, \ldots$
(1) $\bar{Z}_f = Q_f Q_f^T \bar{Q}_{f-1}$
(2) $\bar{Q}_f \bar{R}_f = \bar{Z}_f$ (QR factorization)
3.3.3 Sequential Metric Transformation
In the previous section, we derived the shape space in terms of
$\bar{Q}_f$. Once $\bar{Q}_f$ is obtained, it is possible to compute camera
coordinates $\hat{i}_f$ and $\hat{j}_f$ as

$$\hat{i}_f^T = x_f^T \bar{Q}_f \quad \text{and} \quad \hat{j}_f^T = y_f^T \bar{Q}_f . \quad (35)$$

These coordinates are used to solve the overdetermined equations
(12), and the true camera coordinates are recovered in
the same way as in the original method. Doing so, however,
requires storing all the coordinates $\hat{i}_f$ and $\hat{j}_f$, the number of
which may be very large. Instead, we use the following
sequential algorithm.

[Algorithm (3)] for $f = 1, 2, \ldots$

Let $D_f$ and $E_f$ be the matrices $G^T G$ and $G^T c$ at frame $f$, where $G$
and $c$ are defined in Section 2.3. From the definition, it follows
that

$$D_f = D_{f-1} + \Delta G_f^T \Delta G_f \quad \text{and} \quad E_f = E_{f-1} + \Delta G_f^T \Delta c_f , \quad (36)$$

where $\Delta G_f$ and $\Delta c_f$ contain the rows of $G$ and $c$ contributed by
frame $f$.
Assigning equations (35) and (36) to equation (18), we have

$$l_f = D_f^{-1} E_f , \quad (37)$$

which gives the symmetric matrix $L_f$. The eigendecomposition
of $L_f$ yields the affine transform $A_f$ and, as a result, the
camera coordinates and the shape are obtained as follows:

$$i_f^T = \hat{i}_f^T A_f , \quad j_f^T = \hat{j}_f^T A_f , \quad (38)$$
$$S_f = A_f^{-1} \bar{Q}_f^T . \quad (39)$$

Algorithm (3) followed by equations (37), (38), and (39)
completes the sequential method. The sizes of the matrices $D_f$ and
$E_f$ are fixed to $6 \times 6$ and $6 \times 1$, and the method does not
store any matrices that grow, even in the sequential metric
transformation.
4. Experiments
4.1 Synthetic Data
In this section we compare the accuracy of our sequential factorization
method with that of the original factorization
method. Since both methods are essentially based on the rank
theory, we do not expect any difference in the results. Our
purpose here is to confirm that the sequential method has the
same accuracy of shape and motion recovery as the original
method.
4.1.1 Data Generation
The object in this experiment consists of 100 random feature
points. The sequences are created using a perspective projection
of those points. The image coordinates of each point are
perturbed by adding Gaussian noise, which we assume to
simulate tracking error and image noise. The standard deviation
of the Gaussian noise is set to two pixels.
The distance of the object center from the camera
is fixed to ten times the object size. The focal length is
chosen so that the projection of the object covers the whole
image. The camera is rotated as shown in Figure 1,
while the object is translated to keep its image at the image
center. Quantization errors are not added since we assume
that we are able to track features with a subpixel resolution.
When discussing the accuracy of the sequential method, one
needs to consider its dynamic property regarding the 3D
recovery. The accuracy of the recovery at a particular frame
by the sequential method depends on the total amount of
motion up to that time, since the recovery is made only from
the information obtained up to that time. At the beginning of
an image sequence, for example, the motion is generally
small, so high accuracy can not be expected. The accuracy
generally improves as the motion becomes larger. The original
method does not have this dynamic property, since it is
based on a batch-type scheme and uses all the information
throughout the sequence.
In order to compare both methods under the same conditions,
we perform the following computations beforehand. First, we
form a submatrix $W_f$, which only contains the feature positions
up to frame $f$. The original factorization is applied to the
submatrix, and the results are kept as the solutions at frame $f$.
They are the best estimates given by the original method.
Repeating this process for each frame, we derive the best estimates,
with which our results are compared.
4.1.2 Accuracy of the Sequential Shape Space
Computation
We first discuss the convergence property of the sequential
shape space computation. The sequential factorization method
starts with Algorithm (1) in Section 3.3.1, iteratively generating
the matrix $Q_f$, which is an estimate for the true shape space
$\mathrm{range}(S^T)$. Let us represent the estimation error with respect to the
true shape space by

$$e_f = \mathrm{dist}(\mathrm{span}(Q_f), \mathrm{range}(S^T)) .$$

Recall that the function dist provides a notion of the difference
between two spaces. On the other hand, the original method
produces the best estimate for the shape space by computing
the right singular vectors $V_f$ of the submatrix $W_f$, and its
error with respect to the true shape space is also represented by

$$e_f' = \mathrm{dist}(\mathrm{span}(V_f), \mathrm{range}(S^T)) .$$
Comparing both errors, Figure 2 shows that they are almost
identical. That is, the errors given by the sequential method
are almost equal to those given by the original method.
At the beginning of the sequence, the amount of motion is
small and both errors are relatively large. The ratio of the 4th
to 3rd singular values, shown in Figure 3, also indicates that it
is difficult to achieve good accuracy at the beginning. Both
errors, however, quickly become smaller as the camera
motion becomes larger. After about the 20th frame, roughly constant
errors are observed in this experiment.
The solutions given by the two methods are so close that the
graphs are completely overlapped. Thus, we also plot their
difference, defined by

$$d_f = \mathrm{dist}(\mathrm{span}(Q_f), \mathrm{span}(V_f)) ,$$

in Figure 4. Although $d_f$ is relatively large at the beginning,
it quickly becomes very small. In fact, after about the 30th
frame, $d_f$ is far smaller than both shape space errors.
Figure 1: True camera motion
The camera roll, pitch, and yaw are varied as shown in
this figure. The sequence consists of 150 frames.
Figure 2: Shape space errors
Shape space estimation errors by the sequential method
(solid line) and the original method (dashed line) with
respect to the true shape space. The errors are defined by
subspace distance and plotted logarithmically.
Figure 3: Singular value ratio
The ratio of the 4th to 3rd singular values.
Figure 4: Difference of shape space errors
The difference of the estimates by the sequential and original
methods, versus the frame number. The difference is
plotted logarithmically.
4.1.3 Accuracy of the Motion and Shape Recovery
The three plots of Figure 5 show errors in roll, pitch, and yaw
in the recovered motion: the solid lines correspond to the
sequential method, the dotted lines to the original method.
The difference in motion errors between the original and
sequential methods is quite small.
Both results are unstable for a short period at the beginning of
the sequence. After that, they show two kinds of errors: random
and structural. Random errors are due to Gaussian noise
added to the feature positions. Structural errors are due to perspective
distortion, and relate to the motion patterns. The
structural errors show a negative peak at about the 60th frame
and are almost constant between the 90th and 120th frames.
Note the pattern corresponds to the motion pattern shown in
Figure 1.
Of course, these intrinsic errors cannot be eliminated in the
sequential method. The point to observe is that the differences
between the two solutions are sufficiently smaller than the
intrinsic errors.
Shape errors which are compared in Figure 6 also indicate the
same results. Again, the differences between the two methods
are quite small compared to the intrinsic errors which the
original method possesses. Note that no Gaussian noise
appears in the shape errors since they are averaged over all
the feature points.
We conclude from these results that the sequential method is
nearly as accurate as the original method except that some
extra frames are required to converge.
4.2 Real Images
Experiments were performed on two sets of real images. The
first set is an image sequence of a satellite rotating in space.
Another experiment uses a long video recording (764 images)
of a house taken with a hand-held camera. These experiments
demonstrate the applicability of the sequential factorization
method in real situations. In both experiments, features are
selected and tracked using the method presented by Tomasi
and Kanade[1].
4.2.1 Satellite Images
Figure 7 shows an image of the satellite with selected features
indicated by small squares. The image sequence was digitized
from a video recording[4] actually taken by a space shuttle
astronaut. The feature tracker automatically selected and
tracked 32 features throughout the sequence of 101 images.
Of these, five features on the astronaut maneuvering around
the satellite were manually eliminated because they had a different
motion. Thus, the remaining 27 features were processed.

Figure 5: Motion errors
Errors of recovered camera roll (top), pitch (middle), and
yaw (bottom). The errors given by the sequential method
are plotted with solid lines, while the errors given by the
original method are plotted with dotted lines.

Figure 6: Shape errors
This figure compares the shape errors given by the two
methods. The errors given by the sequential method are
plotted with solid lines, while the errors given by the original
method are plotted with dotted lines. The errors are
computed as the root-mean-square errors of the recovered
shape with respect to the true shape, at each frame.

Figure 8 shows the recovered motion in terms of roll,
pitch, and yaw. The side view of the recovered shape is displayed
in Figure 9, where the features on the solar panel are
marked with opaque squares and others with filled squares.
No ground-truth is available for the shape or the motion in
this experiment. Yet, it appears that the solutions are satisfac-
tory, since the features on the solar panel almost lie in a single
line in the side view.
4.2.2 House Images
Figure 10 shows the first image of the sequence used in the
second experiment. Using a hand-held camera, one of the
authors took this sequence while walking. It consists of 764
images which correspond to about 25 seconds. The feature
tracker detected and tracked 62 features. The recovered
motion and shape are shown in Figures 11 and 12. It is clearly
seen that the shape is qualitatively correct. It is also reasonable
to observe that only the camera yaw is increasing,
because the camera is moving parallel to the ground. In addition,
note that the computed roll motion reveals the pace of
the recorder's steps, which is about 1 step per second.
Further evaluation of accuracy in these experiments is difficult.
However, this qualitative analysis of the results with real
images, and quantitative analysis of the results with synthetic
data essentially shows that the sequential method works as
well with real images as the original batch method.
4.3 Computational Time
Finally, we compare the processing time of the sequential
method with the original method. The computational complexity
of the original method is dominated by the cost of the
singular value decomposition, which needs $O(FP^2)$
computations for a $2F \times P$ measurement matrix with $2F \geq P$
[5]. Note that $F$ corresponds to the number of frames and $P$
to the number of features. On the other hand, the complexity
of the sequential method is $O(P^2)$ per frame. Computing the solution
for frame $F$, therefore, takes only $O(P^2)$ using the
sequential method, while the original method would require
$O(FP^2)$ operations.
Figure 13 shows the actual processing time of the sequential
Figure 7: An image of a satellite
The first frame of the satellite image sequence. The
superimposed squares indicate the selected features.
Figure 8: Recovered motion of satellite
Recovered camera roll (solid line), pitch (dashed line), and
yaw (dotted line) for the satellite image sequence.
Figure 9: Side view of the recovered shape
A side view of the recovered shape of the satellite. The features
on the solar panel are shown with opaque squares and
others with filled squares. Notice that the features on the
solar panel correctly lie in a single plane.
Figure 10: An image of a house
The first frame of the house image sequence. The superimposed
squares indicate the selected features.
Figure 11: Recovered motion of house
Recovered camera roll (solid line), pitch (dashed line), and
yaw (dotted line) for the house image sequence.
Figure 12: Top view of the recovered shape
A view of the recovered shape of the house from above.
The features on the two side walls are correctly recovered.
Figure 13: Processing time
The processing time of the sequential method on a Sun4/10
compared with that of the original method
(dotted line), as a function of the number of features,
which is varied from 10 to 500. The number of frames is
fixed at 120.
method on a Sun4/10 compared with that of the original
method. The number of features varied from 10 to 500,
while the number of frames was fixed at 120. The processing
time for selecting and tracking features was not included. The
singular value decomposition of the original method is based
on a routine found in [6]. The results sufficiently agree with
our analysis above. In addition, when the number of features
is less than 40, the sequential method can be run in less than
1/30 of a second, which enables video-rate processing on a Sun4/10.
5. Conclusions
We have presented the sequential factorization method, which
provides estimates of shape and motion at each frame from a
sequence of images. The method produces results as accurate and
robust as those of the original method, while significantly
reducing the computational complexity. The reduction in
complexity is important for applying the factorization method
to real-time applications. Furthermore, the method does not
require storing any growing matrices so that its implementation
in VLSI or DSP is feasible.
Faster convergence in the shape space computation could be
achieved using more sophisticated algorithms such as the
orthogonal iteration with Ritz acceleration[3] instead of the
basic orthogonal iteration. Also, it is possible to use scaled
orthographic projection or paraperspective projection[2] to
improve the accuracy of the sequential factorization method.
Acknowledgments
The authors wish to thank Conrad J. Poelman and Richard
Madison for their helpful comments.
--R
"Shape and Motion from Image Streams Under Orthography: A Factorization Method,"
A paraperspective factorization method for shape and motion recovery
Matrix computations
Satellite rescue in space: highlights of shuttle flights 41C
Tracking a few extreme singular values and vectors in signal processing
Numerical recipes in C: the art of scientific computing
--TR
--CTR
Xiaolong Xu , Koichi Harada, Sequential projective reconstruction with factorization, Machine Graphics & Vision International Journal, v.12 n.4, p.477-487, April
Pui-Kuen Ho , Ronald Chung, Stereo-Motion with Stereo and Motion in Complement, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.22 n.2, p.215-220, February 2000
Yiannis Xirouhakis , Anastasios Delopoulos, Least Squares Estimation of 3D Shape and Motion of Rigid Objects from Their Orthographic Projections, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.22 n.4, p.393-399, April 2000
Pedro M. Q. Aguiar , Jos M. F. Moura, Rank 1 Weighted Factorization for 3D Structure Recovery: Algorithms and Performance Analysis, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.25 n.9, p.1134-1049, September
Pei Chen , David Suter, An Analysis of Linear Subspace Approaches for Computer Vision and Pattern Recognition, International Journal of Computer Vision, v.68 n.1, p.83-106, June 2006
Lin Chai , William A. Hoff , Tyrone Vincent, Three-dimensional motion and structure estimation using inertial sensors and computer vision for augmented reality, Presence: Teleoperators and Virtual Environments, v.11 n.5, p.474-492, October 2002 | image understanding;feature tracking;shape from motion;3D object reconstruction;real-time vision;singular value decomposition |
263449 | On the Sequential Determination of Model Misfit. | AbstractMany strategies in computer vision assume the existence of general purpose models that can be used to characterize a scene or environment at various levels of abstraction. The usual assumptions are that a selected model is competent to describe a particular attribute and that the parameters of this model can be estimated by interpreting the input data in an appropriate manner (e.g., location of lines and edges, segmentation into parts or regions, etc.). This paper considers the problem of how to determine when those assumptions break down. The traditional approach is to use statistical misfit measures based on an assumed sensor noise model. The problem is that correct operation often depends critically on the correctness of the noise model. Instead, we show how this can be accomplished with a minimum of a priori knowledge and within the framework of an active approach which builds a description of environment structure and noise over several viewpoints. | Introduction
Figure 1. Superellipsoid models fitted to range data from a wooden mannequin
a) A shaded range image scanned from above a wooden mannequin
lying face down.
b) Superellipsoids fitted to segmented data from (a). The dark dots
are the range data points. Note that the mannequin's right arm
has failed to segment and only a single model has been fitted
where two would have been preferable.
c) Detail of the superellipsoid fitted to the mannequin's right arm
in (b). The dark lines on the surface show the position of the
range scans. Although the model doesn't match our perceptual
notions of what the arm should look like, it does fit the data
well.
Many strategies in computer vision assume the existence of general purpose models
that can be used to characterize a scene or environment at various levels of abstraction.
They span the range from local characterizations of orientation and curvature
[3, 24], to intermediate level representations involving splines and parametric
surfaces [1,7,8,24], to still more global representations for solid shape [5,14,19]. The
usual assumptions are that a selected model is competent to describe a particular
attribute, and that the parameters of this model can be estimated by appropriate interpretation
of input data. But many of these estimation problems are ill-conditioned
inverse problems that cannot be solved without additional constraints derived from
knowledge about the environment [15]. This leads to a classical chicken and egg problem
where model selection and parameter estimation must be dealt with concurrently,
a problem difficult to solve given a single static view of the world.
In this paper we describe an active strategy that permits solution of both problems,
i.e. model parameter estimation and model validation. The context is a system
for computing an articulated, 3-D geometric model of an object's shape from a sequence
of views obtained by a mobile sensor (laser rangefinder) that is free to select
its viewpoint [23]. Shape is characterized by general purpose models consisting of
conjunctions of volumetric primitives [5]. An active approach is used where the current
state of the model, determined from a bottom-up analysis, is used to predict
the locations of surfaces not visible in the current view. Gaze is directed to surfaces
where the prediction is least certain (maximum variance), and from there additional
measurements are made and used to update the model parameters. The validity of
the model is tested against its ability to correctly predict the locations of hidden sur-
faces. Initially both the applicability of the model and estimates of its parameters are
uncertain, but as the process unfolds with each successive planning cycle (calculation
of new gaze point, measurement, updating of model parameters), such assessments
become increasingly clear.
The emphasis of this paper is the model validation problem. Knowing when a particular
model fails can provide at least two significant pieces of information. First, it
can indicate when assumptions about the scene are wrong and trigger the search for
other models that provide a plausible alternative, that is, it can initiate a model selection
process. Second, it can indicate when the processes leading to the determination
of model parameters have gone awry. This can be used to initiate a backtracking
procedure to re-interpret the data, particularly if the validation procedure is also
able to indicate the location where the model breaks down. Such would be the
case if a model is known to be valid but insufficient data are available from which to
correctly apply the model or estimate its parameters.
The example shown in Figure 1 is a case in point and part of the motivation for this
research. Figure 1a shows a laser rangefinder image of a wooden mannequin rendered
as a shaded image. Based on analysis of surface features, the image is partitioned
into regions corresponding to the different parts of the mannequin [6]. A further
abstraction is computed by fitting superquadric primitives to each region with the
result shown in Figure 1b [5]. At first glance the result appears to capture each of
the parts of the mannequin. However, on closer examination (Figure 1c), it can be
seen that the partitioning algorithm has missed the cues separating the arm at the
right elbow. Superquadrics are appropriate shape descriptors provided that parts are
convex, but as Figure 1c shows, do not fit the data well otherwise.
Without contextual knowledge, it is difficult to detect such an error given a single
view of the object because there is little basis from which to reject the resulting fit.
One would have to know the loci of the occluded surfaces in order to assess the model's
true fit to the data. However such knowledge is often impossible to obtain, e.g. because
of inaccessible viewpoints, or expensive to obtain, e.g. because of the time required to acquire measurements.
The compromise advocated in this paper is a sequential process that incrementally
builds its descriptions by optimizing measurement to maximize the certainty of each
model, then tests them by verifying their consistency from view to view.
The remainder of the paper is organized as follows. Section 2 begins with a brief overview of
the optimization strategy used to plan gaze and estimate model parameters. It provides
the necessary background for Section 3 which describes the model validation process,
and presents the results of experiments which demonstrate the resulting algorithms
at different noise levels and for different noise models. Section 4 shows how the
situation shown in Figure 1c can be identified using the gaze planning strategy and
model validation procedures. Finally, we conclude with some observations and briefly
outline remaining work.
2. Estimating Parameters and Planning Gaze
In earlier work we have considered the problem of how to best direct the gaze of a
laser range scanner in order to improve estimates of model parameters and knowledge
of object surface position over a sequence of views [20, 21, 23].
The laser scanner is capable (after appropriate transformations) of providing the
3-D coordinates of points sampled from surfaces in the scene. In this scenario it is
assumed that the scene is well represented by a conjunction of parametric volumetric
primitives, and that data is collected by moving the scanner around on the end-effector
of a robot arm (Figure 2). Using methods described in [5] the data are
partitioned into sets corresponding to the parts of each visible object. It is assumed
that each data set corresponds to a sample of the surface of a single model 1 .
Given one of these data sets {x_i}, we wish to infer those parameters
â that best estimate the true parameters a of the model in the scene from which
the data was collected. In general an exact solution cannot be found because the
1 In this paper superellipsoids are used to represent parts, but the approach generalizes to other
parametric models.
4 On the Sequential Determination of Model Misfit
Figure 2. Mobile scanner setup. A laser rangefinder with a 1m³ field
of view is mounted on the end-effector of an inverted Puma 560 robot.
scanner measurements are subject to both systematic and random errors, but a good
estimate can be obtained by finding the parameters that minimize the squared sum

    E(a) = Σ_i D²(x_i; a)    (1)

of distances D(x_i; a) of the data points from the surface of the model. Except for very
simple models and distance metrics one usually must resort to iterative techniques,
e.g. the Levenberg-Marquardt method [12, 16], to perform the minimization.
Provided the estimated parameters fall within the region of parameter space around
the true parameters where D is reasonably approximated by its first order linear
terms, the classic statistical theory of linear models can describe the parameter errors
[16]. This theory tells us that when the errors described by the distance metric
are randomly sampled from a normal distribution, then the error δa = a − â in the estimated
parameters can be described by a p-variate normal distribution dispersed
in the different parameter directions by an amount determined from the matrix of
covariances C. Furthermore the quadratic form δaᵀ C⁻¹ δa that defines the distribution
is itself randomly sampled from a distribution that obeys a chi-square law with p
degrees of freedom. In that case we can find the point χ²_{γ,p} of the chi-square distribution
and use it to define the ellipsoid of confidence

    δaᵀ C⁻¹ δa ≤ χ²_{γ,p},    (2)

that is, an ellipsoidal region of parameter space around the estimated parameters
in which there is a probability of γ that the true parameters lie.
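As an illustrative sketch (not part of the original system), the membership test for the ellipsoid of confidence (2) can be computed numerically; the 3-parameter covariance matrix C below is hypothetical:

```python
import numpy as np
from scipy.stats import chi2

def inside_confidence_ellipsoid(delta_a, C, gamma=0.99):
    """Return True if the deviation delta_a = a - a_hat lies inside the
    ellipsoid delta_a^T C^{-1} delta_a <= chi2_{gamma,p} (Eq. 2)."""
    p = len(delta_a)                            # number of model parameters
    q = delta_a @ np.linalg.solve(C, delta_a)   # quadratic form delta_a^T C^{-1} delta_a
    return q <= chi2.ppf(gamma, df=p)

C = np.diag([0.5, 0.2, 0.1])   # hypothetical parameter covariance matrix
print(inside_confidence_ellipsoid(np.array([0.1, 0.05, 0.02]), C))  # small deviation
print(inside_confidence_ellipsoid(np.array([5.0, 3.0, 2.0]), C))    # large deviation
```

Solving the linear system rather than inverting C explicitly keeps the quadratic form numerically stable for ill-conditioned covariances.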
Because of the noise in the model, and because the data are often incompletely
sampled, e.g. only one side of the model is visible from a single viewpoint, the
parameters will often be under constrained and exhibit large estimation errors. These
errors can be reduced by collecting more data, but there are liabilities in terms of cost
and accessibility; e.g. the time taken to plan and move the scanner, memory and cpu
resources consumed to process additional data, and limits on accessible viewpoints.
Ideally we would like to minimize the amount of data collected and the complexity
of the movements necessary to place the scanner in the correct position. To do so
requires the formulation of a precise relationship between the parameters that govern
the data acquisition process and those related to the model being fit.
This task is somewhat difficult because the scanner collects data in the 3-D space of
the scene, thus making it difficult to predict the effect that newly collected data points
will have on the parameter errors in the p-dimensional space of model parameters.
The approach that we have taken to solve this problem is to think of the estimated
model as a predictor of surfaces in the scene, and to quantify this error in terms
of an interval around each point on the predicted surface. We call this the surface
prediction error interval and have shown [20] that an "error bar" protruding from a
point x_s on the estimated model's surface is given by the quantity

    U(x_s) = √( Δ²_γ (∂D/∂a)ᵀ C (∂D/∂a) ),    (3)

where (∂D/∂a) is the gradient of the distance metric evaluated for the point x_s on
the surface of model â, and Δ²_γ is a confidence interval chosen from a chi-square
distribution as for the ellipsoid of confidence in (2).
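A minimal numerical sketch of the surface prediction error interval; the gradient, covariance, and confidence value below are hypothetical:

```python
import numpy as np

def surface_error_bar(grad_D, C, delta2_gamma):
    """Length of the "error bar" protruding from a surface point, given the
    gradient of the distance metric (dD/da) at that point, the parameter
    covariance C, and a chi-square confidence value Delta^2_gamma."""
    return np.sqrt(delta2_gamma * grad_D @ C @ grad_D)

C = np.diag([0.04, 0.01])   # hypothetical 2-parameter covariance
g = np.array([1.0, 2.0])    # hypothetical gradient at surface point x_s
print(surface_error_bar(g, C, delta2_gamma=9.21))
```

Evaluating this quantity over a grid of surface points yields the uncertainty map used for gaze planning.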
The practical use of this representation for optimizing data collection via gaze
planning can be explained with the aid of Figure 3. The figure shows the surface
prediction error interval corresponding to the model fit to the arm shown earlier in
Figure 1c. In Figure 3a, the interval is coded such that darker shading represents
higher uncertainty in surface positions as predicted by the model. Even though
the data leading to the model are acquired from a single viewpoint, the resulting
prediction extends beyond the visible surfaces and can thus serve as a basis for
planning the next gaze direction. An intuitive strategy for doing so would be to direct
the scanner to the viewpoint corresponding to the highest uncertainty of prediction.
Theoretically we can show that updating model parameters with additional data
obtained from this view will minimize the determinant of the parameter covariances
[22, 23].
Figure 3a shows a parameterization of the uncertainty surface in the coordinates
of a view sphere centered on the model (uncertainty map), and as can be seen the
uncertainty is lowest at the current scanner position, but rises rapidly to a maximum
on the opposite side of the view sphere. The optimum strategy here is to move the
scanner to the other side of the model, to sample additional data there, and to update
the model parameters. However the general problem of gaze planning is much more
complex than implied by our example. First, the prediction afforded by the surface
error prediction interval is local, so it is unlikely that a complete set of constraining
views can be determined on the basis of the model computed from a single viewpoint.
In fact the additional data will completely alter the uncertainty map so it must be
recomputed after each iteration. Second, the prediction does not take accessibility
constraints into account, e.g. certain views may either be unreachable by the scanner,
or occluded by surfaces not visible from the current viewpoint. So, as in our example,
it is often the case that "the other side" of the model cannot be reached.
In spite of these difficulties, we have found that using uncertainty to plan incremental
displacements of gaze angle relative to the current viewpoint can result in
a successful strategy [21, 23]. We apply a hill climbing algorithm to the changing
uncertainty map and use the resulting path to guide the trajectory of the mobile
scanner. This can result in a near optimum data collection strategy with respect to
the rate of convergence of model parameters. Also, lack of accessibility is often not
that great a problem. For example, when representing convex surfaces with superellipsoid
primitives we have observed that well constrained parameter estimates can
be obtained by taking data with the scanner displaced only a limited angular distance
from the initial gaze position. This is because the model can interpolate across large
"holes" in the data set.
However the success of the exploration strategy hinges on the central assumption
that the model fits the data. If this is not the case the parameter covariances,
and therefore the surface prediction error, do not accurately reflect the constraints
Figure 3. Two different representations for the surface prediction error
interval. The figure shows predicted surface uncertainty as an uncertainty map (a),
where U is plotted as a function of view sphere latitude and longitude, and as an
uncertainty surface (b), where the surface of the model is shaded such that darker
shading corresponds to higher values of U.
The lines on the top of the uncertainty surface show the data collected when the scanner
was positioned at the north pole of the view sphere (latitude = 90°), and to which the model
was fitted. As can be seen U is low where data exists, but increases as the model attempts
to extrapolate away from the data. The maximum uncertainty lies under the sharp ends
of the model, and is marked by the tall peaks on the uncertainty map.
The scanner is initially located at the right edge of the uncertainty map. When it moves
to the next view position it will follow the local uncertainty gradient, and will therefore
move up the center of the broad ridge extending out between the two peaks, i.e. towards
the south pole along a longitude of approximately 220°. This corresponds to a path that
samples the side of the model facing the viewer in (b).
placed on the model by the data. Thus to ensure a meaningful sensor trajectory it
is necessary to test the validity of the model at each iteration.
3. The Detection of Misfits
Implicit in our "bottom-up" approach to vision is the notion of "increasing specificity"
as processing moves from the lower to the higher layers. By doing things this
way we can build computationally intensive lower layers that operate very generally,
yet still provide usable data to higher layers designed for specific tasks. However specialized
algorithms are usually tuned to a set of assumptions more restrictive than
can be truthfully applied to input data processed by the lower layers. Consequently
it is necessary to check the validity of the data before proceeding.
Such a necessity becomes apparent when we fit volumetric models to segmented
range data. The segmentation algorithm we use [5] deliberately avoids detailed assumptions
about the exact shape of the primitives (e.g. that they be symmetric) and
requires only that they be convex. To this segmented data we fit models designed to
represent the kinds of shapes expected in the world. In our case, because they can
economically portray a wide range of symmetrical shapes, we use superellipsoids. The
problem is that not all convex shapes are superellipsoids, so while the segmentation
algorithm may have correctly processed its input data, there is no guarantee that a
valid superellipsoid model can be made to fit it.
The most straightforward means of evaluating the validity of the data is simply
to fit and see. If the model fits well then all of the data should lie on or close to its
surface. If not there will be significant residual errors, the model can be declared a
misfit, and the flow of processing altered to take remedial action. Because the data
are subject to random fluctuations it is not possible to conclude that there has been
misfit (or that there has not) with complete certainty. We show how to deal with
this problem using methods found in the statistical field of decision theory [4, 13].
In the theory that follows we develop three lack-of-fit statistics, each one useful
in different situations. The first of these (ψ_L1) requires an accurate model of the
data noise, and knowledge of the parameters of that model, in particular the value
of the noise level. When the noise level is not known but the noise model is, then
the second lack-of-fit statistic (ψ_L2) can be used. It requires repeat measurements of
the data in order to provide an independent estimate of data noise, and therefore
incurs additional time and processing costs (e.g. it takes about 12 seconds to scan a
256 × 256 image with the McGill-NRC scanner). In situations where a rapid response
is required, and where the noise level is not known, we propose an incremental lack-of-fit
statistic (ψ_L3) which "learns" the local noise level as the scanner moves through the
scene. Our experimental results suggest that the measure is able to detect model
misfit even if the real noise is not well modeled by the theory.
3.1. Theory. In the discussion that follows we will assume that we have at our
disposal a sequence of n_s data sets S_0, …, S_{n_s} of 3-D coordinates,
obtained by moving a laser range scanner along some trajectory through the
scene. The S_j are not necessarily the original data scans, but are subsets picked out
by a segmentation algorithm as having come from the same convex surface. There is
also a finite chance that the segmentation algorithm has incorrectly partitioned the
data.
3.1.1. Known sensor noise. We will first consider the case for which the data noise
meets the conditions assumed by the fitting procedure, i.e. that the data is normally
distributed in a direction radially about the surface of the true model with zero mean
and known variance σ².
For each step j in the sequence of views we find the model parameters â_j that
minimize the least squared error of the combined data sets S_j^T = S_0 ∪ S_1 ∪ ⋯ ∪ S_j. We
do this by iteratively minimizing the following functional,

    E_j(a) = Σ_{x_i ∈ S_j^T} D²(x_i; a),    (4)

where D(x; a) is a distance metric derived from the implicit equation of the surface
of a superellipsoid [18, 20].
Despite the nonlinearity of the model we will assume that a global minimum error
has been found and that the errors are small enough so we can linearize the model
and apply the well-known result from linear least squares theory, that an unbiased
estimate σ̂²_j of the true variance σ² can be found from the squared sum of the residuals
(which are measured by the D metric),

    σ̂²_j = (1 / (N_j − p)) Σ_{x_i ∈ S_j^T} D²(x_i; â_j),    (5)

where N_j is the total number of data points and p is the number of parameters used
to fit the model (p = 11 for superellipsoids).
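The unbiased variance estimate in (5) is straightforward to compute once the fit residuals are in hand; the residuals below are hypothetical:

```python
import numpy as np

def residual_variance(residuals, p):
    """Unbiased estimate of sigma^2 (Eq. 5): the squared sum of the fit
    residuals divided by N - p, since p degrees of freedom were consumed
    by the p fitted model parameters."""
    residuals = np.asarray(residuals, dtype=float)
    N = len(residuals)
    return np.sum(residuals ** 2) / (N - p)

# hypothetical residuals from a fit with p = 2 parameters
r = [0.1, -0.2, 0.15, 0.05, -0.1]
print(residual_variance(r, p=2))
```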
Unexpectedly large values of σ̂²_j indicate that the residual errors are not solely due
to noise, and therefore give us grounds for believing that the model fits the data
badly. A simple strategy to detect misfit is to find those cases for which

    σ̂²_j / σ² > k_v,    (6)

where k_v is a threshold used to decide whether models should be accepted or rejected.
Because of random data noise it is impossible to find a value of k_v that correctly
classifies the models in all situations, and we must learn to live with two types of
detection errors. The first of these, the Type I error, occurs when a model fits well
but chance variations increase the value of σ̂²_j enough that the model is erroneously
rejected. The other, the Type II error, is the alternative: that a model fits the data
badly but random variations result in a reduction of σ̂²_j large enough to cause the
model to be erroneously accepted. In general there is a tradeoff; larger values of k_v
decrease the chance of Type I errors but increase the possibility of Type II errors.
It is possible to evaluate the Type I error. When a model fits the data and the
residuals are distributed normally, the statistic

    (n − p) σ̂²_j / σ²    (7)

is known to be sampled from a chi-squared distribution with n − p degrees of freedom.
Thus the probability of a Type I error is the probability that this statistic exceeds
(n − p) k_v when the model fits.
Graphically it is the area under the chi-squared probability distribution to the right
of (n − p) k_v.
However it usually makes more sense to work the other way around; that is, from the
probability distribution find the value of k_v which gives a tolerable Type I error. The
level is often expressed in terms of a confidence level γ, or the probability of correctly
classifying the good models as good. Knowing that (7) follows a chi-squared law
we can find the point χ²_{γ,n−p} on the distribution for which the probability of a Type
I error is 1 − γ, and reject models as misfits at the γ level of
confidence when

    ψ_L1 = σ̂²_j / σ² ≥ χ²_{γ,n−p} / (n − p).    (8)
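The chi-squared rejection rule for known sensor noise reduces to a one-line threshold test; the sketch below uses scipy for the chi-squared quantile, and all numbers are hypothetical:

```python
from scipy.stats import chi2

def reject_misfit(var_hat, sigma2, n, p, gamma=0.99):
    """Reject the model at the gamma confidence level when the variance
    ratio exceeds the threshold k_v = chi2_{gamma,n-p} / (n - p)."""
    k_v = chi2.ppf(gamma, df=n - p) / (n - p)
    return var_hat / sigma2 >= k_v

# hypothetical numbers: 50 points, 11 superellipsoid parameters
print(reject_misfit(var_hat=1.1, sigma2=1.0, n=50, p=11))  # mild excess
print(reject_misfit(var_hat=3.0, sigma2=1.0, n=50, p=11))  # gross misfit
```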
In contrast to the Type I error, it is very difficult to find the expected levels of
Type II error. The reason for this is that the Type II error is the probability that
σ̂²_j / σ² < k_v given that the model does not fit the data. The number of different
data configurations that can lead to this situation is so huge, and the response
of the fitting algorithm to them so unpredictable, that it is impractical to find a
probability distribution of σ̂²_j that takes into account all the ways in which a model
can be misfitted.
3.1.2. Unknown sensor noise level. When the true level of data noise is unknown we
can use an estimate of it, provided that estimate is independent of the model fitting
process. One way to do this is to exploit repeated measurements. Suppose at some
stage during an experiment the laser beam has hit a number of locations on surfaces of
the scene, and at each location i we have made m_i measurements. An estimate of the
variance, often called the pure estimate σ̂²_R, is

    σ̂²_R = (1 / m_R) Σ_i Σ_k ‖x_{ik} − x̄_i‖²,  with  m_R = Σ_i (m_i − 1),    (9)

where x̄_i is the mean value of the measured
surface coordinates at location i. If a model fits the data well, σ̂²_R and the estimate σ̂²_j computed
for the first j data sets should be approximately equal. A lack-of-fit statistic
that uses the weighted difference of the two estimates relative to the pure estimate
is [2]

    ψ_L2 = ((N_j − p) σ̂²_j − m_R σ̂²_R) / ((N_j − p − m_R) σ̂²_R),    (10)

which can be shown to be sampled from an F ratio distribution with N_j − p − m_R
numerator and m_R denominator degrees of freedom. Models can be rejected at the γ
level of confidence when

    ψ_L2 ≥ F_{γ; N_j−p−m_R, m_R}.    (11)
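A sketch of the pure-error variance estimate from repeated measurements, together with a classical lack-of-fit F test consistent with the description above; the repeat-measurement groups are hypothetical scalar readings (the real data are 3-D coordinates):

```python
import numpy as np
from scipy.stats import f as f_dist

def pure_error_estimate(groups):
    """Pure estimate of variance from repeated measurements: pooled squared
    deviations about each location's mean, divided by the pure-error
    degrees of freedom m_R = sum_i (m_i - 1)."""
    ss, m_R = 0.0, 0
    for g in groups:
        g = np.asarray(g, dtype=float)
        ss += np.sum((g - g.mean()) ** 2)
        m_R += len(g) - 1
    return ss / m_R, m_R

def lof_reject(var_fit, df_fit, var_pure, m_R, gamma=0.99):
    """Classical lack-of-fit F test: the excess of the residual sum of
    squares over the pure-error sum of squares, per excess degree of
    freedom, relative to the pure estimate."""
    df_lof = df_fit - m_R
    psi = ((df_fit * var_fit - m_R * var_pure) / df_lof) / var_pure
    return psi >= f_dist.ppf(gamma, df_lof, m_R)

groups = [[1.0, 1.2], [2.1, 1.9], [3.0, 3.2], [4.1, 3.9]]  # doubled scans
var_pure, m_R = pure_error_estimate(groups)
print(var_pure, m_R)
```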
3.1.3. Consecutive Estimates of Variance. When repeated measurements cannot be
taken we propose that misfit can be detected by comparing consecutive estimates of
variance. If the model â_{j−1} fitted to the first j data sets S_0, …, S_{j−1} is valid, then
the estimated variance σ̂²_{j−1} should be a valid estimate of the data noise. If on the next
iteration the variance σ̂²_j found after adding S_j is significantly greater than σ̂²_{j−1}, we
have grounds for believing that the model cannot account for the additional data and
that it is therefore unacceptable. It is difficult however to evaluate the Type I errors,
and therefore to design a test at the appropriate level of confidence. One might think
that because σ̂²_{j−1} and σ̂²_j are sampled from chi-squared distributions an F distribution
would correctly account for their ratio. Unfortunately, this relationship is true only if
the chi-squared distributions are independent. Because they share coordinates from
the first j data sets, such is obviously not the case.
To avoid any confusion, the index j is added to variable subscripts to indicate the sequential
order of data samples and their statistics, e.g. σ̂²_j is the sample variance computed over the
data sets S_0, …, S_j.
The approach we have taken is to minimize the dependency by using only the
residuals of the newly added points to estimate the data noise. First we compute σ̂²_{j−1}
in the usual way (5) from all of the data in the first j data sets S_{j−1}^T.
The other estimate of variance, σ̂²_{S_j}, is computed using only the data in S_j, that is,

    σ̂²_{S_j} = (1 / n_j) Σ_{x_i ∈ S_j} D²(x_i; â_j),    (12)

where in this case n_j is the number of data in S_j, but the model â_j is the least squares
fit to all of the data S_j^T. Because the residuals are distributed normally, σ̂²_{j−1} and σ̂²_{S_j}
are sampled from chi-squared distributions with N_{j−1} − p and n_j degrees
of freedom respectively. Therefore when the two variance estimates are independent
the incremental lack-of-fit statistic

    ψ_L3 = σ̂²_{S_j} / σ̂²_{j−1}    (13)

is sampled from an F ratio distribution with n_j numerator and N_{j−1} − p denominator
degrees of freedom. Models can be rejected as misfits at the γ level of confidence when
ψ_L3 ≥ F_{γ; n_j, N_{j−1}−p}.
However the ψ_L3 statistic should be used with caution because the estimate
of σ̂²_{S_j} is biased. In effect some of the data variability is used to compute the
model parameters, and this loss results in an estimate of variance lower than it should
be. We compensate for the loss in (5) by dividing by N_j − p; that is, p points have
been used up fitting the p model parameters. When we take only a subset of the data
as in (12) it is hard to arrive at an appropriate compensatory figure, mainly because
it is difficult to evaluate the relative influence exerted by the subset on the fit. The
lower bias of σ̂²_{S_j} will be compensated to some degree by a narrower confidence interval
in an F distribution with a higher number of degrees of freedom, so the overall
effect is probably minor and will in any case decrease as the number of data points
increases.
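The incremental test compares the variance of the newly added points against the previous estimate via a simple F ratio; a sketch with hypothetical variances and counts:

```python
from scipy.stats import f as f_dist

def incremental_reject(var_new, n_new, var_prev, n_prev, p, gamma=0.99):
    """Incremental lack-of-fit test: the ratio of the variance of the newly
    added points to the previous variance estimate, referred to an
    F(n_new, n_prev - p) distribution."""
    psi = var_new / var_prev
    return psi >= f_dist.ppf(gamma, n_new, n_prev - p)

# hypothetical: 10 new points, 60 previous points, 11 model parameters
print(incremental_reject(var_new=1.2, n_new=10, var_prev=1.0, n_prev=60, p=11))
print(incremental_reject(var_new=6.0, n_new=10, var_prev=1.0, n_prev=60, p=11))
```

Because the new set is small, the F threshold is high, which is the "lower sensitivity" effect discussed in the experiments below.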
With the ψ_L3 metric, σ̂²_{S_j} is calculated over a more localized region of the surface.
Given that surface features causing misfit are most likely to be in the newly scanned
region, the mean squared residual error here will be higher than if it were computed
over the entire region so far scanned. The result is an apparent increase in
the metric's sensitivity to misfit error. However this sensitivity is offset by a higher
confidence threshold due to the lower number of degrees of freedom in the chi-squared
distribution of σ̂²_{S_j}, which is calculated from fewer data points.
An implicit assumption when using the ψ_L3 statistic is that â_{j−1} is a valid fit, and
that σ̂²_{j−1} is a valid estimate of the data noise level. By induction it must also be
true that the initial estimate â_0 be a valid fit, so in practice it is up to us to select
the appropriate initial conditions which make sure that this is the case. Generally
this can be done without great difficulty, and with only rough a priori knowledge
of the scene being explored. For example, by knowing the minimum size of objects in
the scene one could limit the initial scan to a small region, and validly fit it to the
surface of almost any large model (even a planar patch).
3.2. Simulation Experiments. In the experiments that follow we used a scene
synthesized from two superellipsoid models, a sphere and a cylinder both of 50mm
radius, joined so as to blend smoothly and form a squat cylindrical shape with
a spherically domed top (Figure 4). This shape was chosen because the overall
convexity of the surface ensures that it will not be partitioned by the segmentation
algorithms. Data collected from the top of the scene can be initially modeled with a
superellipsoid, but as the scanner moves from the top of the scene to a view of the
bottom the misfit increases, at first slowly as more of the cylindrical edge is exposed,
then abruptly when the flat bottom surface comes into view.
Range data is sampled from the scene using a computer simulation of the McGill-NRC
range scanner we have in our laboratory [17]. The camera is always directed
so that its line of sight is towards the origin of the scene coordinate system, but is
allowed to move around on the surface of a view sphere of radius ρ, also centered on
the scene origin. Camera position is specified by a latitude and longitude (θ, φ) set
up with respect to the scene coordinate frame such that the positive Z axis intersects
the view sphere at its north pole, and the X-Z plane cuts the view
sphere around the meridian of zero longitude. Our scanner uses two mirrors to sweep
the laser beam over a field of view 36.9° by 29.2° along the camera's X and Y axes
respectively. In both cases the mirror angles are controlled by an index between 0
and 256 that divides the field of view into equi-angular increments. The sampling is
specified by two triples of numbers {i_min, i_max, i_inc} and {j_min, j_max, j_inc}, where the X
mirror is moved from index i_min to i_max in steps of i_inc, and likewise for the Y mirror
and the j indices. If a mirror is not moved, for example when only a single scan line
is taken, then the redundant maximum and incremental values are dropped. We
call the array of data collected by scanning the X and Y mirrors a range image.
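The sampling-triple convention can be made concrete with a small sketch (the helper name is our own, not from the original system):

```python
def mirror_indices(triple):
    """Expand a sampling triple {i_min, i_max, i_inc} into the sequence of
    mirror index positions; a single value means the mirror is not moved."""
    if len(triple) == 1:
        return [triple[0]]
    i_min, i_max, i_inc = triple
    return list(range(i_min, i_max + 1, i_inc))

# the coarse initial scan used in the experiments: {0, 256, 64} x {96, 160, 16}
xi = mirror_indices((0, 256, 64))
yi = mirror_indices((96, 160, 16))
print(len(xi) * len(yi))  # a 5 x 5 range image of 25 sample points per scan
```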
An advantage of a simulated range scanner is that it is very easy to implement
and investigate the effect of different noise models. For these experiments we will
use a radial noise model in which normally distributed noise (i.e. Gaussian noise)
is added so as to displace the data point from the surface in a direction radial to
the model's center. This noise model matches the assumptions upon which the least
squares minimization is based, and therefore those of the tests that detect misfit.

a) We use a scene composed of two superellipsoid models, a sphere and a cylinder,
joined to make a smooth transition. Although the compound model is convex it
cannot be described by a single superellipsoid surface. The scene above is as seen
from a view sphere latitude of about −20°.
b) A typical sequence of data. The dots mark the data points, and the lines show the
direction of the scan lines, which are doubled to obtain repeat measurements. A
radial noise model is used.
Figure 4. The 3-D Scene and Data used in these experiments
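The radial noise model can be sketched as follows; the points (taken on a 50mm sphere), the center, and the noise level are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_radial_noise(points, center, sigma):
    """Radial noise model: displace each 3-D point along the direction from
    the model's center through the point by an N(0, sigma^2) amount."""
    pts = np.asarray(points, dtype=float)
    dirs = pts - center
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)   # unit radial directions
    return pts + rng.normal(0.0, sigma, (len(pts), 1)) * dirs

pts = np.array([[50.0, 0.0, 0.0], [0.0, 50.0, 0.0]])      # points on a 50 mm sphere
noisy = add_radial_noise(pts, center=np.zeros(3), sigma=1.0)
print(noisy.shape)
```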
Unless otherwise stated all of the following experiments are performed with
sets of data collected in the following way. Initially the scanner is moved to the
north pole of the view sphere and the scene is twice sampled coarsely
({0, 256, 64} × {96, 160, 16}) to give 50 points (a repeated 5 × 5 range image). The
field of view is such that the scanner sees only the spherical surface. After an initial
scan, additional data from a sequence of views is taken by moving the camera along
the meridian of 0° longitude in 10° increments of latitude until it reaches the south
pole. At each position a single line of data ({0, 256, 32} × {128}) is twice scanned
to give a set usually containing 10 points (a repeated 5 × 1 image). The result is
a sequence of 19 data sets ordered according to the latitude, so S_0 is the initial set
collected from the north pole, and S_18 is collected from the south pole. In every
data set there are repeat samples of each point, so we can evaluate all 3 lack-of-fit
statistics under exactly the same conditions. Figure 4 shows a 3-D rendition of a
typical sequence of scans with added radial noise.
The first set of experiments was designed to evaluate the performance of the three
lack-of-fit measures for radially distributed noise, i.e. noise in agreement with the assumptions
upon which the lack-of-fit statistics are based. A large number of trials
were performed at two different noise levels: one (1mm) about twice that typically observed
in our laboratory, and the other (4mm) observed when sampling
from the limits of the scanner's range (ρ = 2000). In each trial a sequence
of data was obtained by moving the scanner in 10° steps as described above.
At each step the three lack-of-fit statistics were evaluated, and accumulated into the
corresponding histogram at that step. On completion we obtained a sequence of histograms
showing the progression of each lack-of-fit statistic as the scanner discovered
the model surface while moving from the top to the bottom of the scene. The results
are shown in Figure 5.
We obtain the theoretically expected results for view sphere latitudes from 90°
down to 0°. Here the scanner is just sampling the surface of the sphere, so a valid
superellipsoid fit can be obtained. The histograms indicate that for the ψ_L1 and ψ_L2
statistics approximately 1% of the trials exceed the 99% confidence level, and that the
histogram value is close to the expected value of 1%. The misfit level is somewhat
lower than expected for the ψ_L3 statistic, and the histogram peak is also displaced
downward. As mentioned in the theoretical discussion, an effect like this could be
due to overestimation of the degrees of freedom when calculating σ̂²_{S_j}. Only 5 data
points were used in these computations, so an additional degree of freedom would
cause a significant decrease in the value of ψ_L3.
For latitudes below 0° there is a gradual rise in the rate of misfits, until by −40°
almost all of the trials are classified as such. This behaviour also matches that
expected, with the slow increase marking the transition region where it becomes
increasingly difficult to describe the surface shape as a superellipsoid, and the abrupt
change indicating gross violations of the assumed symmetry.
The ψ_L3 statistic is not as sensitive as the other two in detecting misfit, and again
we would expect that behaviour. Because it compares variance estimates at adjacent
latitudes, the ψ_L3 statistic is really detecting incremental increases in misfit, and can
therefore be fooled when the misfit is increased slowly. Another way of looking at
this is to think of ψ_L3 as adapting to "learn" the noise. Thus we see that unlike ψ_L1
and ψ_L2, the ψ_L3 statistic does not reject fits at latitudes θ < −60° because it has adapted
itself to the very high levels of σ̂²_j found there.
Histograms are rendered radially at their corresponding view sphere latitude. Each bin is
coloured a level of grey in proportion to the number of values falling within it. The number of
trials used to compute the histograms is shown in parentheses above each figure.
Each histogram has been computed by dividing the theoretical one-sided 99% confidence
interval (shown underneath the histograms) into 11 bins. The first 10 bins split the 95%
interval up into equal parts, while the remaining one shows the other 4%. Values exceeding
the 99% confidence threshold are all accumulated into the outer bin, and the percentage falling
here is indicated beside it. When the model fits we would expect this figure to be 1%. The
dotted circle marks a lack-of-fit statistic value of 1.0. It should coincide with the histogram
maximum.
Figure 5. Comparison of the L1, L2, and L3 lack-of-fit tests for the model with noise
levels of (a) 1 mm and (b) 4 mm.
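The binning scheme described in the legend can be sketched as follows (an illustrative reconstruction; `c95` and `c99` stand in for the statistic's theoretical 95% and 99% confidence limits, which are not given here):

```python
import numpy as np

def misfit_histogram(values, c95, c99):
    """10 equal bins over [0, c95] plus an 11th bin for (c95, c99]; values
    beyond c99 are clipped into the outer bin, and the fraction exceeding
    the 99% threshold is reported separately (expected ~1% on a good fit)."""
    edges = np.linspace(0.0, c95, 11).tolist() + [c99]     # 12 edges, 11 bins
    counts, _ = np.histogram(np.clip(values, 0.0, c99), bins=edges)
    exceed = float(np.mean(np.asarray(values) > c99))
    return counts, exceed

counts, exceed = misfit_histogram([0.5, 0.9, 1.0, 1.1, 5.0], c95=1.3, c99=1.5)
```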
3. The Detection of Misfits 17
At the higher noise level all three statistics are close to the limits of
their ability to discriminate, and only the gross misfit is detected. In fact the noise
level is so high that it is starting to obscure the viewer's perception of the corner of
the cylinder in the profile data. Noise of this level would not be encountered on our
scanner except when measuring surfaces at the limits of its range.
3.3. Real experiments. The simulations confirm the correctness of the theory, but
to what extent is this true when using real scanners, for which the theoretical noise
models are only an approximation? To test this we have used the apparatus shown
in Figure 6 to perform the same experiments but with real data. The apparatus
consists of the McGill-NRC scanner mounted in a fixed position with a view of an
object clamped to the rotational axis of a small stage. Different parts of the object can
be scanned by using stepper motors to rotate the stage about two orthogonal axes.
Before the experiment begins a calibration procedure is run to determine the orientation
and position of the two rotational axes. Once known, the angles of rotation
can be used to map scanner range coordinates into a scene frame attached to the
rotating object.
Figure 6. Rotary stage used in the real experiments, showing the compound
model comprised of a smoothly joined cylinder and block.
A side effect of the calibration procedure is that it provides us with an estimate of
the sensor noise σ. The axes are found by measuring, at several different rotations,
the orientation of an inclined plane attached to the stage. For each orientation we
can estimate the sensor noise from the residual errors left after fitting a plane to the
scanned data. Figure 7 shows that σ varies with orientation, that it depends mainly
on the angle the plane makes with the scan direction, and that it is minimum when
the surface is normal to the scanner's line of sight. It is well known that σ also varies
with the distance to the surface, that it depends on surface properties, and that it
can change with time. In general these factors make it very difficult to choose a
constant value of σ as demanded by the misfit statistics, but for these experiments we
have taken the average minimum value over several calibration runs.
Our choice is motivated by the fact that most of the data are taken from surfaces
normal to the scan beam, and that the distance to the surface is approximately that
of the calibration plane.
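The residual-based noise estimate described above can be sketched as follows (a minimal illustration on synthetic data; the actual calibration procedure is more involved):

```python
import numpy as np

def plane_fit_noise(points):
    """Estimate sensor noise as the RMS residual of a least-squares plane
    fit z = a*x + b*y + c to scanned points (an N x 3 array)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    A = np.column_stack([x, y, np.ones_like(x)])
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)
    residuals = z - A @ coef
    # divide by (n - 3): three parameters are consumed by the plane fit
    return float(np.sqrt(residuals @ residuals / (len(z) - 3)))

rng = np.random.default_rng(0)
xy = rng.uniform(-1.0, 1.0, size=(500, 2))
z = 0.2 * xy[:, 0] - 0.1 * xy[:, 1] + 5.0 + rng.normal(0.0, 0.05, 500)
sigma_hat = plane_fit_noise(np.column_stack([xy, z]))   # should be close to 0.05
```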
Figure 7. Sensor noise as a function of surface orientation. The figure
shows σ (roughly 0.18-0.22) as a function of the angle between the surface and
the scanner's X and Y axes. Most of the variation is due to surface slope in the
direction of the scan line (the X direction).
The results of the first experiment are shown in Figure 8. The procedure used was
essentially the same as that described in Section 3.2, though we used the smoothly
[Figure 8 data: percentages of trials exceeding the 99% confidence threshold at each
latitude; roughly 1.7-4.3% for the L1 statistic, 27-55% for the L2 statistic, and
near-nominal values for the L3 statistic, with all three rising to 100% where the
model is grossly violated.]
Figure 8. Comparison of the L1, L2, and L3 lack-of-fit tests for real data
obtained using the McGill-NRC rangefinder.
joined cylinder and block shown in Figure 6, because it was easier to fabricate than the
spherically capped cylinder. The sampling was also changed to take into account the
different configuration, and to prevent inclusion of points not on the object's surface.
The results were accumulated from 536 trials. It took approximately 2 minutes for
each trial and around 20 hours to collect the complete data set. In general the results
indicate that the L1 statistic overestimates the amount of misfit slightly, that the L2
statistic is in gross error, but that the L3 statistic still behaves very much as predicted
by the theory.
The qualitative behaviour of the L1 lack-of-fit statistic matches that in the simulations,
except that the percentage of trials exceeding the 99% confidence level is about
twice that expected (1.7%-2.8%, or 9-13 trials). The cause of this discrepancy is indicated
in Figure 9, where we show a histogram of the residual errors left after fitting
a superellipsoid to a patch of range data scanned from the cylindrical part of the surface
(Figure 6). When compared to the normal distribution with standard deviation
σ computed from (5) we observe that the residuals depart from the assumption of
normality: there is asymmetry, and the tails of the histogram are somewhat thicker
than expected when compared with the width of the peak. One has the impression
that the distribution is composed of two or more normal distributions with different
variances and offset means, which is the kind of effect expected due to the variation
of σ with surface orientation and distance. In addition we observed an overall upward
drift in the residual errors over the 20 hour duration of the experiment, indicating
that the actual sensor noise worsened during this period. The net result is that the
values of σ̂ obtained from the residual errors are greater than the assumed sensor
noise, so the values of the L1 statistic are higher than expected.
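A noise-level check of the kind underlying the L1 statistic can be illustrated with a generic variance-ratio test (a sketch, not the paper's exact formulation; `chi2_99` uses the Wilson-Hilferty approximation to the chi-square 99% point, and the degrees of freedom consumed by the model fit are ignored):

```python
import numpy as np

def chi2_99(n, z=2.326):
    """Wilson-Hilferty approximation to the chi-square 99% critical value."""
    return n * (1.0 - 2.0 / (9.0 * n) + z * np.sqrt(2.0 / (9.0 * n))) ** 3

def variance_ratio_misfit(residuals, sigma):
    """Sum of squared residuals over sigma^2 is ~ chi2(n) when the assumed
    sensor noise explains the residual variation; larger values flag misfit."""
    stat = float(np.sum(np.square(residuals)) / sigma ** 2)
    return stat, stat > chi2_99(len(residuals))

rng = np.random.default_rng(1)
s_good, rej_good = variance_ratio_misfit(rng.normal(0, 0.05, 200), sigma=0.05)
s_bad, rej_bad = variance_ratio_misfit(rng.normal(0, 0.20, 200), sigma=0.05)
```

With the assumed noise underestimating the true noise (as in the drift discussed above), the statistic inflates and the 99% threshold is exceeded.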
[Figure 9 data: ex4/cylblk.rids, N = 2048, mean = -0.016, std. dev. = 0.325.]
Figure 9. Histogram of residual errors left after fitting a superellipsoid
to cylindrical data. The solid line shows the normal distribution
with the same mean as the residual errors and with a standard deviation
σ computed using equation (5). The dotted line indicates the
normal distribution assumed for the sensor noise level.
The L2 statistic performs very badly, with the number of trials exceeding the 99%
confidence level at around 30 times that expected for a good fit. The reason for
the poor performance is that the errors in the repeat data sets are not independent
as demanded by the theory. In Figure 10, where we show 8 successive scans of the
same patch of surface, it can be seen that there is a noticeable amount of coherency
from scan to scan. For example there are similar patterns of variation in scans 4,
for the first 15 mirror positions, and in scans 6, 7, & 8 for the last 12.
As a result the noise σ̂_R estimated by looking at the differences between successive
scans will be significantly less than the variation along a scan, and since the latter
is effectively the residual variation left after fitting, it will look like misfit to the L2
lack-of-fit statistic. The reason for the repeatability in the "noise" from scan to
scan is not exactly known, but we have seen it in other laser range scanners as well.
One possibility is that it is caused by speckle interference induced when the laser
beam passes through the scanner's optics. However even if this kind of noise was
not present in the sensor, exactly the same problem would arise if the surface was
roughly textured or patterned. We must conclude that the L2 statistic will only be
useful in very specific circumstances.
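The deflation of the repeat-based noise estimate by scan-to-scan coherency can be reproduced with synthetic data (a sketch: the shared `pattern` term plays the role of the coherent, speckle-like component):

```python
import numpy as np

rng = np.random.default_rng(2)
n_scans, n_points = 8, 27
pattern = rng.normal(0.0, 0.3, n_points)                  # coherent scan-to-scan component
scans = pattern + rng.normal(0.0, 0.1, (n_scans, n_points))

# Noise estimated from differences between successive scans: the coherent
# part cancels, so only the independent component (0.1) remains.
sigma_repeat = float(np.std(np.diff(scans, axis=0)) / np.sqrt(2.0))

# Variation along a scan line sees pattern + noise, and is therefore larger.
sigma_along = float(np.std(scans, axis=1).mean())
```

The gap between `sigma_repeat` and `sigma_along` is exactly what a repeat-measurement test then misreads as model misfit.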
Figure 10. Repeated Scans. The figure shows 8 sequential scans
taken approximately 1 second apart from exactly the same place on
the cylindrical part of the surface used in the experiments. The scans
have been offset from each other and plotted horizontally as a function
of the scanner's X mirror index. The vertical scale of each scan is
indicated by the 1 mm bar on the left.
In comparison the L3 statistic still behaves as expected, even though the scanner
noise characteristics depart from the underlying theoretical assumptions. In fact
the results seem to match the theory better than those obtained in the simulations
(Figure 5), where we observed a lower than expected number of trials exceeding the 99%
confidence interval. A possible reason for this is that in the real experiments over
twice as many points (12 vs 5) were taken in each scan line, so the L3 statistic will
be less sensitive to underestimation of degrees of freedom.
A curious point, and one which highlights a limitation of the L3 statistic, is the dip
in misfit at one particular latitude. If this feature is statistically significant (1% represents
only 5 trials in this experiment) the interpretation is that the data obtained at this
latitude fit the current model better than the data from all the higher latitudes.
That can happen when the measured surface is not exactly superellipsoidal (e.g.
because of small errors in the stage calibration). The fitted surface would position
itself to minimize the residual error, so some parts of it would be inside the measured
surface and some parts outside. If the last scan happens to fall near the place the
fitted and measured surfaces cross, then the residual errors for it will be lower than
average, resulting in a low value of L3. We cannot expect the L3 statistic to detect
slow departures from the valid class of models, but we can expect it to function well
when changes are abrupt (e.g. segmentation errors).
4. An Example
Figure 1 illustrates a scenario which typifies the misfit problem - the jointed
right arm of the mannequin has been described using a single model, rather than
two as one would expect. In this particular situation segmentation should take place
along the local concave creases marking the join between the upper and lower arm.
However the discrete sampling of the scanner has "skipped" over the fine detail of the
elbow joint. A crease is detected around the elbow, but it is not continuous enough
to completely sever the arm data into two surface patches. It can be argued that
a more detailed analysis could handle this situation, e.g. [9-11, 24], but there will
always be times when it is just not possible to segment smoothly joined, articulated
objects at such a low level. Consider the out-stretched human arm - how is the
boundary that separates it into the upper and lower arms precisely delineated?
Instead we have to rely on more global models of the surface to provide additional
clues as to when data should be partitioned. Model misfit is one such clue, and
where it occurs may, under the right conditions, indicate good places to re-partition.
However for the arm of the mannequin there is no clue that anything is wrong - the
surface and the scanner have conspired to produce unsegmented data that can be fit
very well by a superellipsoid model. Only by collecting more data can the structure
of the mannequin be correctly inferred and resolved, which brings us back to the gaze
planning strategy described in Section 2.
Recall that the strategy operates by directing the scanner to that position on
the surface of the current model that exhibits highest uncertainty, or in the case of
incremental planning, to a position along the direction of the uncertainty gradient
[23]. According to the theory we expect that when data collected at the new sensor
position are added to the model, σ̂ will not increase by any significant degree. This can
be confirmed by applying an appropriate lack-of-fit statistic (Table 1). In the event
that misfit is detected, further data acquisition can be inhibited until the problem is
resolved, e.g. by re-applying the segmentation algorithm to the composite data set.
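The acquire-fit-test cycle described here can be sketched in a toy one-dimensional setting (entirely hypothetical: a straight line stands in for the superellipsoid, and a running residual estimate for the lack-of-fit statistics; acquisition stops once σ̂ exceeds a multiple of the assumed sensor noise):

```python
import numpy as np

rng = np.random.default_rng(3)
sigma = 0.01                                    # assumed sensor noise

def sigma_hat(xs, ys):
    """Residual noise estimate after refitting a line to all data so far."""
    coef = np.polyfit(xs, ys, 1)
    r = ys - np.polyval(coef, xs)
    return float(np.sqrt(np.mean(r * r)))

xs, ys, flagged = [], [], None
for step in range(10):                          # each step plays the role of one new scan
    x = np.linspace(step * 0.2, step * 0.2 + 0.2, 20)
    y = np.where(x < 1.0, x, 2.0 - x) + rng.normal(0.0, sigma, x.size)  # kink at x = 1
    xs.extend(x)
    ys.extend(y)
    if sigma_hat(np.array(xs), np.array(ys)) > 3 * sigma:
        flagged = step                          # misfit detected: inhibit acquisition
        break
```

The model fits well until the scans cross the kink, at which point the residual estimate jumps and further acquisition is inhibited.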
Another object which can cause the gaze planning strategy to fail is the small owl
shown in Figure 11. The problem here is that the crease separating the head of the
owl from its body does not completely encircle the neck (Figure 12 top). If the initial
data is taken from the back of the owl (Figure 12 bottom) a single model is fit to
both the head and the body but the strategy will cause the scanner to move towards
Figure 11. The owl. a) View of the owl mounted in the rotary stage.
b) A typical sequence of scans collected from a band encircling the
region around the owl's neck.
the front of the owl where two models are more appropriate. We investigated the
behaviour of the L3 statistic in this situation by mounting the owl in the stage so
data could be collected from the smooth portion of the back. The initial model fit
was cylindrical, though both the L1 and L2 statistics rejected it outright. The initial
misfit is unsurprising given that the soapstone surface is roughly textured, and that
the back is slightly concave. A sequence of single line scans was collected by rolling
the owl's body over until it faced the scanner - the direction predicted as being
the quickest way to improve knowledge of the model surface according to the gaze
planning techniques discussed in Section 2.
A typical set of scans is shown in Figure 11b and the L3 lack-of-fit histograms in
Figure 13. The scale of the histogram has been expanded (the confidence interval is
99.99999%) to reveal the pattern of change even when the misfit is large. Initially
the value of the statistic stays below the 99% confidence level but rises rapidly as
soon as data are scanned from part of the owl's wing at a latitude of 40°. After
this the statistic starts to adapt to the variation exhibited by the wing parts until
by 0° the misfit levels have almost dropped back to normal. A further abrupt jump
occurs when the scanner encounters the crease around the owl's neck, but the
Figure 12. Two views of the owl. Depending on whether it is viewed
from the front (top view) or from the back (bottom view), the owl can
be represented by either two models or a single model respectively.
statistic adapts to this change as well, falling to near normal levels by the time the
face is fully in view.
As can be seen by examining the trace of histogram peaks in Figure 13, the
L3 statistic provides a stable indication of misfit errors associated with the surface
boundaries that would normally be determined by segmentation. In practice
we have found close agreement between misfit indications based on the L3 statistic
and empirically determined modeling errors observed in our laboratory system.
The assumptions regarding the use of the L3 and the other lack-of-fit statistics are
summarized below in Table 1.
Assumption                                      L1    L2    L3
Sensor noise is normally distributed 3          yes   yes   yes
Sensor noise level σ known                      yes   no    no
Sensor noise level is constant 4                yes   yes   weakly
Residual errors due only to sensor noise        yes   yes   no
Residual errors spatially independent 5         yes   yes   yes
Residual errors temporally independent 6        yes   yes   no
Repeat measurements available                   no    yes   no
                                                no    no    yes
Table 1: Assumptions used in the different lack-of-fit statistics.
5. Discussion and Conclusions
The results we obtain match those our intuition leads us to expect. Perhaps this is
better illustrated by considering the analogy of an archaeologist who has discovered
an object shaped as above but with only the top of the joint protruding from the sand.
So great is the antiquity of this object that the original surface detail has eroded, and
the discoverer can only guess at its true nature. Initially it appears to be the top of
a container of unusual design, perhaps a burial casket, but only further excavation
will tell. From the exposed shape the object looks significantly longer than it is wide,
and it will therefore be more economical to begin digging down the object's side. This
is done, and as the excavation proceeds the initial expectations are confirmed - the
object still appears to be a casket. However at some depth further digging suddenly
reveals a concavity in the object's surface so pronounced that the archaeologist is
forced to drop the casket hypothesis and consider others.
3 In practice the assumption of normality can be weakened. The factor of real importance is that
the cumulative lack-of-fit distribution is accurate at the chosen confidence level, because we can
then make accurate predictions about the expected rate of misfit due to random chance.
4 Constancy of the noise can also be weakened in practice, particularly with the L3 statistic.
5 By spatially independent we mean that errors at different surface locations do not depend on each
other. It is not strictly necessary that this be the case, for example the errors could be Markovian
provided the scale of interaction is much smaller than the spatial extent of the measurements.
6 By temporally independent we mean that the errors from exactly the same surface location at
different times are independent. For example, the residual errors resulting from a rough surface are
not temporally independent.
Thus it is with the arm of the mannequin and the back of the owl. Initially the
laser scanner exposes only a part of the surface so our knowledge of the global shape
is extremely uncertain. To resolve this uncertainty we must explore, and to guide our
exploration we need an initial hypothesis - that the shape is a superellipsoid. However
we must always be on guard lest that hypothesis fail. This is the role of the test
for misfit - to tell us to reconsider, either by choosing a different hypothesis or by
re-examining the data. We are particularly interested in the latter scenario because
it is common in an active vision context. Very often we have strong prior knowledge
about the appropriate model to use for a given task, but fail because the data used
to fit the model is wrong, e.g. segmentation errors.
Can we gain any insight into the nature and location of such errors from the exploration
procedure? This would be of obvious advantage to a backtracking procedure.
In general the answer appears to be no. While we can determine the exact point
at which the model fails, we still cannot ascertain whether this is due to the data
already collected or to the data newly acquired. In the case of failures due to partitioning
errors, our only alternative thus far is to go back and re-sample the data at
higher precision such that the segmentation algorithm [6, 10, 11] has a better chance
of detecting the missing boundary.
In this paper we have outlined a framework for this process of what we call autonomous
exploration. We have shown that by using the current estimate of a model
to predict the locations of surfaces in yet to be explored regions of a scene, we can
both improve estimates of model parameters as well as validate its ability to describe
the scene. Knowing when we are wrong is not sufficient. In an unstructured environment
an autonomous system must act to correct that wrong either by selecting
a more appropriate model or by re-interpreting the data in light of cues provided
by the failure of the model. These topics are currently under investigation in our
laboratory.
References
Segmentation through variable-order surface fitting
Model discrimination for nonlinear regression models.
Shading flows and scenel bundles: A new approach to shape from shading.
Probability and statistics for the engineering
Darboux frames
SNAKES: active contour models.
Describing complicated objects by implicit polyno- mials
Toward a computational theory of shape: An overview.
Finding the parts of objects in range images.
Partitioning range images using curvature and scale.
Introduction to the Theory of Statistics.
Closed form solutions for physically based shape modelling and recognition.
Computational vision and regularization theory.
Numerical Recipes in C - The Art of Scientific Computing
Laser range finder based on synchronized scanners.
Recovery of parametric models from range images: The case for superquadrics with global deformations.
Dynamic 3D models with local and global deformations: Deformable superquadrics.
From uncertainty to visual exploration.
Uncertain views.
Autonomous exploration: Driven by uncertainty.
Autonomous exploration: Driven by uncertainty.
The Organization of Curve Detection: Coarse Tangent Fields and Fine Spline Coverings.
Keywords: lack-of-fit statistics; active vision; autonomous exploration; misfit

Approximating Bayesian Belief Networks by Arc Removal

Abstract. I propose a general framework for approximating Bayesian belief networks through
model simplification by arc removal. Given an upper bound on the absolute error allowed on
the prior and posterior probability distributions of the approximated network, a subset of arcs
is removed, thereby speeding up probabilistic inference.

1 Introduction
Today, more and more applications based on the Bayesian belief network 1 formalism are
emerging for reasoning and decision making in problem domains with inherent uncertainty.
Current applications range from medical diagnosis and prognosis [1], computer vision [10], to
information retrieval [2]. As applications grow larger, the belief networks involved increase
in size. And as the topology of the network becomes more dense, the run-time complexity of
probabilistic inference increases dramatically, reaching a state where real-time decision making
eventually becomes prohibitive; exact inference in general with Bayesian belief networks has
been proven to be NP-hard [3].
For many applications, computing exact probabilities from a belief network is liable to be
unrealistic due to inaccuracies in the probabilistic assessments for the network. Therefore, in
general, approximate methods suffice. Furthermore, the employment of approximate methods
alleviates probabilistic inference on a network at least to some extent. Approximate methods
provide probability estimates either by employing simulation methods for approximate
inference, first introduced by Henrion [7], or through methods based on model simplification;
examples are annihilating small probabilities [8] and removal of weak dependencies [13].
With the former approach, stochastic simulation methods [4] provide for approximate
inference based on generating multisets of configurations of all the variables from a belief
network. From this multiset, (conditional) probabilities of interest are estimated from the
occurrence frequencies. These probability estimates tend to approximate the true probabilities
Part of this work has been done at Utrecht University, Dept. of Computer Science, The Netherlands.
1 In this paper we adopt the term Bayesian belief network or belief network for short. Belief networks are
also known as probabilistic networks, causal networks, and recursive models.
if the generated multiset is sufficiently large. Unfortunately, the computational complexity of
approximate methods is still known to be NP-hard [5] if a certain accuracy of the probability
estimates is demanded. Hence, just like exact methods, simulation methods have an
exponential worst-case computational complexity.
As has been demonstrated by Kjaerulff [13], forcing additional conditional independence
assumptions portrayed by a belief network provides a promising direction towards belief net-work
approximation in view of model simplification. However, Kjaerulff's method is specifically
tailored to the Bayesian belief universe approach to probabilistic inference [9] and model
simplification is not applied to a network directly but to the belief universes obtained from
a belief network. The method identifies weak dependencies in a belief universe of a network
and removes these by removing specific links from the network thereby enforcing additional
conditional independencies portrayed by the network. As a result, a speedup in probabilistic
inference is obtained at a cost of a bounded error in inference.
In this paper we propose a general framework for belief network approximation by arc
removal. The proposed approximation method adopts a similar approach as Kjaerulff's
method [13] with respect to the means for quantifying the strength of arcs in a network
in terms of the Kullback-Leibler information divergence statistic. In general, the Kullback-Leibler
information divergence statistic [14] provides a means for measuring the divergence
between a probability distribution and an approximation of the distribution, see e.g. [22].
However, there are important differences to be noted between the approaches. Firstly, the
type of independence statements enforced in our approach renders the direct dependence relationship
portrayed by an arc superfluous, in contrast to Kjaerulff's method where other links
may be rendered superfluous as well. As a consequence, we apply the changes to the network
more locally, which allows a large set of arcs to be removed simultaneously. Secondly, as
has been mentioned above, Kjaerulff's method operates only with the Bayesian belief universe
approach to probabilistic inference using the clique-tree propagation algorithm of Lauritzen
and Spiegelhalter [16]. In contrast, the framework we propose operates on a network directly
and therefore applies to any type of method for probabilistic inference. Finally, given an
upper bound on the posterior error in probabilistic inference allowed, a (possibly large) set of
arcs is removed simultaneously from a belief network requiring only one pre-evaluation of the
network in contrast to Kjaerulff's method in which conditional independence assumptions are
added to the network one at a time.
The rest of this paper is organized as follows. Section 2 provides some preliminaries from
the Bayesian belief network formalism and introduces some notions from information theory.
In Section 3, we present a method for removing arcs from a belief network and analyze the
consequences of the removals on the represented joint probability distribution. In Section 4,
some practical approximation schemes are discussed, aimed at reducing the computational
complexity of inference on a belief network. To conclude, in Section 5 the advantages and
disadvantages of the presented method are compared to other existing methods for approximating
networks.
2 Preliminaries
In this section we briefly review the basic concepts of the Bayesian belief network formalism
and some notions from information theory. In the sequel, we assume that the reader is well
acquainted with probability theory and with the basic notions from graph theory.
2.1 Bayesian Belief Networks
Bayesian belief networks allow for the explicit representation of dependencies as well as independencies
using a graphical representation of a joint probability distribution. In general,
undirected and directed graphs are powerful means for representing independency models,
see e.g. [21, 22]. Associated with belief networks are algorithms for probabilistic inference
on a network by propagating evidence, providing a means for reasoning with the uncertain
knowledge represented by the network.
A belief network consists of a qualitative and a quantitative representation of a joint
probability distribution. The qualitative part takes the form of an acyclic digraph G in which
each vertex represents a discrete statistical variable for stating the truth of a
proposition within a problem domain. In the sequel, the notions of vertex and variable are used
interchangeably. Each arc in the digraph, which we denote as V_i -> V_j, with vertex V_i
called the tail of the arc and vertex V_j called the head of the arc, represents a direct
causal influence between the vertices discerned. Then, vertex V_i is called an immediate
predecessor of vertex V_j and vertex V_j is called an immediate descendant of vertex V_i.
Furthermore, associated with the digraph of a belief network is a numerical assessment of
the strengths of the causal influences, constituting the quantitative part of the network.
In the sequel, for ease of exposition, we assume binary statistical variables taking values
in the domain {true, false}. However, the generalization to variables taking values in any
finite domain is straightforward. Each variable V_i represents a proposition, where true
is denoted as v_i and false is denoted as ¬v_i. For a set of variables V, the conjunction
of the variables in V is called the configuration scheme of V, denoted C_V; a configuration
c_V of V is a conjunction of value assignments to the variables in V. In the sequel, we use
the concept of configuration scheme to denote that a specific property holds for all possible
configurations of a set of variables.
Definition 2.1 A Bayesian belief network is a tuple (G, Γ), where G = (V(G), A(G))
is an acyclic digraph with vertices V(G) and arcs A(G), and Γ = {γ_{V_i} : V_i ∈ V(G)}
is a set of real-valued functions γ_{V_i} : {v_i, ¬v_i} × {C_{π_G(V_i)}} -> [0, 1],
called assessment functions, such that for each configuration c_{π_G(V_i)}
of the set π_G(V_i) of immediate predecessors of vertex V_i we have that
γ_{V_i}(v_i | c_{π_G(V_i)}) + γ_{V_i}(¬v_i | c_{π_G(V_i)}) = 1.
A probabilistic meaning is assigned to the topology of the digraph of a belief network by
means of the d-separation criterion [18]. The criterion allows for the detection of dependency
relationships between the vertices of the network's digraph by traversing undirected paths,
called chains, comprised by the directed links in the digraph. Chains can be blocked by a set
of vertices as is stated more formally in the following definition.
Definition 2.2 Let G = (V(G), A(G)) be an acyclic digraph. Let s be a chain in G. Then
s is blocked by a set of vertices Y ⊆ V(G) if s contains three consecutive vertices
X_1, X_2, X_3 for which one of the following three conditions is fulfilled:
- the arcs X_1 -> X_2 and X_2 -> X_3, or X_1 <- X_2 and X_2 <- X_3, are on the chain and X_2 ∈ Y;
- the arcs X_1 <- X_2 and X_2 -> X_3 are on the chain and X_2 ∈ Y;
- the arcs X_1 -> X_2 and X_2 <- X_3 are on the chain and σ*(X_2) ∩ Y = ∅, where σ*(X_2) is
the set of vertices composed of X_2 and all its descendants.
Note that a chain s is blocked by ∅ if and only if it contains arcs X_1 -> X_2 and X_2 <- X_3. In this
case, vertex X_2 is called a head-to-head vertex with respect to s [6].
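The blocking conditions of Definition 2.2 translate directly into code (a sketch; the digraph is represented by a `children` adjacency map, and `closure` computes the set σ*(X_2)):

```python
def closure(v, children):
    """sigma*(v): the set composed of v and all its descendants."""
    seen, stack = {v}, [v]
    while stack:
        for c in children.get(stack.pop(), ()):
            if c not in seen:
                seen.add(c)
                stack.append(c)
    return seen

def blocked(chain, Y, children):
    """Is the chain blocked by the vertex set Y (Definition 2.2)?"""
    Y = set(Y)
    for x1, x2, x3 in zip(chain, chain[1:], chain[2:]):
        serial = (x2 in children.get(x1, ()) and x3 in children.get(x2, ())) or \
                 (x2 in children.get(x3, ()) and x1 in children.get(x2, ()))
        diverging = x1 in children.get(x2, ()) and x3 in children.get(x2, ())
        converging = x2 in children.get(x1, ()) and x2 in children.get(x3, ())
        if (serial or diverging) and x2 in Y:
            return True
        if converging and not (closure(x2, children) & Y):
            return True
    return False

# V1 -> V2 <- V3: V2 is head-to-head, so the chain is blocked by the
# empty set but not by {V2}.
children = {"V1": ["V2"], "V3": ["V2"]}
```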
Definition 2.3 Let G = (V(G), A(G)) be an acyclic digraph and let X, Y, Z ⊆ V(G) be
disjoint subsets of vertices from G. The set Y is said to d-separate the sets X and Z in G,
denoted ⟨X | Y | Z⟩^d_G, if for each V_i ∈ X and V_j ∈ Z every chain from V_i to V_j in G
is blocked by Y.
The d-separation criterion provides for the detection of probabilistic independence relations
from the digraph of a belief network, as is stated more formally in the following definition.
Definition 2.4 Let G = (V(G), A(G)) be an acyclic digraph. Let Pr be a joint probability
distribution on V(G). Digraph G is an I-map for Pr if ⟨X | Z | Y⟩^d_G implies X ⊥⊥_Pr Y | Z for
all disjoint subsets X, Y, Z ⊆ V(G), i.e. X is conditionally independent of Y given Z in Pr.
By the chain-rule representation of a joint probability distribution from probability theory,
the initial probability assessment functions of a belief network provide all the information
necessary for uniquely defining a joint probability distribution on the set of variables discerned
that respects the independence relations portrayed by the digraph [11, 18].
Theorem 2.5 Let (G, Γ) be a belief network as defined in Definition 2.1. Then,
Pr(C_{V(G)}) = ∏_{V_i ∈ V(G)} γ_{V_i}(V_i | C_{π_G(V_i)})
defines a joint probability distribution Pr on V(G) such that G is an I-map for Pr.
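Theorem 2.5's chain-rule product can be computed directly (a sketch; the two-node network and its numbers are illustrative, not taken from the paper):

```python
def joint_probability(config, parents, gamma):
    """Chain-rule product of Theorem 2.5: the joint probability of a full
    configuration is the product, over all variables, of the assessment
    function value given the parents' configuration."""
    p = 1.0
    for v, value in config.items():
        parent_cfg = tuple(config[u] for u in parents.get(v, ()))
        p *= gamma[v](value, parent_cfg)
    return p

# A two-node network V1 -> V2 with made-up assessment functions.
parents = {"V2": ("V1",)}
gamma = {
    "V1": lambda val, pc: 0.3 if val else 0.7,
    "V2": lambda val, pc: (0.9 if val else 0.1) if pc[0] else (0.2 if val else 0.8),
}
p = joint_probability({"V1": True, "V2": False}, parents, gamma)   # 0.3 * 0.1
```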
A belief network therefore uniquely represents a joint probability distribution. For computing
(conditional) probabilities from a network, several efficient algorithms have been developed,
of which Pearl's polytree algorithm with cutset conditioning [18, 19] and the method of
clique-tree propagation by Lauritzen and Spiegelhalter [16] (and combinations [20]) are the
most widely used algorithms for exact probabilistic inference. Simulation methods provide
for approximate probabilistic inference, see [4] for an overview.
2.2 Information Theory
The Kullback-Leibler information divergence [14] has several important applications in
statistics, one of which is measuring how well one joint probability distribution can be
approximated by another with a simpler dependence structure, see e.g. [22]. In the sequel,
we will make extensive use of the Kullback-Leibler information divergence. Before defining
the Kullback-Leibler information divergence more formally, the concept of continuity is
introduced [14].
Definition 2.6 Let $V$ be a set of statistical variables and let Pr and $\Pr'$ be joint probability distributions on $V$. Then Pr is absolutely continuous with respect to $\Pr'$ over a subset of variables $X \subseteq V$, denoted as $\Pr \ll \Pr' \parallel X$, if $\Pr'(c_X) = 0$ implies $\Pr(c_X) = 0$ for all configurations $c_X$ of $X$.
We will write $\Pr \ll \Pr'$ for $\Pr \ll \Pr' \parallel V$ for short. Note that the continuity relation is a reflexive and transitive relation on probability distributions. Furthermore, the continuity relation satisfies
• if $\Pr \ll \Pr' \parallel X$, then $\Pr \ll \Pr' \parallel Y$ for all subsets of variables $X, Y \subseteq V$ with $Y \subseteq X$;
• if $\Pr \ll \Pr' \parallel X \cup Y$, then $\Pr(\cdot \mid c_Y) \ll \Pr'(\cdot \mid c_Y) \parallel X$ for all subsets of variables $X, Y \subseteq V$ and each configuration $c_Y$ of $Y$ with $\Pr'(c_Y) > 0$.
That is, if a joint probability distribution Pr is absolutely continuous with respect to a distribution
Pr 0 over some set of variables X, then Pr is also absolutely continuous with respect to
$\Pr'$ over any subset of $X$. In addition, any posterior distribution $\Pr(\cdot \mid c_Y)$ given a configuration $c_Y$ of $Y$ is also absolutely continuous with respect to the posterior distribution $\Pr'(\cdot \mid c_Y)$.
Definition 2.7 Let $V$ be a set of statistical variables and let $X \subseteq V$. Let Pr and $\Pr'$ be joint probability distributions on $V$. The Kullback-Leibler information divergence or cross entropy of Pr with respect to $\Pr'$ over $X$, denoted as $I(\Pr; \Pr'; X)$, is defined as
$$I(\Pr; \Pr'; X) = \sum_{c_X} \Pr(c_X) \log \frac{\Pr(c_X)}{\Pr'(c_X)},$$
where the sum ranges over all configurations $c_X$ of $X$ with $\Pr(c_X) > 0$.
In the sequel, we will write $I(\Pr; \Pr')$ for $I(\Pr; \Pr'; V)$ for short. Note that the information divergence is not symmetric in Pr and $\Pr'$ and is finite if and only if Pr is absolutely continuous with respect to $\Pr'$. Furthermore, the information divergence $I$ satisfies
• $I(\Pr; \Pr'; X) \ge 0$ for all subsets of variables $X \subseteq V$; especially, $I(\Pr; \Pr'; X) = 0$ if and only if $\Pr(C_X) = \Pr'(C_X)$;
• $I(\Pr; \Pr'; Y) \le I(\Pr; \Pr'; X)$ for all subsets of variables $Y \subseteq X \subseteq V$;
• $I(\Pr; \Pr'; X \cup Y) = I(\Pr; \Pr'; X) + I(\Pr; \Pr'; Y)$ for all subsets of variables $X, Y \subseteq V$ if $X$ and $Y$ are independent in both Pr and $\Pr'$.
In principle, the base of the logarithm for the Kullback-Leibler information divergence is
immaterial, providing only a unit of measure; in the sequel, we use the natural logarithm.
With this assumption the following property holds.
Proposition 2.8 Let $V$ be a set of statistical variables and let Pr and $\Pr'$ be joint probability distributions on $V$. Furthermore, let $I$ be the Kullback-Leibler information divergence as defined in Definition 2.7. Then,
$$\sum_{c_X} \bigl|\Pr(c_X) - \Pr'(c_X)\bigr| \;\le\; \sqrt{2\, I(\Pr; \Pr'; X)}$$
for all $X \subseteq V$.
Hence, the Kullback-Leibler information divergence provides an upper bound on the absolute divergence $|\Pr(c_X) - \Pr'(c_X)|$ for all configurations $c_X$ of $X$, a property of the Kullback-Leibler information divergence known as the information inequality [15].
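The divergence and the Pinsker-style information inequality admit a direct numeric check; the sketch below uses two arbitrary made-up distributions and the bound $\sqrt{2I}$ on the summed absolute divergence (natural logarithm throughout).

```python
import math

def kl(pr, pr0):
    """Kullback-Leibler divergence (natural logarithm), summing over the
    configurations c with pr(c) > 0, as in Definition 2.7."""
    return sum(p * math.log(p / pr0[c]) for c, p in pr.items() if p > 0)

# Two arbitrary illustrative distributions over four configurations.
pr = {"00": 0.10, "01": 0.20, "10": 0.30, "11": 0.40}
pr0 = {c: 0.25 for c in pr}

abs_div = sum(abs(pr[c] - pr0[c]) for c in pr)  # summed absolute divergence
bound = math.sqrt(2 * kl(pr, pr0))              # information inequality bound
```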
Figure 1: Reducing the complexity of cutset conditioning (CC) and clique-tree propagation (CTP) by removing arc $V_r \to V_s$.
3 Approximating a Belief Network by Removing Arcs
In this section we propose a method for removing arcs from a belief network and we investigate
the consequences of the removal on the computational resources and the error introduced. For
ease of exposition, a method for removing a single arc from a belief network is introduced first.
Then, based on this method and the observations made, a method for multiple simultaneous
arc removals is presented.
3.1 Reducing the Complexity of a Belief Network by Removing Arcs
The computational complexity of exact probabilistic inference on a belief network depends to a large extent on the connectivity of the digraph of the network. Removing an arc from the
digraph of the network may substantially reduce the complexity of probabilistic inference on
the network. For Pearl's polytree algorithm with the method of cutset conditioning [18, 19],
undirected cycles, called loops [18], can be broken resulting in smaller loop cutsets to be
used. The size of the cutset determines the computational complexity of inference on the
network to a large extent. For the method of clique-tree propagation [16], a belief network is
first transformed into a decomposable graph. Here, the computational complexity of inference
depends to a large extent on the size of the largest clique in the decomposable graph. Removal
of an appropriate arc or edge results in splitting cliques into several smaller cliques, see e.g.
the method of Kjaerulff [13], yielding a reduction in computational complexity of inference
on the decomposable graph.
In Figure 1 we have depicted the effect of removing an arc from the digraph of a belief
network for the method of cutset conditioning and for the method of clique-tree propagation.
For cutset conditioning, a vertex in the cutset (e.g. the vertex drawn in shading) is required
to break the loop. Since removal of arc $V_r \to V_s$ breaks the loop, a smaller cutset may be
necessary. For clique-tree propagation, the decomposable graph obtained from the example
belief network has three cliques, each with 4 vertices. Removal of arc $V_r \to V_s$ results in a
decomposable graph with four smaller cliques, one with 2 and three with 3 vertices.
For approximate methods, the computational complexity of for example forward simulation
[4] depends to some extent on the distance from a root vertex to a leaf vertex. Therefore,
the removal of arcs may also yield a reduction in the complexity of approximate inference.
However, the amount of reduction in complexity is more difficult to analyze and measure in general than for exact methods, and in the sequel we will therefore discuss arc removal in view of exact methods for probabilistic inference.
3.2 Removing an Arc from a Belief Network
Although several methods for removing an arc from a belief network can be devised, the
method for removal of an arc as defined in the following definition is the most natural choice.
This will be made clear when we analyze the effects of the removal.
Definition 3.1 Let $B = (G, \Gamma)$ be a belief network and let Pr be the joint probability distribution defined by $B$. Let $V_r \to V_s$ be an arc in $G$. We define the tuple $B_{V_r \not\to V_s} = (G_{V_r \not\to V_s}, \Gamma_{V_r \not\to V_s})$, where $G_{V_r \not\to V_s}$ is the acyclic digraph with $V(G_{V_r \not\to V_s}) = V(G)$ and $A(G_{V_r \not\to V_s}) = A(G) \setminus \{V_r \to V_s\}$, and where $\Gamma_{V_r \not\to V_s} = \{\gamma'_{V_i} \mid V_i \in V(G)\}$ is the set of functions with
$$\gamma'_{V_s}(V_s \mid C_{\pi_G(V_s) \setminus \{V_r\}}) = \Pr(V_s \mid C_{\pi_G(V_s) \setminus \{V_r\}}) \quad \text{and} \quad \gamma'_{V_i} = \gamma_{V_i} \text{ for all } V_i \neq V_s.$$
Note that the network $B_{V_r \not\to V_s}$ resulting after removal of an arc $V_r \to V_s$ from the digraph $G$ of a belief network $B$ again constitutes a belief network. In this network, only the assessment functions for the head vertex of the arc are changed. In the sequel, we will refer to $B_{V_r \not\to V_s}$ as the approximated belief network after removal of arc $V_r \to V_s$, and the operation of computing $B_{V_r \not\to V_s}$ will be referred to as approximating the network.
Removal of an arc from a belief network may result in a change of the represented joint
probability distribution. However, the represented dependency structure of the distribution
portrayed by the graphical part of the network may be retained by introducing a virtual arc
between the two vertices for which a physical arc is removed. A virtual arc may serve for the
detection of dependencies and independencies in the original probability distribution using
the d-separation criterion. A virtual arc, however, is not used in probabilistic inference, still
allowing for a faster, approximate computation of prior and posterior probabilities from the
simplified network.
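A minimal sketch of this arc-removal operation, assuming a hypothetical network with two independent binary roots A, B and a common child C (all numbers made up): the new assessment function of the head vertex is obtained by marginalizing the removed parent out of the original joint distribution.

```python
# Hypothetical network: roots A and B, arcs A -> C and B -> C (all binary).
p_a = {True: 0.3, False: 0.7}
p_b = {True: 0.6, False: 0.4}
p_c_true = {(True, True): 0.9, (True, False): 0.5,
            (False, True): 0.4, (False, False): 0.1}  # gamma_C(C=true | A, B)

def joint(a, b, c):
    pc = p_c_true[(a, b)] if c else 1.0 - p_c_true[(a, b)]
    return p_a[a] * p_b[b] * pc

# Removing arc B -> C: the new assessment function for head vertex C becomes
# Pr(C | A), computed from the original joint by marginalizing out B.
def pr_c_given_a(c, a):
    return sum(joint(a, b, c) for b in (True, False)) / p_a[a]

def joint_removed(a, b, c):
    return p_a[a] * p_b[b] * pr_c_given_a(c, a)

# Marginals over non-descendants of the head vertex C are unchanged here.
marg_a = sum(joint_removed(True, b, c)
             for b in (True, False) for c in (True, False))
```

The check on `marg_a` reflects the property proved later for arbitrary networks: the removal only perturbs the distribution over the head vertex and its descendants.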
3.3 The Error Introduced by Removing an Arc
Removing an arc from a belief network yields a (slightly) simplified network that is faster
in inference but exhibits errors in the marginal and conditional probability distributions. In
this section we will analyze the errors introduced in the prior and posterior distributions
upon belief network approximation by removal of an arc. These effects can be summarized as
introducing both a change in the qualitative (ignoring any virtual arcs) as well as a change
in the quantitative representation of a joint probability distribution.
The Qualitative Error in Prior and Posterior Distributions
The change in the qualitative belief network representation of the probabilistic dependency
structure by removing an arc from a belief network is described by the following lemma.
Lemma 3.2 Let $G$ be an acyclic digraph and let $V_r \to V_s$ be an arc in $G$. Let $G_{V_r \not\to V_s}$ be the digraph $G$ with arc $V_r \to V_s$ removed, that is, $V(G_{V_r \not\to V_s}) = V(G)$ and $A(G_{V_r \not\to V_s}) = A(G) \setminus \{V_r \to V_s\}$. Then, we have that $\langle \{V_r\} \mid \pi_{G_{V_r \not\to V_s}}(V_s) \mid \{V_s\} \rangle^{d}_{G_{V_r \not\to V_s}}$.
Proof. To prove that $\langle \{V_r\} \mid \pi_{G_{V_r \not\to V_s}}(V_s) \mid \{V_s\} \rangle^{d}_{G_{V_r \not\to V_s}}$ holds, we show that every chain from vertex $V_r$ to vertex $V_s$ in $G_{V_r \not\to V_s}$ is blocked by the set $\pi_{G_{V_r \not\to V_s}}(V_s)$. For such a chain $\rho$ from $V_r$ to $V_s$ two cases can be distinguished:
• $\rho$ comprises an arc $V_k \to V_s$ with $V_k \in \pi_{G_{V_r \not\to V_s}}(V_s)$; then the chain is blocked by $\pi_{G_{V_r \not\to V_s}}(V_s)$;
• $\rho$ comprises an arc $V_s \to V_k$; since $G_{V_r \not\to V_s}$ is acyclic, $\rho$ must contain a head-to-head vertex, i.e. a vertex with two converging arcs on $\rho$. Since such a vertex and its descendants do not occur in $\pi_{G_{V_r \not\to V_s}}(V_s)$, the chain is blocked by $\pi_{G_{V_r \not\to V_s}}(V_s)$. 2
The property states that after removing arc $V_r \to V_s$ from the digraph $G$ of a belief network, the simplified graphical representation yields that variable $V_r$ is conditionally independent of variable $V_s$ given $\pi_{G_{V_r \not\to V_s}}(V_s)$, being the set of immediate predecessors of $V_s$ in the digraph $G$ with arc $V_r \to V_s$ removed.
The Quantitative Error in the Prior Distribution
The change in the qualitative dependency structure portrayed by the network has its quantitative
counterpart as the two are inherently linked together in the belief network formalism.
To analyze the error of the approximated prior probability distribution, similar to [13, 22]
we use the Kullback-Leibler information divergence for a quantitative comparison in terms of
the divergence between the joint probability distribution defined by a belief network and the
approximated joint probability distribution obtained after removing an arc from the network.
To facilitate the investigation, we will give an expression for the approximated joint probability
distribution in terms of the original distribution. First, we will introduce some additional
notions related to arcs in a digraph that are useful for describing the properties that hold. These notions are built on the observation that the set of immediate predecessors $\pi_{G_{V_r \not\to V_s}}(V_s)$ d-separates tail vertex $V_r$ from head vertex $V_s$ in the digraph $G$ with arc $V_r \to V_s$ removed.
Definition 3.3 Let $G = (V(G), A(G))$ be an acyclic digraph and let $V_r \to V_s$ be an arc in $G$. We define the arc block of $V_r \to V_s$ in $G$, denoted as $\beta_G(V_r \to V_s)$, as the set of vertices $\beta_G(V_r \to V_s) = \pi_G(V_s) \cup \{V_s\}$. Furthermore, we define the arc environment of $V_r \to V_s$ in $G$, denoted as $\eta_G(V_r \to V_s)$, as the set of vertices $\eta_G(V_r \to V_s) = \pi_G(V_s) \setminus \{V_r\}$.
The joint probability distribution defined by the approximated belief network can be factorized
in terms of the joint probability distribution defined by the original network.
Lemma 3.4 Let $B = (G, \Gamma)$ be a belief network and let Pr be the joint probability distribution defined by $B$. Let $V_r \to V_s$ be an arc in $G$ and let $B_{V_r \not\to V_s} = (G_{V_r \not\to V_s}, \Gamma_{V_r \not\to V_s})$ be the approximated belief network after removal of $V_r \to V_s$ as defined in Definition 3.1. Then the joint probability distribution $\Pr_{V_r \not\to V_s}$ defined by $B_{V_r \not\to V_s}$ satisfies
$$\Pr_{V_r \not\to V_s}(C_{V(G)}) = \Pr(C_{V(G)}) \cdot \frac{\Pr(V_s \mid C_{\eta_G(V_r \to V_s)})}{\Pr(V_s \mid C_{\pi_G(V_s)})},$$
where $\eta_G(V_r \to V_s)$ is the arc environment of $V_r \to V_s$ in $G$ as defined in Definition 3.3.
Proof. From Theorem 2.5, the joint probability distribution $\Pr_{V_r \not\to V_s}$ defined by network $B_{V_r \not\to V_s}$ satisfies
$$\Pr_{V_r \not\to V_s}(C_{V(G)}) = \prod_{V_i \in V(G)} \gamma'_{V_i}(V_i \mid C_{\pi_{G_{V_r \not\to V_s}}(V_i)}),$$
where $\gamma'_{V_i} \in \Gamma_{V_r \not\to V_s}$. Exploiting Definition 3.1 leads to
$$\Pr_{V_r \not\to V_s}(C_{V(G)}) = \Pr(V_s \mid C_{\eta_G(V_r \to V_s)}) \cdot \prod_{V_i \in V(G) \setminus \{V_s\}} \gamma_{V_i}(V_i \mid C_{\pi_G(V_i)}).$$
Now, since $\gamma_{V_s}(V_s \mid C_{\pi_G(V_s)}) = \Pr(V_s \mid C_{\pi_G(V_s)})$, the stated factorization follows. 2
Clearly, this property links the graphical implications of removing an arc from a belief network with the numerical probabilistic consequences of the removal; variable $V_r$ is rendered conditionally independent of variable $V_s$ given $\pi_{G_{V_r \not\to V_s}}(V_s)$ after removal of an arc $V_r \to V_s$.
Now, one of the most important consequences to be investigated is the amount of absolute divergence between the prior probability distribution and the approximated distribution. From the information inequality we have
$$\sum_{c_X} \bigl|\Pr(c_X) - \Pr_{V_r \not\to V_s}(c_X)\bigr| \;\le\; \sqrt{2\, I(\Pr; \Pr_{V_r \not\to V_s}; X)}$$
for all subsets $X \subseteq V$, where Pr and $\Pr_{V_r \not\to V_s}$ are the joint probability distributions on the set of variables $V$ defined by a belief network and the network with arc $V_r \to V_s$ removed, respectively. However, we recall that this bound is finite only if Pr is absolutely continuous with respect to $\Pr_{V_r \not\to V_s}$. We prove this property in the following lemma.
Lemma 3.5 Let $B = (G, \Gamma)$ be a belief network. Let $V_r \to V_s$ be an arc in $G$ and let $B_{V_r \not\to V_s} = (G_{V_r \not\to V_s}, \Gamma_{V_r \not\to V_s})$ be the approximated belief network after removal of $V_r \to V_s$ as defined in Definition 3.1. Then the joint probability distribution Pr defined by $B$ is absolutely continuous with respect to the joint probability distribution $\Pr_{V_r \not\to V_s}$ defined by $B_{V_r \not\to V_s}$ over $V(G)$.
Proof. To prove that Pr is absolutely continuous with respect to $\Pr_{V_r \not\to V_s}$ over $V(G)$, we prove that $\Pr(c_{V(G)}) > 0$ implies that $\Pr_{V_r \not\to V_s}(c_{V(G)}) > 0$ for all configurations $c_{V(G)}$ of $V(G)$. Consider a configuration $c_{V(G)}$ of $V(G)$ with $\Pr(c_{V(G)}) > 0$. For this configuration we have that $\Pr(c_{\eta_G(V_r \to V_s)}) > 0$, where $\eta_G(V_r \to V_s)$ is the arc environment of arc $V_r \to V_s$ in $G$ as defined in Definition 3.3. Furthermore, $\Pr(c_{V_s} \wedge c_{\pi_G(V_s) \setminus \{V_r\}}) > 0$ implies that $\Pr(c_{V_s} \mid c_{\pi_G(V_s) \setminus \{V_r\}}) > 0$. These observations, together with the factorization of Lemma 3.4, lead to $\Pr_{V_r \not\to V_s}(c_{V(G)}) > 0$. Hence, if $\Pr(c_{V(G)}) > 0$ then $\Pr_{V_r \not\to V_s}(c_{V(G)}) > 0$, and we conclude that $\Pr \ll \Pr_{V_r \not\to V_s}$. 2
From this property of absolute continuity, the Kullback-Leibler information divergence provides
a proper upper bound on the error introduced in the joint probability distribution by
removal of an arc from the network. However, the bound can be rather coarse as it can be
expected that removing an arc may not always affect the prior probabilities of some specific
marginal distributions defined by the network. This observation is formalized by the following
lemma which states that the divergence in the prior marginal distributions is always zero for
sets of vertices that are not descendants of the head vertex of an arc that is removed. In
fact, this property is a direct result from the chain-rule representation of the joint probability
distribution by a belief network.
Lemma 3.6 Let $B = (G, \Gamma)$ be a belief network and let Pr be the joint probability distribution defined by $B$. Let $V_r \to V_s$ be an arc in $G$ and let $B_{V_r \not\to V_s} = (G_{V_r \not\to V_s}, \Gamma_{V_r \not\to V_s})$ be the approximated belief network after removal of $V_r \to V_s$ as defined in Definition 3.1. Then the joint probability distribution $\Pr_{V_r \not\to V_s}$ defined by $B_{V_r \not\to V_s}$ satisfies
$$\Pr_{V_r \not\to V_s}(C_Y) = \Pr(C_Y)$$
for all $Y \subseteq V(G) \setminus \sigma_G(V_s)$, where $\sigma_G(V_s)$ denotes the set comprised of $V_s$ and all its descendants in $G$.
Proof. First, we will prove that
$$\Pr(C_X) = \prod_{V_i \in X} \gamma_{V_i}(V_i \mid C_{\pi_G(V_i)})$$
for $X = V(G) \setminus \sigma_G(V_s)$. By applying Theorem 2.5 and by marginalizing Pr we obtain
$$\Pr(c_X) = \sum_{c_{\sigma_G(V_s)}} \prod_{V_i \in V(G)} \gamma_{V_i}(V_i \mid c_{\pi_G(V_i)})$$
for all configurations $c_X$ of $X$, with the assumption that the configurations that occur within the sum adhere to $c_X$. Now, since $\pi_G(V_i) \subseteq X$ for all $V_i \in X$, we find by rearranging terms
$$\Pr(c_X) = \prod_{V_i \in X} \gamma_{V_i}(V_i \mid c_{\pi_G(V_i)}) \cdot \sum_{c_{\sigma_G(V_s)}} \prod_{V_j \in \sigma_G(V_s)} \gamma_{V_j}(V_j \mid c_{\pi_G(V_j)}) = \prod_{V_i \in X} \gamma_{V_i}(V_i \mid c_{\pi_G(V_i)})$$
for all configurations $c_X$ of $X$. Hence, we have
$$\Pr(C_X) = \prod_{V_i \in X} \gamma_{V_i}(V_i \mid C_{\pi_G(V_i)}).$$
By a similar exposition for network $B_{V_r \not\to V_s}$, we have
$$\Pr_{V_r \not\to V_s}(C_X) = \prod_{V_i \in X} \gamma'_{V_i}(V_i \mid C_{\pi_{G_{V_r \not\to V_s}}(V_i)}),$$
where $\gamma'_{V_i} \in \Gamma_{V_r \not\to V_s}$. Now observe that from Definition 3.1 we have $\gamma'_{V_i} = \gamma_{V_i}$ and $\pi_{G_{V_r \not\to V_s}}(V_i) = \pi_G(V_i)$ for all $V_i \in X$, and we obtain $\Pr_{V_r \not\to V_s}(C_X) = \Pr(C_X)$. Finally, by the principle of marginalization we conclude that $\Pr_{V_r \not\to V_s}(C_Y) = \Pr(C_Y)$ for all $Y \subseteq X$. 2
This property provides the key observation for the applicability of multiple arc removals as
will be described in Section 3.4.
The Quantitative Error in Posterior Distributions
Belief networks are generally used for reasoning with uncertainty by processing evidence. That
is, the probability of some hypothesis is computed from the network given some evidence. In
the belief network framework, this amounts to computing the revised probabilities from the
posterior probability distribution given the evidence. We will investigate the implications on
posterior distributions after removal of an arc. We begin our investigation by exploring some
general properties of the Kullback-Leibler information divergence.
Lemma 3.7 Let $V$ be a set of statistical variables and let $X, Y \subseteq V$ be subsets of $V$. Let Pr and $\Pr'$ be joint probability distributions on $V$. Then the Kullback-Leibler information divergence $I$ satisfies
$$I(\Pr; \Pr'; X \cup Y) \;\ge\; \sum_{c_Y : \Pr(c_Y) > 0} \Pr(c_Y)\, I(\Pr(\cdot \mid c_Y); \Pr'(\cdot \mid c_Y); X).$$
Proof. We distinguish two cases: the case that $\Pr \ll \Pr' \parallel X \cup Y$ and the case that $\Pr \not\ll \Pr' \parallel X \cup Y$.
• Assume that $\Pr \ll \Pr' \parallel X \cup Y$. This assumption implies that the information divergence $I(\Pr; \Pr'; X \cup Y)$ is finite. From Definition 2.7 we therefore have that
$$I(\Pr; \Pr'; X \cup Y) = \sum_{c_{X \cup Y}} \Pr(c_{X \cup Y}) \log \frac{\Pr(c_{X \cup Y})}{\Pr'(c_{X \cup Y})} = \sum_{c_Y : \Pr(c_Y) > 0} \sum_{c_X} \Pr(c_X \wedge c_Y) \log \frac{\Pr(c_X \wedge c_Y)}{\Pr'(c_X \wedge c_Y)}.$$
Here, we used the fact that if for some configuration $c'_Y$ of the set of variables $Y$ with $\Pr(c'_Y) = 0$ the posterior distribution $\Pr(\cdot \mid c'_Y)$ is undefined, then for any configuration $c'_X$ of $X$ the probability $\Pr(c'_X \wedge c'_Y)$ equals zero and the term $\Pr(c'_X \wedge c'_Y) \log(\Pr(c'_X \wedge c'_Y) / \Pr'(c'_X \wedge c'_Y))$ is taken to be zero by definition. Therefore, we let the first sum in the last equality above range over all configurations $c_Y$ of $Y$ for which $\Pr(c_Y) > 0$. Now, by rearranging terms we find
$$I(\Pr; \Pr'; X \cup Y) = I(\Pr; \Pr'; Y) + \sum_{c_Y : \Pr(c_Y) > 0} \Pr(c_Y) \sum_{c_X} \Pr(c_X \mid c_Y) \log \frac{\Pr(c_X \mid c_Y)}{\Pr'(c_X \mid c_Y)}.$$
Note that $I(\Pr; \Pr'; Y) \ge 0$, from which the property follows.
• Assume that $\Pr \not\ll \Pr' \parallel X \cup Y$. This implies that $I(\Pr; \Pr'; X \cup Y) = \infty$; we show that the inequality still holds. To this end, observe that from the assumption there exists a configuration $c'_X \wedge c'_Y$ of $X \cup Y$ such that $\Pr(c'_X \wedge c'_Y) > 0$ and $\Pr'(c'_X \wedge c'_Y) = 0$. Now, two cases are distinguished: the case that $\Pr'(c'_Y) = 0$ and the case that $\Pr'(c'_Y) > 0$.
- Assume that $\Pr'(c'_Y) = 0$. Since $\Pr(c'_Y) \ge \Pr(c'_X \wedge c'_Y) > 0$, this yields that $\Pr \not\ll \Pr' \parallel Y$ and, by Definition 2.7, $I(\Pr; \Pr'; Y) = \infty$; using the fact that the divergence $I$ is non-negative, the inequality holds since its left-hand side equals $\infty$.
- Assume that $\Pr'(c'_Y) > 0$. Then $\Pr(c'_X \mid c'_Y) > 0$ and $\Pr'(c'_X \mid c'_Y) = 0$ for the configurations $c'_X$ and $c'_Y$. Hence, $\Pr(\cdot \mid c'_Y) \not\ll \Pr'(\cdot \mid c'_Y) \parallel X$, and by Definition 2.7 this implies that $I(\Pr(\cdot \mid c'_Y); \Pr'(\cdot \mid c'_Y); X) = \infty$. Since the divergence $I$ is non-negative, we conclude that the inequality holds. 2
This property of the Kullback-Leibler information divergence leads to the following lemma stating an upper bound on the absolute divergence of the posterior probability distribution defined by a belief network given some evidence and the approximated posterior probability distribution defined by another (approximated) network.
Lemma 3.8 Let $V$ be a set of statistical variables and let Pr and $\Pr'$ be joint probability distributions on $V$ such that $\Pr \ll \Pr'$. Let $I$ be the Kullback-Leibler information divergence. Then,
$$\sum_{c_X} \bigl|\Pr(c_X \mid c_Y) - \Pr'(c_X \mid c_Y)\bigr| \;\le\; \sqrt{\frac{2\, I(\Pr; \Pr')}{\Pr(c_Y)}}$$
for all subsets of variables $X, Y \subseteq V$ and all configurations $c_Y$ of $Y$ with $\Pr(c_Y) > 0$. Furthermore, this upper bound on the absolute divergence is finite.
Proof. Consider two subsets $X, Y \subseteq V$ and a configuration $c_Y$ of $Y$ with $\Pr(c_Y) > 0$. For this configuration, $\Pr \ll \Pr'$ implies that $\Pr'(c_Y) > 0$; hence, the posterior distributions $\Pr(\cdot \mid c_Y)$ and $\Pr'(\cdot \mid c_Y)$ are well-defined. Furthermore, since $\Pr \ll \Pr'$ also implies that $\Pr(\cdot \mid c_Y) \ll \Pr'(\cdot \mid c_Y) \parallel X$, it follows from Proposition 2.8 that we have the finite upper bound
$$\sum_{c_X} \bigl|\Pr(c_X \mid c_Y) - \Pr'(c_X \mid c_Y)\bigr| \le \sqrt{2\, I(\Pr(\cdot \mid c_Y); \Pr'(\cdot \mid c_Y); X)}.$$
Furthermore, Lemma 3.7 yields that
$$I(\Pr; \Pr') \ge \sum_{c'_Y : \Pr(c'_Y) > 0} \Pr(c'_Y)\, I(\Pr(\cdot \mid c'_Y); \Pr'(\cdot \mid c'_Y); X).$$
When we consider the divergence for $c_Y$ in isolation, we have
$$\Pr(c_Y)\, I(\Pr(\cdot \mid c_Y); \Pr'(\cdot \mid c_Y); X) \le I(\Pr; \Pr'),$$
since for any configuration $c'_Y$ of $Y$ with $\Pr(c'_Y) > 0$ the divergence $I(\Pr(\cdot \mid c'_Y); \Pr'(\cdot \mid c'_Y); X)$ is finite and non-negative. From these observations we finally find the finite upper bound
$$\sum_{c_X} \bigl|\Pr(c_X \mid c_Y) - \Pr'(c_X \mid c_Y)\bigr| \le \sqrt{\frac{2\, I(\Pr; \Pr')}{\Pr(c_Y)}}. \quad 2$$
Now, from this property of the information divergence, the absolute divergence between the posterior distribution given evidence $c_Y$ for a subset of variables $Y$ of a belief network $B$ and that of the approximated network $B_{V_r \not\to V_s}$ after removal of an arc $V_r \to V_s$ is bounded by
$$\sum_{c_X} \bigl|\Pr(c_X \mid c_Y) - \Pr_{V_r \not\to V_s}(c_X \mid c_Y)\bigr| \le \sqrt{\frac{2\, I(\Pr; \Pr_{V_r \not\to V_s})}{\Pr(c_Y)}},$$
where Pr is the joint probability distribution defined by $B$ and $\Pr_{V_r \not\to V_s}$ is the joint probability distribution defined by $B_{V_r \not\to V_s}$. This bound is finite since Pr is absolutely continuous with respect to $\Pr_{V_r \not\to V_s}$. Furthermore, from this bound we find that, in the worst case, the error in probabilistic inference on an approximated belief network is inversely proportional to the square root of the probability of the evidence; the more unlikely the evidence, the larger the error may be.
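This worst-case scaling with the probability of the evidence can be checked numerically; the joints over binary $X$ and $Y$ below are arbitrary illustrations, with the bound computed as $\sqrt{2I/\Pr(\text{evidence})}$.

```python
import math

def kl(pr, pr0):
    """Kullback-Leibler divergence with natural logarithm."""
    return sum(p * math.log(p / pr0[c]) for c, p in pr.items() if p > 0)

# Arbitrary made-up joints over (X, Y), both binary.
pr = {(0, 0): 0.30, (0, 1): 0.20, (1, 0): 0.15, (1, 1): 0.35}
pr0 = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}

def posterior(p, y):
    evidence = sum(q for (x, yy), q in p.items() if yy == y)
    return {x: p[(x, y)] / evidence for x in (0, 1)}

y = 1
pr_y = sum(q for (x, yy), q in pr.items() if yy == y)   # Pr(evidence)
post, post0 = posterior(pr, y), posterior(pr0, y)
err = sum(abs(post[x] - post0[x]) for x in (0, 1))
bound = math.sqrt(2 * kl(pr, pr0) / pr_y)  # looser for less likely evidence
```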
3.4 Multiple Arc Removals
In this section we generalize the method of single arc removal from belief networks to a method
of multiple simultaneous arc removals, thereby still guaranteeing a finite upper bound on the
error introduced in the prior and posterior distributions.
We recall from Definition 3.1 that removing an arc yields an appropriate change of the
assessment functions only for the head vertex of the arc to be removed. Therefore, this
operation can be applied in parallel for all arcs not sharing the same head vertex. To formalize
this requirement, we introduce the notion of a linear subset of arcs of a digraph.
Definition 3.9 Let $G = (V(G), A(G))$ be an acyclic digraph with the set of vertices $V(G) = \{V_1, \ldots, V_n\}$ indexed in ascending topological order. The relation $\prec_G \subseteq A(G) \times A(G)$ on the set of arcs of $G$ is defined as $V_{r_i} \to V_{s_i} \prec_G V_{r_j} \to V_{s_j}$ iff $s_i > s_j$, for all pairs of arcs $V_{r_i} \to V_{s_i}, V_{r_j} \to V_{s_j}$ in $G$. Furthermore, let $A \subseteq A(G)$ be a subset of arcs in $G$. Then we say that $A$ is linear with respect to $G$ if the order $\prec_G$ is a total order on $A$, that is, for each pair of distinct arcs $V_{r_i} \to V_{s_i}, V_{r_j} \to V_{s_j} \in A$ either $V_{r_i} \to V_{s_i} \prec_G V_{r_j} \to V_{s_j}$ or $V_{r_j} \to V_{s_j} \prec_G V_{r_i} \to V_{s_i}$.
Note that a linear subset of arcs from a digraph contains no pair of arcs that have a head
vertex in common. Now, we formally define the simultaneous removal of a linear set of arcs
from a belief network.
Definition 3.10 Let $B = (G, \Gamma)$ be a belief network. Let $A \subseteq A(G)$ be a linear subset of arcs in $G$. We define the multiply approximated belief network, denoted as $B_A = (G_A, \Gamma_A)$, as the network resulting after the simultaneous removal of all arcs $A$ from $B$ by Definition 3.1. That is, we obtain network $B_A = (G_A, \Gamma_A)$ with $G_A$ the digraph with $V(G_A) = V(G)$ and $A(G_A) = A(G) \setminus A$, and with $\Gamma_A = \{\gamma'_{V_i} \mid V_i \in V(G)\}$ the set of functions with $\gamma'_{V_s}(V_s \mid C_{\pi_{G_A}(V_s)}) = \Pr(V_s \mid C_{\pi_{G_A}(V_s)})$ for each head vertex $V_s$ of an arc in $A$, and $\gamma'_{V_i} = \gamma_{V_i}$ otherwise.
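Since a linear subset of arcs is exactly one in which no two distinct arcs share a head vertex, linearity is cheap to test; a minimal sketch with hypothetical arc sets:

```python
def is_linear(arcs):
    """A set of arcs (tail, head) is linear when no two distinct arcs share a
    head vertex, so the head-vertex ordering is total on the set."""
    heads = [head for _tail, head in arcs]
    return len(heads) == len(set(heads))

# Hypothetical arc sets over vertices V1..V6.
linear_set = {("V1", "V4"), ("V2", "V5"), ("V3", "V6")}  # distinct heads
nonlinear_set = {("V1", "V4"), ("V2", "V4")}             # both point at V4
```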
To analyze the error introduced in the prior as well as in the posterior distribution after
removal of a linear set of arcs from a belief network, we once more exploit the information
inequality. For obtaining a proper upper bound, the essential requirement is that the joint
probability distribution defined by the original network is absolutely continuous with respect
to the distribution defined by the multiply approximated network. To prove this, we will
exploit the ordering relation on the arcs of a digraph as defined above. This ordering relation
induces a total order on the arcs of a linear subset of arcs in a digraph and we show that
a consecutive removal of arcs from a belief network in arc linear order yields a multiply
approximated network. Then, by transitivity of the continuity relation, this directly implies
that the joint probability distribution defined by the original network is absolutely continuous
with respect to the distribution defined by the multiply approximated network.
Lemma 3.11 Let $B = (G, \Gamma)$ be a belief network and let Pr be the joint probability distribution defined by $B$. Let $A = \{V_{r_1} \to V_{s_1}, \ldots, V_{r_n} \to V_{s_n}\} \subseteq A(G)$, $n \ge 1$, be a linear subset of arcs in $G$ ordered with respect to $\prec_G$ as defined in Definition 3.9, i.e. for all pairs of arcs with $V_{r_i} \to V_{s_i} \prec_G V_{r_j} \to V_{s_j}$ we have that $i < j$. Now, let $B_A = (G_A, \Gamma_A)$ be the multiply approximated belief network after removal of all arcs $A$ as defined in Definition 3.10. Then,
$$B_A = (\cdots((B_{V_{r_1} \not\to V_{s_1}})_{V_{r_2} \not\to V_{s_2}}) \cdots)_{V_{r_n} \not\to V_{s_n}},$$
where each (approximated) network on the right-hand side is approximated by removal of a single arc as defined in Definition 3.1.
Proof. The proof is by induction on the cardinality of $A$.
Base case: for $n = 1$, we have $B_A = B_{V_{r_1} \not\to V_{s_1}}$ directly from Definition 3.10.
Induction step: assume that the property holds for all linear sets of cardinality $n - 1$ as the hypothesis for induction. Now, consider arc $V_{r_n} \to V_{s_n} \in A$. Then, by the principle of induction, it suffices to prove that $B_A = (B_{A \setminus \{V_{r_n} \to V_{s_n}\}})_{V_{r_n} \not\to V_{s_n}}$. Obviously, the digraphs obtained after removal of this arc are identical, i.e. we have $G_A = (G_{A \setminus \{V_{r_n} \to V_{s_n}\}})_{V_{r_n} \not\to V_{s_n}}$. This leaves us with a proof for the probability assessment functions. First, observe that the simultaneous removal of all arcs $A$ from network $B$ yields network $B_A$ with probability assessment function
$$\gamma'_{V_{s_n}}(V_{s_n} \mid C_{\pi_G(V_{s_n}) \setminus \{V_{r_n}\}}) = \Pr(V_{s_n} \mid C_{\pi_G(V_{s_n}) \setminus \{V_{r_n}\}})$$
for head vertex $V_{s_n}$, where Pr is defined by $B$. Next, observe that the removal of arc $V_{r_n} \to V_{s_n}$ from network $B_{A \setminus \{V_{r_n} \to V_{s_n}\}}$ yields probability assessment function
$$\gamma''_{V_{s_n}}(V_{s_n} \mid C_{\pi_G(V_{s_n}) \setminus \{V_{r_n}\}}) = \Pr_{A \setminus \{V_{r_n} \to V_{s_n}\}}(V_{s_n} \mid C_{\pi_G(V_{s_n}) \setminus \{V_{r_n}\}}),$$
from which we find that it remains to prove that $\gamma'_{V_{s_n}} = \gamma''_{V_{s_n}}$, or equivalently, that $\Pr(V_{s_n} \mid C_{\pi_G(V_{s_n}) \setminus \{V_{r_n}\}}) = \Pr_{A \setminus \{V_{r_n} \to V_{s_n}\}}(V_{s_n} \mid C_{\pi_G(V_{s_n}) \setminus \{V_{r_n}\}})$. To this end, observe that from the ordering relation $\prec_G$ we find that all arcs $A \setminus \{V_{r_n} \to V_{s_n}\}$ that are removed from $B$ are 'below' arc $V_{r_n} \to V_{s_n}$ in the digraph $G$ of $B$, i.e. by assuming an ascending topological order of the vertices this implies that $s_i > s_n$ for all $i < n$. Hence, $(\pi_G(V_{s_n}) \cup \{V_{s_n}\}) \cap \sigma_G(V_{s_i}) = \emptyset$ for all $i < n$, and by the induction hypothesis, we can apply Lemma 3.6 for each arc in $A \setminus \{V_{r_n} \to V_{s_n}\}$ to find that $\Pr(V_{s_n} \wedge C_{\pi_G(V_{s_n}) \setminus \{V_{r_n}\}}) = \Pr_{A \setminus \{V_{r_n} \to V_{s_n}\}}(V_{s_n} \wedge C_{\pi_G(V_{s_n}) \setminus \{V_{r_n}\}})$. Furthermore, this yields that $\Pr(V_{s_n} \mid C_{\pi_G(V_{s_n}) \setminus \{V_{r_n}\}}) = \Pr_{A \setminus \{V_{r_n} \to V_{s_n}\}}(V_{s_n} \mid C_{\pi_G(V_{s_n}) \setminus \{V_{r_n}\}})$, i.e. $\gamma'_{V_{s_n}} = \gamma''_{V_{s_n}}$, and we conclude that the property holds. 2
As a result of this property of multiple arc removals, the Kullback-Leibler information divergence
of the joint probability distribution defined by a belief network with respect to the
distribution defined by the multiply approximated network is finite. Furthermore, arc linearity
implies the following additive property of the Kullback-Leibler information divergence.
Lemma 3.12 Let $B = (G, \Gamma)$ be a belief network and let Pr be the joint probability distribution defined by $B$. Let $A \subseteq A(G)$ be a linear subset of arcs in $G$ and let $B_A = (G_A, \Gamma_A)$ be the multiply approximated belief network after removal of all arcs $A$ as defined in Definition 3.10. Let $\Pr_A$ be the joint probability distribution defined by $B_A$. Then the Kullback-Leibler information divergence $I$ satisfies
$$I(\Pr; \Pr_A) \;=\; \sum_{V_r \to V_s \in A} I(\Pr; \Pr_{V_r \not\to V_s}).$$
Proof. First, we prove that $\Pr \ll \Pr_A$. Assume that the arcs in the linear set $A$ are ordered according to the relation $\prec_G$ as defined in Definition 3.9, i.e. for all pairs of arcs with $V_{r_i} \to V_{s_i} \prec_G V_{r_j} \to V_{s_j}$ we have that $i < j$. From Lemma 3.5 we find that $\Pr \ll \Pr_{V_{r_1} \not\to V_{s_1}}$, $\Pr_{V_{r_1} \not\to V_{s_1}} \ll (\Pr_{V_{r_1} \not\to V_{s_1}})_{V_{r_2} \not\to V_{s_2}}$, and so on. Since the continuity relation $\ll$ is transitive, we conclude that $\Pr \ll \Pr_A$ by application of Lemma 3.11. Now, with this observation we find that $I(\Pr; \Pr_A)$ is finite. Since $A$ is linear, we have for each arc $V_r \to V_s \in A$ a new probability assessment function $\gamma'_{V_s}$ for its head vertex $V_s$. This leads to
$$I(\Pr; \Pr_A) = \sum_{c_{V(G)}} \Pr(c_{V(G)}) \log \frac{\Pr(c_{V(G)})}{\Pr_A(c_{V(G)})} = \sum_{V_r \to V_s \in A} \sum_{c_{V(G)}} \Pr(c_{V(G)}) \log \frac{\gamma_{V_s}(c_{V_s} \mid c_{\pi_G(V_s)})}{\gamma'_{V_s}(c_{V_s} \mid c_{\pi_{G_A}(V_s)})} = \sum_{V_r \to V_s \in A} I(\Pr; \Pr_{V_r \not\to V_s}). \quad 2$$
Note that linearity of a set of arcs to be removed is a sufficient condition for the property stated above, yet not a necessary one.
From these observations, we have that the information inequality provides a finite upper
bound on the error introduced in the prior and posterior distributions of an approximated
belief network after simultaneous removal of a linear set of arcs. This bound is obtained by
summing the information divergences between the joint probability distribution defined by
the network and the approximated distribution after removal of each arc individually from
the set of arcs.
Example 1 Consider the belief network $B = (G, \Gamma)$ where $G$ is the digraph depicted in Figure 2.
Figure 2: Information divergence for each arc in the digraph of an example belief network.
Table 1: Information inequality and absolute divergence of an approximated example belief network.
The set $\Gamma$ consists of the probability assessment functions $\gamma_{V_i}$ for the variables in $G$.
For each arc $V_r \to V_s$ in digraph $G$, the information divergence $I(\Pr; \Pr_{V_r \not\to V_s})$ between the joint probability distribution Pr defined by $B$ and the joint probability distribution $\Pr_{V_r \not\to V_s}$ defined by the approximated network $B_{V_r \not\to V_s}$ after removal of $V_r \to V_s$ has been computed and is depicted next to each arc in Figure 2.
Note that despite the presence of arc $V_8 \to V_9$, the variables $V_8$ and $V_9$ are conditionally independent given variable $V_7$, as follows from the assessment function $\gamma_{V_9}$. Hence, this graphically portrayed dependence can be rendered redundant and arc $V_8 \to V_9$ can be removed without introducing an error in the probability distribution, since $I(\Pr; \Pr_{V_8 \not\to V_9}) = 0$ as shown in Figure 2.
Table 1 gives the upper bound provided by the information inequality and the absolute divergence of the approximated joint probability distributions after removal of various linear subsets of arcs $A$ from the network's digraph. The table is compressed by leaving out all linear sets containing arc $V_8 \to V_9$, except for the set $\{V_8 \to V_9\}$ itself, because the second and third column are both unchanged after leaving out this arc. Note that any subset of arcs containing both arcs with head vertex $V_7$ is not linear.
From this example, it can be concluded that the upper bound provided by the information
inequality exceeds the absolute divergence by a factor of 2 to 3. Furthermore, note that some arcs carry more weight in the value of the absolute divergence: all sets containing one particular arc show a relatively large absolute divergence.
4 Approximation Schemes
In this section we will present static and dynamic approximation schemes for belief networks.
These schemes are based on the observations made in the previous section.
4.1 A Static Approximation Scheme
Clearly, arcs that significantly reduce the computational complexity of inference on a belief
network upon removal are most desirable to remove. However, the error introduced upon
removal may not be too large. For each arc, the error introduced upon removal of the arc is
expressed in terms of the Kullback-Leibler information divergence.
Efficiently Computing the Information Divergence for each Arc
Unfortunately, straightforward computation of the Kullback-Leibler information divergence is computationally far too expensive, as it requires summing over all configurations of the entire set of variables, an operation on the order of $O(2^{|V(G)|})$. However, the following property
of the Kullback-Leibler information divergence can be exploited to compute the information
divergence locally.
Lemma 4.1 Let $V$ be a set of statistical variables and let $X, Y, Z \subseteq V$ be mutually disjoint subsets of $V$ with $X \cup Y \cup Z = V$. Let Pr and $\Pr'$ be joint probability distributions on $V$ such that
$$\Pr'(C_V) = \Pr(C_V) \cdot \frac{\Pr'(C_X \mid C_Y)}{\Pr(C_X \mid C_Y)}.$$
Then the Kullback-Leibler information divergence $I$ satisfies
$$I(\Pr; \Pr') = \sum_{c_{X \cup Y}} \Pr(c_{X \cup Y}) \log \frac{\Pr(c_X \mid c_Y)}{\Pr'(c_X \mid c_Y)}.$$
Proof. By exploiting the factorization of $\Pr'$ in terms of Pr we find that $\Pr \ll \Pr'$. Using Definition 2.7 we derive
$$I(\Pr; \Pr') = \sum_{c_{X \cup Y \cup Z}} \Pr(c_{X \cup Y \cup Z}) \log \frac{\Pr(c_{X \cup Y \cup Z})}{\Pr'(c_{X \cup Y \cup Z})} = \sum_{c_{X \cup Y \cup Z}} \Pr(c_{X \cup Y \cup Z}) \log \frac{\Pr(c_X \mid c_Y)}{\Pr'(c_X \mid c_Y)}.$$
Now, since the logarithm depends on the configurations of $X \cup Y$ only, marginalizing over $Z$ yields the stated equality. 2
For efficiently computing the Kullback-Leibler information divergence $I(\Pr; \Pr_{V_r \not\to V_s})$ for each arc $V_r \to V_s$ of a linear subset of arcs $A$ of the digraph of a belief network, it suffices to sum over all configurations of the arc block $\beta_G(V_r \to V_s) = \pi_G(V_s) \cup \{V_s\}$ only, which amounts to computing the quantity
$$I(\Pr; \Pr_{V_r \not\to V_s}) = \sum_{c_{\pi_G(V_s) \cup \{V_s\}}} \Pr(c_{\pi_G(V_s) \cup \{V_s\}}) \log \frac{\Pr(c_{V_s} \mid c_{\pi_G(V_s)})}{\Pr(c_{V_s} \mid c_{\pi_G(V_s) \setminus \{V_r\}})},$$
which is derived by application of the chain rule from probability theory. Hence, the computation of the information divergence $I(\Pr; \Pr_{V_r \not\to V_s})$ only requires the probabilities $\Pr(C_{\pi_G(V_s) \cup \{V_s\}})$, $\Pr(C_{V_s} \mid C_{\pi_G(V_s)})$ and $\Pr(C_{V_s} \mid C_{\pi_G(V_s) \setminus \{V_r\}})$ to be computed from the original belief network. In fact, the latter two sets of probabilities can simply be computed from the former set of probabilities using marginalization:
$$\Pr(C_{V_s} \mid C_{\pi_G(V_s)}) = \frac{\Pr(C_{\pi_G(V_s) \cup \{V_s\}})}{\sum_{c_{V_s}} \Pr(c_{V_s} \wedge C_{\pi_G(V_s)})},$$
and these probabilities are further used to compute
$$\Pr(C_{V_s} \mid C_{\pi_G(V_s) \setminus \{V_r\}}) = \frac{\sum_{c_{V_r}} \Pr(C_{V_s} \wedge c_{V_r} \wedge C_{\pi_G(V_s) \setminus \{V_r\}})}{\sum_{c_{V_r}} \sum_{c_{V_s}} \Pr(c_{V_s} \wedge c_{V_r} \wedge C_{\pi_G(V_s) \setminus \{V_r\}})}.$$
Furthermore, once the probabilities $\Pr(C_{\pi_G(V_s) \cup \{V_s\}})$ are known, the divergences $I(\Pr; \Pr_{V_r \not\to V_s})$ for all arcs $V_r \to V_s$ that share the same head vertex $V_s$ can be computed simultaneously, since these computations only require the probabilities $\Pr(C_{\pi_G(V_s) \cup \{V_s\}})$.
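A sketch of this local computation, assuming a hypothetical network with two independent binary roots A, B and head vertex C (made-up numbers): the divergence for removing the arc from B to C is obtained by summing over the configurations of the arc block {A, B, C} only.

```python
import math
from itertools import product

# Hypothetical network: independent roots A, B and head vertex C (all binary).
p_a = {True: 0.3, False: 0.7}
p_b = {True: 0.6, False: 0.4}
p_c_true = {(True, True): 0.9, (True, False): 0.5,
            (False, True): 0.4, (False, False): 0.1}

def p_c(c, a, b):
    """Original assessment function gamma_C(C | A, B)."""
    return p_c_true[(a, b)] if c else 1.0 - p_c_true[(a, b)]

def pr_block(a, b, c):
    """Pr over the arc block of the arc B -> C, i.e. over pi(C) u {C}."""
    return p_a[a] * p_b[b] * p_c(c, a, b)

def p_c_given_a(c, a):
    """Pr(C | pi(C) without B), obtained by marginalizing B out of the block."""
    return sum(pr_block(a, b, c) for b in (True, False)) / p_a[a]

# I(Pr; Pr with B -> C removed), computed over arc-block configurations only.
divergence = sum(pr_block(a, b, c)
                 * math.log(p_c(c, a, b) / p_c_given_a(c, a))
                 for a, b, c in product((True, False), repeat=3))
```

The sum ranges over $2^3$ configurations of the arc block rather than over all configurations of the full network, which is the point of the local computation.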
Selecting a Set of Arcs for Removal
For selecting an optimal set of linear arcs for removal, one should carefully weigh the advantage of the reduction in computational complexity of inference on a belief network against the disadvantage of the error introduced in the represented joint probability distribution after removal of the arcs.
Given a linear subset of arcs $A$ from the digraph of a belief network $B$, we define the function expressing the exact reduction in computational complexity of inference on network $B$ as
$$c(B; A) = K(B) - K(B_A),$$
where $K$ is a cost function expressing the computational complexity of inference on a network. Furthermore, we define the exact divergence function $d$ given arcs $A$ on the probability distribution Pr defined by network $B$ as the absolute divergence
$$d(B; A) = \sum_{c_{V(G)}} \bigl|\Pr(c_{V(G)}) - \Pr_A(c_{V(G)})\bigr|.$$
Note that function K depends on the algorithms used for probabilistic inference. For example,
if the clique-tree propagation algorithm of Lauritzen and Spiegelhalter is employed, K(B)
expresses the sum of the number of configurations of the sets of variables of the cliques of the
decomposable graph rendered upon moralization and subsequent triangulation of the digraph.
Then, $K(B_A)$ expresses this complexity in terms of the approximated network $B_A$ after removal of arcs $A$. Here, we assume an optimal triangulation of the moral graphs of $B$ and $B_A$, since a bad triangulation of the moral graph of $B_A$ may even yield a negative value for $c(B; A)$. If Pearl's polytree algorithm with cutset conditioning is employed, $K(B)$ equals the number of configurations of the set of variables of the loop cutset of the digraph. Now, an optimal selection method weighs the advantage expressed by $c(B; A)$ against the disadvantage expressed by $d(B; A)$ of removing a set of arcs $A$ from network $B$.
Unfortunately, an optimal selection scheme will first of all depend heavily on the algorithms
used for probabilistic inference and, secondly, will depend on the purpose of the
network within a specific application. Furthermore, it is rather expensive from a computational
point of view to evaluate the exact measures c and d for all possible linear subsets of
arcs. In general, the employment of heuristic measures for the selection of a near-optimal set of arcs for removal will suffice. To avoid costly evaluations for all possible subsets of arcs, the heuristic measures should be based on combining the local advantages (or disadvantages) of removing each arc individually. Such heuristic functions $\tilde{c}$ and $\tilde{d}$ for respectively $c$ and $d$, expressing the impact on the computational complexity and the error introduced by removing an arc, may be defined with various degrees of sophistication. In fact, the Kullback-Leibler information divergence measures how well one joint probability distribution can be approximated by another exhibiting a simpler dependence structure [22, 13]. Hence, instead of computing the absolute divergence, the information inequality can be used:
$$\tilde{d}(B; A) = \sum_{V_r \to V_s \in A} I(\Pr; \Pr_{V_r \not\to V_s}),$$
where $I(\Pr; \Pr_{V_r \not\to V_s})$ is the information divergence associated with each arc $V_r \to V_s \in A$ as described in the previous section. Note that $\tilde{d}$ now combines the divergence of removing each arc separately and independently.
For defining a heuristic function c̃ valuing the reduction in computational complexity of inference with exact methods for probabilistic inference upon removal of a set of arcs from a belief network, the following scheme can be employed. The complexity of methods for exact inference depends to a large extent on the connectivity of the digraph of a belief network. With each arc V_r → V_s in the digraph G, a set of loops (undirected cycles), denoted loopset(V_r → V_s), is associated. The loopset of an arc consists of all loops in the digraph containing the arc; a loopset of an arc provides local information on the role of the arc in the connectivity of the digraph. This set can be found by a depth-first search for all chains from V_r to V_s in the graph, backtracking over all possibilities and storing the set of vertices found along each chain in the form of a bit-vector. Now, we define the heuristic function c̃(B, A) as the number of distinct loops that are broken by removal of the set of arcs A from the digraph, plus a fraction α ∈ (0, 1] of the total number of arcs rendered superfluous. The optimal value for α depends on the algorithm used for exact probabilistic inference.
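The depth-first search for loopsets described above might be sketched as follows (the adjacency-set representation of the underlying undirected graph, and the function names, are our own assumptions rather than the paper's code):

```python
def loopset(undirected_adj, v_r, v_s):
    """All loops (undirected cycles) containing the arc v_r -> v_s, found as
    chains from v_r to v_s that avoid the arc itself; each such chain closes
    into a loop via the arc. Each loop is returned as a frozenset of vertices,
    playing the role of the bit-vector mentioned in the text."""
    loops = set()

    def dfs(vertex, path):
        if vertex == v_s and len(path) > 2:
            loops.add(frozenset(path))
            return
        for nxt in undirected_adj[vertex]:
            # Skip the arc (v_r, v_s) itself and already-visited vertices,
            # backtracking over all remaining possibilities.
            if (vertex, nxt) in {(v_r, v_s), (v_s, v_r)} or nxt in path:
                continue
            dfs(nxt, path + [nxt])

    dfs(v_r, [v_r])
    return loops

# Example: a diamond a->b, a->c, b->d, c->d; the arc (a, b) lies on one loop.
adj = {'a': {'b', 'c'}, 'b': {'a', 'd'}, 'c': {'a', 'd'}, 'd': {'b', 'c'}}
print(loopset(adj, 'a', 'b'))   # the single loop through a, b, c, d
```

An arc lying on no loop has an empty loopset, so removing it breaks no loops and contributes nothing to c̃.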
Now, a combined measure reflecting the trade-off between the advantage c̃ and disadvantage d̂ of arc removal may have the form

  w(B, A) = c̃(B, A) − κ · d̂(B, A),

as suggested by Kjaerulff [13], where the constant κ is chosen such that c̃(B, A) is comparable to κ · d̂(B, A). The function w expresses the desirability of removing a set of arcs from a belief network.
Now suppose that a maximum absolute error ε > 0 is allowed in probabilistic inference on a multiply approximated belief network, and further suppose that the probability of the evidence to be processed is never smaller than some constant δ. Observe that from Lemma 3.8 a set of arcs A can be safely removed from the network if 2·I(Pr; Pr_A)/δ ≤ ε². Hence, a near-optimal set of arcs can be found for removal if we solve the following optimization problem: maximize w(B, A) for A ⊆ A(G) subject to d̂(B, A) ≤ δ·ε²/2 and the requirement that A is linear. Note that the constraint ensures that the error in the prior and posterior probability distributions never exceeds ε. This optimization problem can be solved by employing a simulated annealing technique [12], or by using an evolutionary algorithm [17], to find a linear set of arcs for removal that is nearly optimal. It is not appropriate to search for a 'real' optimal solution, since only heuristic functions are involved in the search process.
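A simulated-annealing search of the kind suggested by [12] might be sketched as follows. All details below (the single-arc toggle move, the cooling schedule, and the abstract `is_linear` predicate) are illustrative assumptions, not specifics from the paper:

```python
import math
import random

def anneal_arc_subset(arcs, w, d_hat, budget, is_linear,
                      steps=10_000, t0=1.0, cooling=0.999, seed=0):
    """Search for a feasible arc set A maximizing w(A), subject to
    d_hat(A) <= budget and is_linear(A), by toggling one arc at a time."""
    rng = random.Random(seed)
    current, best = frozenset(), frozenset()
    temp = t0
    for _ in range(steps):
        arc = rng.choice(arcs)                 # propose: toggle one arc
        candidate = current ^ {arc}
        if d_hat(candidate) <= budget and is_linear(candidate):
            delta = w(candidate) - w(current)
            # Accept improvements always; accept worsenings with a
            # probability that shrinks as the temperature cools.
            if delta >= 0 or rng.random() < math.exp(delta / temp):
                current = candidate
                if w(current) > w(best):
                    best = current
        temp *= cooling
    return best

# Toy usage: maximize the number of removed arcs within a divergence budget.
arcs = ['e1', 'e2', 'e3']
div = {'e1': 0.01, 'e2': 0.5, 'e3': 0.02}
print(anneal_arc_subset(arcs, w=len,
                        d_hat=lambda A: sum(div[a] for a in A),
                        budget=0.05, is_linear=lambda A: True))
```

With the toy weights above, the heavy arc `e2` alone exceeds the budget, so the search settles on the two weak arcs.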
Example 2. Consider once more the belief network from Example 1. Suppose that the probability of the evidence to be processed by the approximated belief network is never smaller than some constant δ, and further suppose that a maximum absolute error ε is allowed for the (conditional) probabilities to be inferred from the approximated network.
First, three loops in G can be identified: loop 1 constitutes vertices
g.
Thus, the loopset of arc and the loopset of arc
c. The following table is obtained for "
A ~ c(B;
The most desirable linear set of arcs for removal attains w(B, A) = 4.9547. Note that after removal, the graph G_A is singly connected and, therefore, the network is at least twice as fast for probabilistic inference compared to the original network, using either Pearl's polytree algorithm with cutset conditioning or the method of clique-tree propagation.
Figure 3: Posterior error in probabilities inferred from an approximated example belief network (observed maximum absolute error and upper bound, plotted against the probability of the evidence).
Actually, processing evidence with the approximated network such that the error in the inferred probabilities is bounded by ε requires that Pr(c_Y) ≥ 0.205. In Figure 3 we show the observed maximum absolute error and the upper bound obtained for all evidence c_Y, Y ⊆ V(G), with Pr(c_Y) ≥ 0.205.
4.1 Efficiently Computing an Approximation of a Belief Network
Removal of a linear set of arcs from a belief network requires the computation of a new set of probability assessment functions that reflect the introduced qualitative conditional independences with corresponding quantitative conditional independences. We recall from Definition 3.1 the form of the new probability assessment functions after removal of an arc V_r → V_s. An arc is selected for removal only if the Kullback-Leibler information divergence I(Pr; Pr_{V_r ↛ V_s}) is sufficiently small, in order that the error introduced by approximating the network after removal of the arc is bounded. The probabilities needed for the new assessment functions are in fact already computed during the computation of the information divergence I(Pr; Pr_{V_r ↛ V_s}) for all arcs in the digraph of a belief network. When these probabilities are stored temporarily, it suffices to assign them to the new probability assessment functions of the head vertex of each arc that is selected for removal.
4.2 A Dynamic Approximation Scheme
In this section we will consider belief networks with singly connected digraphs as a special case
for approximation. A singly connected digraph exhibits no loops, that is, at most one chain
exists between any two vertices in the digraph. For these networks, arcs can be removed
dynamically while evidence is being processed in contrast to a static removal of arcs as a
preprocessing phase before inference as described in the previous section. Therefore, the
computational complexity of processing evidence can be reduced depending on the evidence
itself and no estimate for a lower bound for the probability of the evidence has to be provided
in advance. A detailed description and analysis of the method is beyond the scope of this
paper. However, a practical outline of the scheme will be presented which is based on Pearl's
polytree algorithm.
First, we will show that all variables in the network retain their prior probabilities upon
removal of an arc.
Lemma 4.2. Let B = (G, Γ) be a belief network with a singly connected digraph G. Let Pr be the joint probability distribution defined by B. Furthermore, let V_r → V_s be an arc in G and let B_{V_r ↛ V_s} be the approximated belief network after removal of V_r → V_s, as defined in Definition 3.1. Let Pr_{V_r ↛ V_s} be the joint probability distribution defined by B_{V_r ↛ V_s}. Then, Pr_{V_r ↛ V_s}(V_i) = Pr(V_i) for all variables V_i ∈ V(G).
Proof. Assume that the vertices of the singly connected digraph are indexed in ascending topological order, i.e. for each pair of vertices V_i, V_j with a directed path from V_i to V_j in G we have that i < j. The proof is by induction on the index i of the variable V_i.

For the base case i = s, we have from Lemma 3.6 that Pr_{V_r ↛ V_s}(V_s) = Pr(V_s). For i ≠ s, we apply the chain rule and the principle of marginalization to obtain

  Pr_{V_r ↛ V_s}(V_i) = Σ_c Pr_{V_r ↛ V_s}(V_i | c) · Pr_{V_r ↛ V_s}(c),

where c ranges over the configurations of the parents of V_i in G_{V_r ↛ V_s}. Since the digraph is singly connected, the parent variables of V_i are mutually independent by the d-separation criterion. Hence, Pr_{V_r ↛ V_s}(c) factorizes into a product of the probabilities of the individual parents. By the assumption that the vertices in G are ordered in ascending topological order, each parent of V_i has an index smaller than i, so by the induction hypothesis its prior probabilities coincide with those defined by B. By applying the principle of induction, we find Pr_{V_r ↛ V_s}(V_i) = Pr(V_i) for all V_i ∈ V(G). □
Now consider an arc V_r → V_s in a singly connected digraph. In a singly connected digraph no other chain exists from V_r to V_s except for the chain constituting the arc V_r → V_s itself. Therefore, the independence of V_r and V_s given the parents of V_s, together with any subset of variables Y ⊆ V(G), holds on the singly connected digraph G_{V_r ↛ V_s}. From this observation, we have that the independence relationship between the variables V_r and V_s given the parents of V_s remains unchanged after evidence is given for any subset of variables. Informally speaking, this means that after evidence is processed in a belief network, we can compute the Kullback-Leibler information divergence between the posterior probability distribution defined by a belief network and the posterior distribution of the approximated network after removal of an arc locally. Then, by a similar exposition of the properties of the Kullback-Leibler information divergence as presented in the previous sections for multiple arc removals on general belief networks, it can be shown that

  I(Pr; Pr_A) = Σ_{(V_r → V_s) ∈ A} I(Pr; Pr_{V_r ↛ V_s})

for a belief network consisting of a singly connected digraph, where Pr is the joint probability distribution defined by the network and Pr_A is the joint probability distribution defined by the multiply approximated network after removal of all arcs in A. We note that the computation of the divergence I(Pr; Pr_{V_r ↛ V_s}) is as expensive in computational resources as the computation of the causal and diagnostic messages for vertex V_s in Pearl's polytree algorithm, assuming that logarithms require one time unit.
Furthermore, by using Pearl's polytree algorithm, arcs do not have to be physically removed: blocking the causal and diagnostic messages used for updating the probability distribution will suffice. With this observation, we envisage an approximate wave-front version of the polytree algorithm in which the sending of messages between two connected vertices in the graph is blocked if the probabilistic dependency relationship between the vertices is very weak. That is, we block all messages for which the information divergence per blocked arc is so small that the total sum of the information divergences over all blocked arcs does not exceed some predetermined constant for the maximum absolute error allowed in probabilistic inference.
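The blocking rule just sketched amounts to choosing a set of arcs whose summed divergences stay within the error budget. A simple greedy sketch (the names and the weakest-first strategy are our own assumptions, not details from the paper):

```python
def arcs_to_block(divergence_per_arc, epsilon):
    """Block the weakest arcs first, for as long as the total information
    divergence over all blocked arcs stays within the error budget epsilon."""
    blocked, total = [], 0.0
    for arc, div in sorted(divergence_per_arc.items(), key=lambda kv: kv[1]):
        if total + div > epsilon:
            break
        blocked.append(arc)
        total += div
    return blocked

print(arcs_to_block({'a': 0.001, 'b': 0.2, 'c': 0.003}, epsilon=0.01))
# → ['a', 'c']
```

Messages along the returned arcs would simply not be sent during propagation, leaving the network structure itself untouched.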
5 Discussion and Related Work
We have presented a scheme for approximating Bayesian belief networks based on model
simplification through arc removal. In this section we will compare the proposed method
with other methods for belief network approximation.
Existing belief network approximation methods are annihilating small probabilities from
belief universes [8], and removal of weak dependencies from belief universes [13]. Both methods
have proven to be very successful in reducing the complexity of inference on a belief network
on real-life applications using the Bayesian belief universe approach [9].
The method of annihilating small probabilities by Jensen and Andersen reduces the computational
effort of probabilistic inference when the method of clique-tree propagation is used
for probabilistic inference. The basic idea of the method is to eliminate configurations with
small probabilities from belief universes, accepting a small error in the probabilities inferred
from the network. To this end, the k smallest probability configurations are selected for
each belief universe where k is chosen such that the sum of the probabilities of the selected
configurations in the universe is less than some predetermined constant ". The constant "
determines the maximum error of the approximated prior probabilities. The belief universes
are then compressed to take advantage of the zeros introduced. Jensen and Andersen further
point out that if the range of probabilities of evidence is known in advance, the method can be applied to approximate a belief network such that the error of the approximated posterior probabilities computed from the network is bounded by some predetermined constant.
Similar to the method of annihilating small probabilities, the method of removal of weak
dependencies by Kjaerulff reduces the computational effort of probabilistic inference when
the method of clique-tree propagation is used. Kjaerulff's approximation method and the
method of annihilation are complementary techniques [13]. The basic idea of the method is
to remove edges from the chordal graph constructed from the digraph of a belief network that
model weak dependencies. The weaker the dependencies, the smaller the error introduced in
the represented joint probability distribution approximated upon removal of an edge. The
method operates on the junction tree of a belief network only. Given a constant ", a set of
edges can be removed sequentially such that the error introduced in the prior distribution is
smaller than ". Removal of an edge results in the decomposition of the clique containing the
edge into two or more smaller cliques which results in a simplification of the junction tree
thereby reducing the computational complexity of inference on the network.
In comparing the methods for approximating belief networks, we first of all find that the method of annihilating small probabilities from belief universes introduces an error that is inversely proportional to the probability of the evidence [8], while the methods based on removing arcs introduce an error that is inversely proportional to the square root of the probability of the evidence.
absolutely continuous with respect to the approximated probability distribution, the processing
of evidence in an approximated belief network by our method is safe in the sense that no
undefined conditional probabilities will arise for evidence with a nonzero probability in the
original distribution; the evidence that can be processed in an approximated belief network is
a superset of the evidence that can be processed in the original network. Once more, this is in
contrast to the method of annihilating small probabilities from belief universes. On the other
hand, however, the advantage of annihilating small probabilities is that the method operates
on the quantitative part of a belief network only whereas arc removal methods change the
qualitative representation as well. This can be remedied by introducing virtual arcs to replace
removed arcs. Virtual arcs are not used in probabilistic inference.
The method presented in this paper has some similarities to Kjaerulff's method of removal
of weak dependencies from belief universes [13]. Both methods aim at reducing inference on a
belief network by removing arcs or edges. However, the independency statements we enforce are of the form 'V_r is independent of V_s given the parents of V_s in the digraph', in contrast to V_r ⊥⊥ V_s | C \ {V_r, V_s} enforced by Kjaerulff's method, where C ⊆ V(G) denotes the clique containing the edge removed by Kjaerulff's method. Furthermore, Kjaerulff's method of removal is based on the clique-tree propagation algorithm only and restricts the removal to one edge from a clique at a time in order that the error introduced is bounded by some predetermined constant. In contrast, our method allows a larger set of arcs (edges) to be removed in parallel, still guaranteeing that the introduced error is bounded by some predetermined constant, regardless of the algorithms used for probabilistic inference.
To summarize the conclusions, the scheme we propose for approximating belief networks
operates directly on the digraph of a belief network, has a relatively low computational com-
plexity, provides a bound on the posterior error in the presence of evidence, and is independent
of the algorithms used for probabilistic inference.
Acknowledgements
The author would like to acknowledge valuable discussions with Linda van der Gaag of Utrecht
University, The Netherlands.
--R
Index Expression Belief Networks for Information Disclosure
The Computational Complexity of Probabilistic Inference using Bayesian Belief Networks
A Tutorial to Stochastic Simulation Algorithms for Belief Networks
Approximating Probabilistic Inference in Bayesian Belief Networks is NP-hard
Propagating uncertainty in Bayesian networks by probabilistic logic sam- pling
Approximations in Bayesian Belief Universes for Knowledge-based Systems
Bayesian updating in causal probabilistic networks by local computations
Use of Causal Probabilistic Networks as High Level Models in Computer Vision
Journal of the Australian Mathematical Society A
Reduction of Computational Complexity in Bayesian Networks through Removal of Weak Dependencies
Information Theory and Statistics
A lower bound for discriminating information in terms of variation
Local computations with probabilities on graphical structures and their application to expert systems
Genetic Algorithms
Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference
Probabilistic inference in multiply connected belief networks using loop cutsets
A Combination of Exact Algorithms for Inference on Bayesian Belief Networks
On Substantive Research Hypothesis
Graphical Models in Applied Multivariate Statistics
Closure properties of constraints

Abstract. Many combinatorial search problems can be expressed as constraint satisfaction problems, and this class of problems is known to be NP-complete in general. In this paper, we investigate the subclasses that arise from restricting the possible constraint types. We first show that any set of constraints that does not give rise to an NP-complete class of problems must satisfy a certain type of algebraic closure condition. We then investigate all the different possible forms of this algebraic closure property, and establish which of these are sufficient to ensure tractability. As examples, we show that all known classes of tractable constraints over finite domains can be characterized by such an algebraic closure property. Finally, we describe a simple computational procedure that can be used to determine the closure properties of a given set of constraints. This procedure involves solving a particular constraint satisfaction problem, which we call an indicator problem.

1 Introduction
Solving a constraint satisfaction problem is known to be NP-complete [20]. However,
many of the problems which arise in practice have special properties which allow them to
be solved efficiently. The question of identifying restrictions to the general problem which
are sufficient to ensure tractability is important from both a practical and a theoretical
viewpoint, and has been extensively studied.
Such restrictions may either involve the structure of the constraints, in other words, which variables may be constrained by which other variables, or they may involve the nature of the constraints, in other words, which combinations of values are allowed for variables which are mutually constrained. Examples of the first approach can be found in [7, 9, 12, 22, 23] and examples of the second approach can be found in [4, 14, 16, 8, 17, 22, 28, 29].

(Earlier versions of parts of this paper were presented at the International Conferences on Constraint Programming in 1995 and 1996; see references [14] and [15].)
In this paper we take the second approach, and investigate those classes of constraints
which only give rise to tractable problems whatever way they are combined. A number
of distinct classes of constraints with this property have previously been identified and
shown to be maximal [4, 14, 16].
In this paper we establish that any class of constraints which does not give rise
to NP-complete problems must satisfy a certain algebraic closure condition, and hence
this algebraic property is a necessary condition for a class of constraints to be tractable
(assuming that P is not equal to NP). We also show that many forms of this algebraic
closure property are sufficient to ensure tractability.
As an example of the wide applicability of these results, we show that all known
examples of tractable constraint classes over finite domains can be characterized by an
algebraic condition of this kind, even though some of them were originally defined in very
different ways.
Finally, we describe a simple computational procedure to determine the algebraic
closure properties of a given set of constraints. The test involves calculating the solutions
to a fixed constraint satisfaction problem involving constraints from the given set.
The work described in this paper represents a generalization of earlier results concerning
tractable subproblems of the Generalized Satisfiability problem. Schaefer
[26] identified all possible tractable classes of constraints for this problem, which
corresponds to the special case of the constraint satisfaction problem in which the variables
are Boolean. The tractable classes described in [26] are special cases of the tractable
classes of general constraints described below, and they are given as examples.
A number of tractable constraint classes have also been identified by Feder and Vardi
in [8]. They define a notion of 'width' for constraint satisfaction problems in terms of the
logic programming language Datalog, and show that problems with bounded width are
solvable in polynomial time. It is stated in [8] that the problem of determining whether
a fixed collection of constraints gives rise to problems of bounded width is not known
to be decidable. However, it is shown that a more restricted property, called 'bounded
strict width', is decidable, and in fact corresponds to an algebraic closure property of the
form described here. Other tractable constraint classes are also shown to be characterised
by a closure property of this type. This paper builds on the work of Feder and Vardi
by examining the general question of the link between algebraic closure properties and
tractability of constraints, and establishing necessary and sufficient conditions for these
closure properties.
A different approach to identifying tractable constraints is taken in [28], where it
is shown that a property of constraints, referred to as 'row-convexity', together with
path-consistency, is sufficient to ensure tractability in binary constraint satisfaction prob-
lems. It should be noted, however, that because of the additional requirement for path-
consistency, row-convex constraints do not constitute a tractable class in the sense defined
in this paper. In fact, the class of problems which contain only row-convex constraints is
NP-complete [4].
The paper is organised as follows. In Section 2 we give the basic definitions, and in
Section 3 we define what we mean by an algebraic closure property for a set of relations
and examine the possible forms of such a closure property. In Section 4 we identify which
of these forms are necessary conditions for tractability, and in Section 5 we identify which
of them are sufficient for tractability. In Section 6 we describe a computational method
to determine the closure properties satisfied by a set of relations. Finally, we summarise
the results presented and draw some conclusions.
2.1 The constraint satisfaction problem
Notation 2.1. For any set D, and any natural number n, we denote the set of all n-tuples of elements of D by D^n. For any tuple t ∈ D^n, and any i in the range 1 to n, we denote the value in the ith coordinate position of t by t[i]. The tuple t may then be written in the form ⟨t[1], t[2], ..., t[n]⟩.

A subset of D^n is called an n-ary relation over D.
We now define the (finite) constraint satisfaction problem which has been widely
studied in the Artificial Intelligence community [18, 20, 22].
Definition 2.2. An instance of a constraint satisfaction problem consists of:

• a finite set of variables;

• a finite domain of values, D;

• a set of constraints {C_1, C_2, ..., C_q}.

Each constraint C_i is a pair (S_i, R_i), where S_i is a list of variables of length m_i, called the constraint scope, and R_i is an m_i-ary relation over D, called the constraint relation. (The tuples of R_i indicate the allowed combinations of simultaneous values for the variables in S_i.)

The length of the tuples in the constraint relation of a given constraint will be called the arity of that constraint. In particular, unary constraints specify the allowed values for a single variable, and binary constraints specify the allowed combinations of values for a pair of variables. A solution to a constraint satisfaction problem is a function from the variables to the domain such that the image of each constraint scope is an element of the corresponding constraint relation.
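As a hedged illustration of Definition 2.2 (the representation of scopes and relations below is our own choice, not the paper's), an instance can be decided by exhaustive search:

```python
from itertools import product

def is_solution(assignment, constraints):
    """assignment: dict mapping variables to domain values.
    constraints: list of (scope, relation) pairs, where scope is a tuple of
    variables and relation is a set of tuples of allowed value combinations."""
    return all(tuple(assignment[v] for v in scope) in rel
               for scope, rel in constraints)

def solve_csp(variables, domain, constraints):
    """Brute-force decision procedure: exponential in the number of
    variables, mirroring the NP-completeness of the general problem."""
    for values in product(domain, repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if is_solution(assignment, constraints):
            return assignment
    return None

# Example: adjacent variables on a path must take different values
# (2-coloring the path x - y - z).
neq = {(0, 1), (1, 0)}
print(solve_csp(['x', 'y', 'z'], [0, 1],
                [(('x', 'y'), neq), (('y', 'z'), neq)]))
# → {'x': 0, 'y': 1, 'z': 0}
```

The same disequality relation over a triangle of variables yields an unsatisfiable instance, since an odd cycle cannot be 2-colored.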
Deciding whether or not a given problem instance has a solution is NP-complete in general [20], even when the constraints are restricted to binary constraints. In this paper we shall consider how restricting the allowed constraint relations to some fixed subset of all the possible relations affects the complexity of this decision problem. We therefore make the following definition.

Definition 2.3. For any set of relations, Γ, CSP(Γ) is defined to be the decision problem with

INSTANCE: An instance, P, of a constraint satisfaction problem, in which all constraint relations are elements of Γ.

QUESTION: Does P have a solution?

If there exists an algorithm which solves every problem instance in CSP(Γ) in polynomial time, then we shall say that CSP(Γ) is a tractable problem, and Γ is a tractable set of relations.
Example 2.4. The binary disequality relation over a set D, denoted ≠_D, is defined as

  ≠_D = {⟨d_1, d_2⟩ ∈ D² | d_1 ≠ d_2}.

Note that CSP({≠_D}) corresponds precisely to the Graph |D|-Colorability problem [11]. This problem is tractable when |D| ≤ 2 and NP-complete when |D| ≥ 3.

Example 2.5. Consider the ternary not-all-equal relation N over the set {0, 1}, which is defined by

  N = {0, 1}³ \ {⟨0, 0, 0⟩, ⟨1, 1, 1⟩}.

The problem CSP({N}) corresponds precisely to the Not-All-Equal Satisfiability problem [11], which is NP-complete [26].
Example 2.6 We now describe three relations which will be used as examples of constraint
relations throughout the paper.
Each of these relations is a set of tuples of elements from the domain
as defined below:
The problem CSP({R_1, R_2, R_3}) consists of all constraint satisfaction problem instances in which the constraint relations are all chosen from the set {R_1, R_2, R_3}.

The complexity of CSP(Γ) for arbitrary subsets Γ of {R_1, R_2, R_3} will be determined using the techniques developed later in this paper (see Example 6.6).
2.2 Operations on relations
In Section 4 we shall examine conditions on a set of relations \Gamma which allow known NP-complete
problems to be reduced to CSP(\Gamma). The reductions will be described using
standard operations from relational algebra [1], which are described in this section.
Definition 2.7. We define the following operations on relations.

• Let R be an n-ary relation over a domain D and let S be an m-ary relation over D. The Cartesian product R × S is defined to be the (n + m)-ary relation

  R × S = {⟨t[1], ..., t[n], u[1], ..., u[m]⟩ | t ∈ R, u ∈ S}.

• Let R be an n-ary relation over a domain D. Let 1 ≤ i, j ≤ n. The equality selection σ_{i=j}(R) is defined to be the n-ary relation

  σ_{i=j}(R) = {t ∈ R | t[i] = t[j]}.

• Let R be an n-ary relation over a domain D. Let ⟨i_1, ..., i_m⟩ be a subsequence of ⟨1, 2, ..., n⟩. The projection π_{i_1,...,i_m}(R) is defined to be the m-ary relation

  π_{i_1,...,i_m}(R) = {⟨t[i_1], ..., t[i_m]⟩ | t ∈ R}.
It is well-known that the combined effect of two constraints in a constraint satisfaction
problem can be obtained by performing a relational join operation [1] on the two constraints
[12]. The next result is a simple consequence of the definition of the relational
join operation.
Lemma 2.8 Any relational join of relations R and S can be calculated by performing a
sequence of Cartesian product, equality selection, and projection operations on R and S.
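A minimal sketch of the three operations of Definition 2.7, composed into a join in the way Lemma 2.8 describes (the tuple representation and the 1-based index convention are our own choices):

```python
def cartesian_product(r, s):
    """R x S: concatenate every tuple of R with every tuple of S."""
    return {t + u for t in r for u in s}

def equality_selection(r, i, j):
    """sigma_{i=j}(R): tuples whose ith and jth coordinates agree (1-based)."""
    return {t for t in r if t[i - 1] == t[j - 1]}

def projection(r, indices):
    """pi_{i1,...,im}(R): keep only the listed coordinate positions (1-based)."""
    return {tuple(t[i - 1] for i in indices) for t in r}

# A join of R(A, B) with S(B, C) on the shared middle column, built from the
# three primitive operations exactly as in Lemma 2.8.
r = {(1, 2), (3, 4)}
s = {(2, 5), (4, 6), (7, 8)}
joined = projection(equality_selection(cartesian_product(r, s), 2, 3),
                    (1, 2, 4))
print(sorted(joined))   # → [(1, 2, 5), (3, 4, 6)]
```

Any sequence of these operations yields a further relation, which is exactly how the derived relations of Notation 2.9 arise.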
In view of this result, it will be convenient to use the following notation.
Notation 2.9. The set of all relations which can be obtained from a given set of relations, Γ, using some sequence of Cartesian product, equality selection, and projection operations will be denoted Γ⁺.

Note that Γ⁺ contains exactly those relations which can be obtained as 'derived' relations in a constraint satisfaction problem instance with constraint relations chosen from Γ [2].
3 Closure operations
We shall establish below that significant information about CSP(Γ) can be determined from algebraic properties of the set of relations Γ. In order to describe these algebraic properties we need to consider arbitrary operations on D, in other words, arbitrary functions from D^k to D, for arbitrary values of k.
For the results below we shall be particularly interested in certain special kinds of
operations. We therefore make the following definition:
Definition 3.1. Let Ω be a k-ary operation from D^k to D.

• If Ω is such that, for all d ∈ D, Ω(d, d, ..., d) = d, then Ω is said to be idempotent.

• If there exists an index i ∈ {1, ..., k} such that, for all ⟨d_1, ..., d_k⟩ ∈ D^k, Ω(d_1, ..., d_k) = f(d_i), where f is a non-constant unary operation on D, then Ω is called essentially unary. (Note that f is required to be non-constant, so constant operations are not essentially unary.) If f is the identity operation, then Ω is called a projection.

• If k ≥ 3 and there exists an index i ∈ {1, ..., k} such that Ω(d_1, ..., d_k) = d_i for all d_1, ..., d_k ∈ D with |{d_1, ..., d_k}| < k, but Ω is not a projection, then Ω is called a semiprojection [25, 27].

• If k = 3 and, for all d, e ∈ D, Ω(d, d, e) = Ω(d, e, d) = Ω(e, d, d) = d, then Ω is called a majority operation.

• If k = 3 and Ω(d_1, d_2, d_3) = d_1 − d_2 + d_3, where + and − are binary operations on D such that ⟨D, +, −⟩ is an Abelian group [27], then Ω is called an affine operation.
Any operation on D can be extended to an operation on tuples over D by applying the
operation in each coordinate position separately (i.e., pointwise). Hence, any operation
defined on the domain of a relation can be used to define an operation on the tuples in
that relation, as follows:
Definition 3.2. Let Ω : D^k → D be a k-ary operation on D and let R be an n-ary relation over D. For any collection of k tuples t_1, t_2, ..., t_k ∈ R (not necessarily all distinct), the tuple Ω(t_1, ..., t_k) is defined as follows:

  Ω(t_1, ..., t_k) = ⟨Ω(t_1[1], ..., t_k[1]), Ω(t_1[2], ..., t_k[2]), ..., Ω(t_1[n], ..., t_k[n])⟩.

Finally, we define Ω(R) to be the n-ary relation

  Ω(R) = {Ω(t_1, ..., t_k) | t_1, ..., t_k ∈ R}.
Using this definition, we now define the following closure property of relations.

Definition 3.3. Let Ω be a k-ary operation on D, and let R be an n-ary relation over D. The relation R is closed under Ω if Ω(R) ⊆ R.
Example 3.4. Let ∆ be the ternary majority operation defined as follows:

  ∆(x, y, z) = y if y = z, and x otherwise.

The relation R_2 defined in Example 2.6 is closed under ∆, since applying the ∆ operation to any 3 elements of R_2 yields an element of R_2. The relation R_3 defined in Example 2.6 is not closed under ∆, since applying the ∆ operation to the last 3 elements of R_3 yields a tuple which is not an element of R_3.
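Definitions 3.2 and 3.3 can be checked mechanically for small relations. Since the tuples of R_2 and R_3 are not reproduced here, the sketch below substitutes two relations of our own: the full relation {0,1}² and the not-all-equal relation of Example 2.5.

```python
from itertools import product

def apply_pointwise(op, tuples):
    """Extend a k-ary operation on D to k n-tuples, coordinate by coordinate
    (Definition 3.2)."""
    return tuple(op(*column) for column in zip(*tuples))

def closure(op, k, relation):
    """Omega(R): apply op to every choice of k (not necessarily distinct)
    tuples of the relation."""
    return {apply_pointwise(op, choice)
            for choice in product(relation, repeat=k)}

def is_closed(op, k, relation):
    """Definition 3.3: R is closed under op if Omega(R) is a subset of R."""
    return closure(op, k, relation) <= relation

maj = lambda x, y, z: y if y == z else x   # a ternary majority operation

square = {(0, 0), (0, 1), (1, 0), (1, 1)}  # the full relation {0,1}^2
nae = set(product((0, 1), repeat=3)) - {(0, 0, 0), (1, 1, 1)}  # cf. Example 2.5

print(is_closed(maj, 3, square))   # → True
print(is_closed(maj, 3, nae))      # → False
```

For instance, applying the majority operation pointwise to the not-all-equal tuples (0,0,1), (0,1,0), (1,0,0) yields (0,0,0), which lies outside the relation, witnessing the failure of closure.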
For any set of relations Γ and any operation Ω, if every R ∈ Γ is closed under Ω, then we shall say that Γ is closed under Ω. The next lemma indicates that the property of being closed under some operation is preserved by all possible projection, equality selection, and product operations on relations, as defined in Section 2.2.
Lemma 3.5. For any set of relations Γ and any operation Ω, if Γ is closed under Ω, then every relation obtained from Γ by Cartesian product, equality selection, and projection operations is also closed under Ω.

Proof: Follows immediately from the definitions.
Notation 3.6. For any set of relations Γ with domain D, the set of all operations on D under which Γ is closed will be denoted Γ̂.

The set of closure operations, Γ̂, can be used to obtain a great deal of information about the problem CSP(Γ), as we shall demonstrate in the next two sections.

As a first example of this, we shall show that the operations in Γ̂ can be used to obtain a reduction from one problem to another.
Proposition 3.7. For any set of finite relations Γ, and any Ω ∈ Γ̂, there is a polynomial-time reduction from CSP(Γ) to CSP(Ω(Γ)), where Ω(Γ) = {Ω(R) | R ∈ Γ}.

Proof: Let P be any problem instance in CSP(Γ) and consider the instance P′ obtained by replacing each constraint relation R_i of P by the relation Ω(R_i). It is clear that P′ can be obtained from P in polynomial time. It follows from Definition 3.3 that P′ has a solution if and only if P has a solution.
It follows from this result that if Γ̂ contains a non-injective unary operation, then CSP(Γ) can be reduced to a problem over a smaller domain. One way to view this is that the presence of a non-injective unary operation in Γ̂ indicates that constraints with relations chosen from Γ allow a form of global 'substitutability', similar to the notion defined by Freuder in [10].

If Γ̂ does not contain any non-injective unary operations, then we shall say that Γ is reduced. The next theorem uses a general result from universal algebra [25, 27] to show that the possible choices for Γ̂ are quite limited.
Theorem 3.8. For any reduced set of relations Γ on a finite set, either

1. Γ̂ contains essentially unary operations only, or

2. Γ̂ contains an operation which is

(a) a constant operation; or
(b) a majority operation; or
(c) an idempotent binary operation (which is not a projection); or
(d) an affine operation; or
(e) a semiprojection.

Proof: The set of operations Γ̂ contains all projections and is closed under composition, hence it constitutes a 'clone' [3, 27]. It was shown in [25] that any non-trivial clone on a finite set must contain a minimal clone, and that any minimal clone contains either

1. a non-identity unary operation; or
2. a constant operation; or
3. a majority operation; or
4. an idempotent binary operation (which is not a projection); or
5. an affine operation; or
6. a semiprojection.

Furthermore, if Γ is reduced, and Γ̂ contains any operations which are not essentially unary, then it is straightforward to show, by considering such an operation of the smallest possible arity, that Γ̂ contains an operation in one of the last five of these classes [27, 19].
In the next two Sections we shall examine each of these possibilities in turn, in order to
establish what can be said about the complexity of CSP(\Gamma) in the various cases.
4 A necessary condition for tractability
In this Section we will show that any set of relations which is only closed under essentially
unary operations will give rise to a class of constraint satisfaction problems which is NP-complete.
Theorem 4.1 For any finite set of relations, \Gamma, over a finite set D, if \Gamma contains
essentially unary operations only then CSP(\Gamma) is NP-complete.
Proof: When |D| ≤ 2, we may assume without loss of generality that D ⊆ {0, 1},
where 0 corresponds to the Boolean value false and 1 corresponds to the Boolean value true.
true. It follows that the problem CSP(\Gamma) corresponds to the Generalised Satisfiability
problem over the set of Boolean relations \Gamma, as defined in [26] (see also [11]).
It was established in [26] that this problem is NP-complete unless one of the following
conditions holds:
1. Every relation in \Gamma contains the tuple (0, 0, ..., 0);
2. Every relation in \Gamma contains the tuple (1, 1, ..., 1);
3. Every relation in \Gamma is definable by a formula in conjunctive normal form in which
each conjunct has at most one negated variable;
4. Every relation in \Gamma is definable by a formula in conjunctive normal form in which
each conjunct has at most one unnegated variable;
5. Every relation in \Gamma is definable by a formula in conjunctive normal form in which
each conjunct contains at most 2 literals;
6. Every relation in \Gamma is the set of solutions of a system of linear equations over the
finite field GF(2).
It is straightforward to show that in each of these cases \Gamma is closed under some operation
which is not essentially unary (see [13] for details). Hence the result holds when |D| ≤ 2.
For larger values of |D| we proceed by induction. Assume that |D| ≥ 3 and that the
result holds for all smaller values of |D|. Let
M be an m by n matrix over D in which the columns consist of all possible m-tuples
over D (in some order). Let R 0 be the relation consisting of all the tuples occurring as
rows of M. The only condition we place on the choice of order for the columns of M is
that π 1,2 (R 0 ) is the binary disequality relation over D, as defined in
Example 2.4.
We now construct a relation R̄ 0 which is the 'closest approximation' to R 0 that we can
obtain from the relations in \Gamma and the domain D using the Cartesian product, equality
selection and projection operations: R̄ 0 is the intersection of all the relations derivable
from \Gamma in this way which contain R 0 .
Since this is a finite intersection, and intersection is a special case of join, we have from
Lemma 2.8 that R̄ 0 is itself derivable from \Gamma. In other words, the relation R̄ 0 can be obtained as a
derived constraint relation in some problem belonging to CSP(\Gamma).
There are now two cases to consider:
1. If there exists some tuple t 0 ∈ R̄ 0 such that t 0 ∉ R 0 , then we will construct, using
t 0 , an appropriate operation under which \Gamma is closed.
Define the m-ary operation \Omega by \Omega(d 1 , ..., d m ) = t 0 [i], where i is the
unique column of M corresponding to the m-tuple ⟨d 1 , ..., d m ⟩. We will show
that \Gamma is closed under \Omega.
Choose any R ∈ \Gamma, and let p be the arity of R. We are required to show that R is
closed under \Omega. Consider any sequence t 1 , ..., t m of tuples of R (not necessarily
all distinct). For each j, let c j be the m-tuple ⟨t 1 [j], t 2 [j], ..., t m [j]⟩. For
each pair of indices, i, j, such that c i = c j , apply the equality selection σ i=j to R,
to obtain a new relation R 0 .
Now choose a maximal set of indices, I = {i 1 , ..., i q }, such that the corresponding
c i are all distinct, and construct the relation R 00 by projecting R 0 onto the positions in I. Finally,
permute the coordinate positions of R 00 (by a sequence of Cartesian product, equality
selection, and projection operations), such that R 00 contains R 0 (this is always possible,
by the construction of R 0 and R 00 ). Since R 00 is derivable from \Gamma and contains R 0 ,
it follows from the definition of R̄ 0 that R̄ 0 ⊆ R 00 , and hence that t 0 is a
tuple of R 00 . Hence the appropriate projection of t 0 is an
element of R, and R is closed under \Omega.
If \Omega is not essentially unary, then we have the result. Otherwise, let f : D → D be
the corresponding unary operation, and set f(\Gamma) = { f(R) | R ∈ \Gamma }.
By the choice of t 0 , f cannot be injective, so jf(D)j ! jDj. By the inductive
hypothesis, we know that either CSP(f (\Gamma)) is NP-complete (in which case CSP(\Gamma)
must also be NP-complete) or else f (\Gamma) is closed under some operation \Phi which is
not essentially unary (in which case \Gamma is closed under the operation \Phif , which is
also not essentially unary). Hence, the result follows by induction in this case.
2. Alternatively, if R̄ 0 contains no tuple t such that t ∉ R 0 , then
R̄ 0 = R 0 . But this implies that CSP({≠ D }) is reducible to CSP(\Gamma), since
every occurrence of the constraint relation ≠ D can be replaced with an equivalent
collection of constraints with relations chosen from \Gamma. However, it was pointed out
in Example 2.4 that CSP({≠ D }) corresponds to the Graph |D|-Colorability
problem [11], which is NP-complete when |D| ≥ 3. Hence, this implies that CSP(\Gamma)
is NP-complete, and the result holds in this case also.
Combining Theorem 4.1 with Theorem 3.8 gives the following necessary condition for
tractability.
Corollary 4.2 Assuming that P is not equal to NP, any tractable set of reduced relations
must be closed under either a constant operation, or a majority operation, or an
idempotent binary operation, or an affine operation, or a semiprojection.
Note that the arity of a semiprojection is at most |D|, so for any finite set D there are
only finitely many operations matching the given criteria, which means that there is a
finite procedure to check whether this necessary condition is satisfied (see Corollary 6.5,
below).
5 Sufficient conditions for tractability
We have shown in the previous section that when \Gamma is a tractable set of relations, then \Gamma
must contain an operation from a limited range of types. We now consider each of these
possibilities in turn, to determine whether or not they are sufficient to ensure tractability.
5.1 Constant operations
Closure under a constant operation is easily shown to be a sufficient condition for
tractability.
Proposition 5.1 For any set of relations \Gamma, if \Gamma is closed under a constant operation,
then CSP(\Gamma) is solvable in polynomial time.
Proof: If every relation in \Gamma is closed under some constant operation \Omega, with constant
value d, then every non-empty relation in \Gamma must contain the tuple ⟨d, d, ..., d⟩. Hence,
in this case, the decision problem for any constraint satisfaction problem instance P in
CSP(\Gamma) is clearly trivial to solve, since P either contains an empty constraint, in which
case it does not have a solution, or else P allows the solution in which every variable is
assigned the value d.
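This decision procedure can be written in a few lines. A Python sketch under our own assumptions (an instance is represented as a list of (scope, relation) pairs; the function name is hypothetical):

```python
def solve_constant_closed(instance, d):
    """Decide an instance whose relations are all closed under the constant
    operation with value d (Proposition 5.1).  If any constraint relation is
    empty there is no solution; otherwise assigning d to every variable is a
    solution.  Returns an assignment dict, or None."""
    if any(not relation for _scope, relation in instance):
        return None
    variables = {v for scope, _ in instance for v in scope}
    return {v: d for v in variables}
```

For example, with the single constraint ⟨(x, y), {(1, 1), (0, 1)}⟩, whose relation contains (1, 1) and is therefore closed under the constant operation with value 1, the procedure returns the all-ones assignment.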
The class of sets of relations closed under some constant operation is a rather trivial
tractable class. It is referred to in [14] as Class 0.
Example 5.2 Let ? denote the unary operation on the domain which
returns the constant value 1. The constraint R 3 defined in Example 2.6 is closed under
?, since applying the ? operation to any element of R 3 yields the tuple h1; 1i, which is
an element of R 3 . The constraint R 2 defined in Example 2.6 is not closed under ?, since
applying the ? operation to any element of R 2 yields the tuple h1; 1; 1i, which is not an
element of R 2 . In fact, R 2 is clearly not closed under any constant operation. 2
Example 5.3 When |D| = 2 there are only two possible constant operations
on D.
The first two tractable subproblems of the Generalised Satisfiability problem
identified by Schaefer in [26] correspond to the tractable classes of relations characterised
by closure under these two constant operations. 2
5.2 Majority operations
We will now show that closure under a majority operation is a sufficient condition for
tractability.
We first establish that when a relation R is closed under a majority operation, any
constraint involving R can be decomposed into binary constraints.
Proposition 5.4 Let R be a relation of arity n which is closed under a majority operation,
and let C be any constraint with scope S and relation R.
For any problem P with constraint C, the problem P 0 which is obtained by replacing
C by the set of binary constraints
{ ⟨(S[i], S[j]), π i,j (R)⟩ | 1 ≤ i < j ≤ n }
has exactly the same solutions as P.
Proof: It is clear that any solution to P is a solution to P 0 , since P 0 is obtained by
taking binary projections of a constraint from P.
Now let σ be any solution to P 0 , and let t = ⟨σ(S[1]), ..., σ(S[n])⟩. We shall
prove, by induction on n, that t ∈ R, thereby establishing that σ is a solution to P.
For n < 3 the result holds trivially, so assume that n ≥ 3, and that the result holds for
all smaller values. Let I = {1, ..., n} be the set of indices of positions in S and choose
any three distinct elements i 1 , i 2 , i 3 of I. By Proposition 3.5 and the inductive hypothesis, applied to π I∖{i j } (R), there
is some t j ∈ R which agrees with t at all positions except i j , for j = 1, 2, 3. Since R is
closed under a majority operation, applying this operation to t 1 , t 2 , t 3 yields the tuple t
itself, so t ∈ R.
Example 5.5 Recall the relation R 2 defined in Example 2.6.
It was shown in Example 3.4 that R 2 is closed under the operation 4. Since this
operation is a majority operation, we know by Proposition 5.4 that any constraint with
relation R 2 can be decomposed into a collection of binary constraints with the following
relations:
It is, of course, not always the case that a constraint can be replaced by a collection of
binary constraints on the same variables. In many cases the binary projections of the
constraint relation allow extra solutions, as the following example demonstrates.
Example 5.6 Recall the relation ffi on the domain {0, 1} defined in Example 2.5. The
binary projections of ffi are as follows:
The join of these binary projections contains the tuples h0; 0; 0i and h1; 1; 1i, which are
not elements of ffi. It clearly follows that ffi cannot be replaced by any set of binary
constraints on the same variables. 2
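Examples 5.5 and 5.6 can both be checked mechanically: compute all binary projections of a relation and join them back together; the relation is binary decomposable exactly when the join adds no tuples. A Python sketch under those assumptions (the function names are ours):

```python
from itertools import combinations, product

def binary_projections(relation, n):
    """Projections of an n-ary relation onto every pair of coordinates."""
    return {(i, j): {(t[i], t[j]) for t in relation}
            for i, j in combinations(range(n), 2)}

def join_of_projections(relation, n, domain):
    """The largest n-ary relation over `domain` having the same binary
    projections: the join of all binary projections (cf. Example 5.6)."""
    projs = binary_projections(relation, n)
    return {t for t in product(domain, repeat=n)
            if all((t[i], t[j]) in pij for (i, j), pij in projs.items())}

# The Boolean not-all-equal relation of Example 2.5 is NOT decomposable:
# every binary projection is the full relation {0,1}^2, so the join
# contains (0, 0, 0) and (1, 1, 1), which the relation excludes.
nae = {t for t in product((0, 1), repeat=3) if len(set(t)) > 1}
```

Running the check on `nae` shows the join of its binary projections is the full set of Boolean triples, strictly larger than the relation itself.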
Theorem 5.7 Let \Gamma be any set of relations over a finite domain, D.
If \Gamma is closed under a majority operation, then CSP(\Gamma) is solvable in polynomial time.
Proof: For any problem instance P in CSP(\Gamma) we can impose strong (|D| + 1)-consistency [5] in polynomial time to obtain a new instance P 0 with the same solutions.
All of the constraint relations in P 0 are derivable from \Gamma, so they are closed under a majority
operation, by Proposition 3.5. Hence, all of the constraints of P 0 are decomposable into
binary constraints by Proposition 5.4. Hence, by Corollary 3.2 of [5], P 0 is solvable in
polynomial time.
Example 5.8 When |D| = 2 there is only one possible majority operation
on D (which is equal to the 4 operation defined in Example 3.4). It is easily shown
that all possible binary Boolean relations are closed under 4. Hence, it follows from
Proposition 5.4 that the Boolean relations of arbitrary arity which are closed under
this majority operation are precisely the relations which are definable by a formula in
conjunctive normal form in which each conjunct contains at most 2 literals. Hence, if a
set of Boolean relations \Gamma is closed under a majority operation, then CSP(\Gamma) is equivalent
to the 2-Satisfiability problem (2-Sat) [24], which is well-known to be a tractable
subproblem of the Satisfiability problem [26]. 2
Recall the class of tractable constraints identified independently in [4] and [17], and
referred to as 0/1/all constraints or implicational constraints. (This class of tractable
constraints is referred to as Class I in [14].) It was shown in [14] that these constraints
are in fact precisely the relations closed under the majority operation 4 defined in Example
3.4. This result is rather unexpected, in view of the fact that 0/1/all constraints
were originally defined purely in terms of their syntactic structure [4].
However, we remark here that the class of tractable sets of relations defined by closure
under some majority operation is a true generalization of the class containing all sets of
0/1/all constraints. In other words, there exist tractable sets of relations which are closed
under some majority operation but are not closed under the 4 operation, as the following
example demonstrates.
Example 5.9 Let - be the ternary majority operation on D = {0, 1, 2} which returns
the median value of its three arguments (in the standard ordering of D).
Recall the relation R 3 defined in Example 2.6. It is easy to show that R 3 is closed
under -, since applying the - operation to any 3 elements of R 3 yields an element of R 3 .
For example,
Hence, by Theorem 5.7, CSP({R 3 }) is tractable.
However, it was shown in Example 3.4 that R 3 is not closed under 4, and hence R 3
is not a 0/1/all constraint. 2
5.3 Binary operations
We first show that closure under an arbitrary idempotent binary operation is not in
general sufficient to ensure tractability.
Lemma 5.10 There exists a set of relations \Gamma closed under an idempotent binary operation
(which is not a projection) such that CSP(\Gamma) is NP-complete.
Proof: Consider the binary operation 2 on the set D = {0, 1, 2, 3}, which is defined by
the following table:
This operation is idempotent but it is not a projection (in fact, it is an example of a form
of binary operation known as a 'rectangular band' [21]).
Now consider the functions which return the first
and second bit in the binary expression for the numerical value of each element of D.
Using these functions, we define ternary relations R 1 and R 2 over D, as follows:
Finally, we define R
It is easily shown that R is closed under 2, since applying the operation 2 to any 2
elements of R yields an element of R.
However, it can also be shown that the Not-All-Equal Satisfiability problem
[26], which is known to be NP-complete, is reducible in polynomial time to CSP({R}).
Hence, CSP({R}) is NP-complete, and the result follows.
We now describe some additional conditions which may be imposed on binary operations.
It will be shown below that closure under any binary operation satisfying these additional
conditions is a sufficient condition for tractability.
Definition 5.11 Let u : D × D → D be an idempotent binary operation on the set D such
that, for all d 1 , d 2 , d 3 ∈ D, u(d 1 , d 2 ) = u(d 2 , d 1 ) and u(u(d 1 , d 2 ), d 3 ) = u(d 1 , u(d 2 , d 3 )).
Then u is said to be an ACI (associative, commutative, and idempotent) operation.
We will make use of the following result about ACI operations, which is well-known from
elementary algebra [3, 21].
Lemma 5.12 Let u be an ACI operation on the set D. The binary relation R on D
defined by
R = { (d 1 , d 2 ) | u(d 1 , d 2 ) = d 2 }
is a partial order on D in which any two elements d 1 , d 2 have a least upper bound, given
by u(d 1 , d 2 ).
It follows from Lemma 5.12 that any (finite) non-empty set D 0 ⊆ D which is u-closed
contains a least upper bound with respect to the partial order R. This upper bound will
be denoted u(D 0 ).
Using Lemma 5.12, we now show that relations which are closed under some arbitrary
ACI operation form a tractable class.
Theorem 5.13 For any set of relations \Gamma over a finite domain D, if \Gamma is closed under
some ACI operation, then CSP(\Gamma) is solvable in polynomial time.
Proof: Let \Gamma be a set of relations closed under the ACI operation u, and let P be any
problem instance in CSP(\Gamma). First enforce pairwise consistency to obtain a new instance
P 0 with the same set of solutions which is pairwise consistent. Such a P 0 can be obtained
by forming the join of every pair of constraints in P, replacing these constraints with the
(possibly smaller) constraints obtained by projecting down to the original scopes, and
then repeating this process until there are no further changes in the constraints. The
time complexity of this procedure is polynomial in the size of P. All the constraint
relations in the resulting P 0 are derivable from \Gamma, so they are closed under u, by
Proposition 3.5.
Now let D(v) denote the set of values allowed for variable v by the constraints of P 0 .
Since D(v) equals the projection of some u-closed constraint onto v, it must be u-closed,
by Proposition 3.5. There are two cases to consider:
1. If any of the sets D(v) is empty then P 0 has no solutions, so the decision problem
is trivial.
2. On the other hand, if all of these sets are non-empty, then we claim that assigning
the value u(D(v)) to each variable v gives a solution to P 0 , so the decision problem
is again trivial. To establish this claim, consider any constraint
with relation R of arity n, and scope S. For each i ∈ {1, ..., n} there must be
some tuple t i ∈ R such that t i [i] = u(D(S[i])), by the definition of D(S[i]). Now
consider the tuple t = u(t 1 , u(t 2 , ... u(t n-1 , t n ) ...)), where u is applied coordinatewise. We know that t ∈ R, since R
is closed under u. Furthermore, t[i] = u(D(S[i])) for each i, because u(D(S[i])) is
an upper bound of D(S[i]), so u(d, u(D(S[i]))) = u(D(S[i]))
for all d ∈ D(S[i]). Hence the constraint C allows the assignment of u(D(v)) to
each variable v in S. Since C was arbitrary, we have shown that this assignment is
a solution to P 0 , and hence a solution to P.
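The second half of this proof translates directly into code. A Python sketch (assuming, as in the proof, that the instance has already been made pairwise consistent; the representation and the name `aci_solution` are our own):

```python
from functools import reduce

def aci_solution(instance, u):
    """For a pairwise-consistent instance whose relations are all closed
    under the ACI operation u, return the canonical solution assigning each
    variable the least upper bound u(D(v)) of its allowed values
    (cf. the proof of Theorem 5.13).  `instance` is a list of
    (scope, relation) pairs; returns None if some variable has no value."""
    allowed = {}
    for scope, relation in instance:
        for pos, v in enumerate(scope):
            values = {t[pos] for t in relation}
            allowed[v] = allowed.get(v, values) & values
    if any(not vals for vals in allowed.values()):
        return None  # some variable has an empty set of allowed values
    # a finite u-closed set has a least upper bound: fold u over the set
    return {v: reduce(u, vals) for v, vals in allowed.items()}
```

With u = max (an ACI operation on any finite set of integers) and the single max-closed constraint X ≤ Y over {0, 1}, the canonical solution assigns 1 to both variables.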
Example 5.14 When |D| = 2 there are only two idempotent binary operations
on D (which are not projections), corresponding to the logical AND operation
and the logical OR operation. These two operations are both ACI operations, and they
correspond to the two possible orderings of D.
It is well-known [6, 16, 24] that a Boolean relation is closed under AND if and only if it
can be defined by a Horn sentence (that is, a conjunction of clauses each of which contains
at most one unnegated literal). Hence, if a set of Boolean relations \Gamma is closed under AND,
then CSP(\Gamma) is equivalent to the Horn clause satisfiability problem, Hornsat [24], which
is a tractable subproblem of the Satisfiability problem [26].
Similarly, a Boolean relation is closed under OR if and only if it can be defined by a
conjunction of clauses each of which contains at most one negated literal, and this class
of relations also gives rise to a tractable subproblem of the Satisfiability problem [26].
Example 5.15 Let D be a finite subset of the natural numbers. The operation MAX :
D × D → D, which returns the larger of any pair of numbers, is an ACI operation. The
following types of arithmetic constraints (amongst many others) are closed under this
operation:
where upper-case letters represent variables and lower-case letters represent positive con-
stants. Hence, by Theorem 5.13 it is possible to determine efficiently whether any collection
of constraints of these types has a solution. These constraints include (and extend)
the 'basic' arithmetic constraints allowed by the well-known constraint programming
language, CHIP [29]. 2
The class of tractable constraints first identified in [16], and referred to as max-closed
constraints, are in fact relations closed under an ACI operation u with the additional property
that the partial order R, defined in Lemma 5.12, is a total ordering of D. Hence, a set
of constraints is max-closed if and only if the constraint relations are closed under some
specialized ACI operation of this kind (see, for example, Example 5.15). This class of
tractable constraints is referred to as Class II in [14].
However, we remark here that the class of tractable sets of relations defined by closure
under some ACI operation is a true generalization of the class containing all sets of max-
closed constraints. In other words, there exist tractable relations which are closed under
some ACI operation but are not closed under the maximum operation associated with
any (total) ordering of the domain. (An example of such a relation is the relation R 1
defined in Example 2.6, see Example 6.4 below.)
5.4 Affine operations
We will now show that closure under an affine operation is a sufficient condition for
tractability.
This result was established in [14] using elementary methods, for the special case when
the domain D contains a prime number, p, of elements. It was shown in [14] that, in this
special case, constraints which are closed under an affine operation correspond precisely
to constraints which may be expressed as conjunctions of linear equations modulo p.
(This class of tractable constraints is referred to as Class III in [14].)
We now generalise this result to arbitrary finite domain sizes by making use of a result
stated by Feder and Vardi in [8].
Theorem 5.16 ([8]) For any finite group G, and any set \Gamma of cosets of subgroups of
direct products of G, CSP(\Gamma) is solvable in polynomial time.
Corollary 5.17 For any set of relations \Gamma, if \Gamma is closed under an affine operation, then
CSP(\Gamma) is solvable in polynomial time.
Proof: By Definitions 3.1 and 3.2, any relation R which is closed under an affine
operation is a subset of a direct product of some Abelian group, with the property that
t 1 - t 2 + t 3 ∈ R whenever t 1 , t 2 , t 3 ∈ R. However, this is equivalent to saying that R is a
coset of a subgroup of this direct product group [21], so we may apply Theorem 5.16 to
\Gamma to obtain the result.
Example 5.18 Let r be the affine operation on D = {0, 1, 2} which is defined by
r(d 1 , d 2 , d 3 ) = d 1 - d 2 + d 3 , where addition and subtraction are both modulo 3.
The relation R 2 defined in Example 2.6 is closed under r, since applying the r
operation to any 3 elements of R 2 yields an element of R 2 . For example,
Since |D| is prime, the results of [14] indicate that R 2 must be the set of solutions to
some system of linear equations over the integers modulo 3. In fact, we have
Example 5.19 Let G be the Abelian group ⟨D, +, -⟩, where D = {0, 1, 2, 3} and the
+ operation is defined by the following table:
Now let r be the affine operation on D which is defined by r(d 1 , d 2 , d 3 ) = d 1 - d 2 + d 3 , where
addition and subtraction are as defined in G.
Any relation R over D which is a coset of a subgroup of a direct product of G will be
closed under r, and hence CSP({R}) will be tractable by Corollary 5.17. One example
of such a relation is the following:
It is easily seen that in this case R is not the set of solutions to any system of linear
equations over a field. 2
Example 5.20 When |D| = 2 there is only one possible Abelian group structure
over D, and hence only one possible affine operation on D.
If a set of Boolean relations \Gamma is closed under this affine operation, then CSP(\Gamma) is
equivalent to the problem of solving a set of simultaneous linear equations over the integers
modulo 2. This corresponds to the final tractable subproblem of the generalised
Satisfiability problem identified by Schaefer in [26]. 2
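The affine Boolean case is decided by Gaussian elimination over GF(2). A self-contained sketch (our own illustration, not the paper's algorithm):

```python
def gf2_consistent(rows):
    """Decide whether a system of linear equations over GF(2) is solvable.
    Each row is (coeffs, rhs) with coeffs a list of 0/1 and rhs 0 or 1.
    Standard Gaussian elimination; the system is inconsistent exactly when
    some row reduces to 0 = 1."""
    rows = [(list(c), b) for c, b in rows]
    n = len(rows[0][0]) if rows else 0
    pivot = 0
    for col in range(n):
        # find a row with a 1 in this column to act as the pivot
        for r in range(pivot, len(rows)):
            if rows[r][0][col]:
                rows[pivot], rows[r] = rows[r], rows[pivot]
                break
        else:
            continue  # no pivot in this column
        pc, pb = rows[pivot]
        # eliminate this column from every other row (XOR is addition mod 2)
        for r in range(len(rows)):
            if r != pivot and rows[r][0][col]:
                c, b = rows[r]
                rows[r] = ([x ^ y for x, y in zip(c, pc)], b ^ pb)
        pivot += 1
    return not any(b and not any(c) for c, b in rows)
```

For example, the system x+y = 1, y+z = 1, x+z = 1 is inconsistent over GF(2) (summing the three equations gives 0 = 1), while dropping the last equation leaves a solvable system.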
5.5 Semiprojections
We now show that closure under a semiprojection operation is not in general a sufficient
condition for tractability. In fact we shall establish a much stronger result, which shows
that even being closed under all semiprojections is not sufficient to ensure tractability.
Lemma 5.21 For any finite set D, with |D| ≥ 3, there exists a set of relations \Gamma over
D, such that \Gamma is closed under all semiprojections on D, and CSP(\Gamma) is NP-complete.
Proof: Let D be a finite set with |D| ≥ 3 and let d 1 , d 2 be distinct elements of D. Consider the
ternary relation R = { ⟨x 1 , x 2 , x 3 ⟩ ∈ {d 1 , d 2 } 3 | x 1 , x 2 , x 3 not all equal }. This
relation is closed under all semiprojections on D, since any 3 elements of R contain at
most two distinct values in each coordinate position.
However, if we identify d 1 with the Boolean value true and d 2 with the Boolean
value false, then it is easy to see that CSP({R}) is isomorphic to the Not-All-Equal
Satisfiability problem [11], which is NP-complete [26] (see Example 2.5).
It is currently unknown whether there are tractable sets of relations closed under some
combination of semiprojections, unary operations and binary operations which are not
included in any of the tractable classes listed above.
However, when |D| = 2 the situation is very simple, as the next example shows.
Example 5.22 When |D| = 2 there are no semiprojections on D, so there
are no subproblems of the Satisfiability problem which are characterised by a closure
operation of this form. 2
6 Calculating closure operations
For any set of relations \Gamma, over a set D, the operations under which \Gamma is closed are simply
mappings from D k to D, for some k, which satisfy certain constraints, as described in
Definition 3.3. In this Section we show that it is possible to identify these operations by
solving a single constraint satisfaction problem in CSP(\Gamma). In fact, we shall show that
these closure operations are precisely the solutions to a constraint satisfaction problem
of the following form.
Definition 6.1 Let \Gamma be a set of relations over a finite domain D.
For any natural number m > 0, the indicator problem for \Gamma of order m is defined to
be the constraint satisfaction problem IP(\Gamma, m) with:
• Set of variables D m ;
• Domain of values D;
• Set of constraints { C R,(t 1 ,...,t m ) | R ∈ \Gamma and t 1 , ..., t m ∈ R }, such that for each R ∈ \Gamma, and for each sequence
t 1 , ..., t m of tuples from R, the constraint C R,(t 1 ,...,t m ) has relation R and scope ⟨v 1 , ..., v n ⟩, where n
is the arity of R and each v i is the variable ⟨t 1 [i], ..., t m [i]⟩ ∈ D m .
Example 6.2 Consider the relation R 1 over D defined in Example 2.6.
The indicator problem for {R 1 } of order 1, IP({R 1 }, 1), has 3 variables and 4 constraints. The set of variables is
and the set of constraints is
The indicator problem for {R 1 } of order 2, IP({R 1 }, 2), has 9 variables and 16 constraints. The set of variables is
and the set of constraints is
Further illustrative examples of indicator problems are given in [15]. 2
Solutions to the indicator problem for \Gamma of order m are functions from D m to D, or in
other words, m-ary operations on D. We now show that they are precisely the m-ary
operations under which \Gamma is closed.
Theorem 6.3 For any set of relations \Gamma over domain D, the set of solutions to IP(\Gamma, m)
is equal to the set of m-ary operations under which \Gamma is closed.
Proof: By Definition 3.3, we know that \Gamma is closed under the m-ary operation \Omega if and
only if \Omega satisfies the constraint C R,(t 1 ,...,t m ) for each possible choice of R ∈ \Gamma and tuples
t 1 , ..., t m ∈ R (not necessarily all distinct). But this is equivalent to saying that
\Omega satisfies all the constraints in IP(\Gamma, m), so the result follows.
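Theorem 6.3 gives a (brute-force) way to compute all m-ary closure operations of a small \Gamma: enumerate every m-ary operation on D and keep those under which each relation is closed. A Python sketch of this search (the code and names are ours; operations are represented as dicts from m-tuples over D to values):

```python
from itertools import product

def closed_under(relation, op, m):
    """True iff `relation` is closed under the m-ary operation `op`
    (given as a dict from m-tuples over the domain to values)."""
    for choice in product(relation, repeat=m):
        if tuple(op[args] for args in zip(*choice)) not in relation:
            return False
    return True

def closure_operations(relations, domain, m):
    """All m-ary operations on `domain` under which every relation in
    `relations` is closed -- the solutions of the indicator problem
    IP(relations, m), by Theorem 6.3."""
    points = list(product(domain, repeat=m))
    ops = []
    for values in product(domain, repeat=len(points)):
        op = dict(zip(points, values))
        if all(closed_under(R, op, m) for R in relations):
            ops.append(op)
    return ops
```

The search space has |D|^(|D|^m) candidate operations, so this is feasible only for tiny parameters; for a two-element domain and m = 3 there are just 2^8 = 256 candidates. For the Boolean disequality relation, the unary closure operations found are exactly the identity and negation.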
Example 6.4 Consider the relation R 1 over D defined in Example 2.6.
The indicator problem for {R 1 } of order 1, defined in Example 6.2, has 2 solutions,
which may be expressed in tabular form as follows:
Variables
Solution
Solution
One of these solutions is a constant operation, so CSP({R 1 }) is tractable, by Proposition 5.1. In fact, any problem in CSP({R 1 }) has the solution which assigns the value 0
to each variable, so the complexity of CSP({R 1 }) is trivial.
The indicator problem for {R 1 } of order 2, defined in Example 6.2, has 4 solutions,
which may be expressed in tabular form as follows:
Variables
Solution
Solution
Solution
Solution
The first of these solutions is a constant operation, and the second and third are essentially
unary operations. However, the fourth solution shown in the table is more
interesting. It is easily checked that this operation is an associative, commutative, idempotent
(ACI) binary operation, so we have a second proof that CSP(fR 1 g) is tractable,
by Theorem 5.13. Furthermore, this result shows that R 1 can be combined with any
other relations (of any arity) which are also closed under this ACI operation to obtain
larger tractable problem classes. 2
Corollary 6.5 For any set of relations \Gamma over a domain D, with |D| ≥ 3, if all solutions
to IP(\Gamma, |D|) are essentially unary, then CSP(\Gamma) is NP-complete.
Proof: Follows from Theorem 3.8, Theorem 4.1, and Theorem 6.3.
Example 6.6 Recall the relations R 1 , R 2 and R 3 defined in Example 2.6. It has been
shown in Examples 6.4, 5.18, and 5.2 that a set containing any one of these relations on
its own is tractable.
For any set \Gamma containing more than one of these relations, it can be shown, using
Corollary 6.5, that CSP(\Gamma) is NP-complete. 2
In the special case when |D| = 2 we obtain an even stronger result.
Corollary 6.7 For any set of relations \Gamma over a domain D with |D| = 2, if all solutions
to IP(\Gamma, 3) are essentially unary then CSP(\Gamma) is NP-complete; otherwise it is polynomial.
Proof: It has been shown in Examples 5.3, 5.8, 5.14, 5.20, and 5.22 that when |D| = 2 the
possible closure operations of the restricted types specified in Corollary 4.2 are sufficient
to ensure tractability.
This result demonstrates that solving the indicator problem of order 3 provides a simple
and complete test for tractability of any set of relations over a domain with 2 elements.
This answers a question posed by Schaefer in 1978 [26] concerning the existence of an
efficient test for tractability in the Generalised Satisfiability problem. Note that
carrying out the test requires finding the solutions to a constraint satisfaction problem
with just 8 Boolean variables.
7 Conclusion
In this paper we have shown how the algebraic properties of relations can be used to
distinguish between sets of relations which give rise to tractable constraint satisfaction
problems and those which give rise to NP-complete problems. Furthermore, we have
proposed a method for determining the operations under which a set of relations is closed
by solving a particular form of constraint satisfaction problem, which we have called an
indicator problem.
For problems where the domain contains just two elements these results provide a
necessary and sufficient condition for tractability (assuming that P is not equal to NP),
and an efficient test to distinguish the tractable sets of relations.
For problems with larger domains we have described algebraic closure properties which
are a necessary condition for tractability. We have also shown that in many cases these
closure properties are sufficient to ensure tractability.
In particular, we have shown that closure under any constant operation, any majority
operation, any ACI operation, or any affine operation, is a sufficient condition for
tractability. It can be shown using the results of [13] that for any operation of one of
these types, the set, \Gamma, containing all relations which are closed under that operation is
a maximal set of tractable relations. In other words, the addition of any other relation
which is not closed under the same operation changes CSP(\Gamma) from a tractable problem
into an NP-complete problem. Hence, the tractable classes defined in this way are as
large as possible.
We are now investigating the application of these results to particular problem types,
such as temporal problems involving subsets of the interval algebra. We are also attempting
to determine how the presence of particular algebraic closure properties in the
constraints can be used to derive appropriate efficient algorithms for tractable problems.
Acknowledgments
This research was partially supported by EPSRC research grant GR/L09936 and by the
'British-Flemish Academic Research Collaboration Programme' of the Belgian National
Fund for Scientific Research and the British Council. We are also grateful to Martin
Cooper for many helpful discussions and for suggesting the proof of Theorem 5.7.
References
"A relational model of data for large shared databanks"
"Derivation of constraints and database rela- tions"
"Characterizing tractable constraints"
"From local to global consistency"
"Structure identification in relational data"
"Network-based heuristics for constraint-satisfaction prob- lems"
"Monotone monadic SNP and constraint satisfaction"
"A sufficient condition for backtrack-bounded search"
"Eliminating interchangeable values in constraint satisfaction prob- lems"
Computers and intractability: a guide to the theory of NP-completeness
"Decomposing constraint satisfaction problems using database techniques"
"An algebraic characterization of tractable constraints"
"A unifying framework for tractable con- straints"
"A test for tractability"
"Tractable constraints on ordered domains"
"Fast parallel constraint satisfaction"
"On binary constraint problems"
"Classifying essentially minimal clones"
"Consistency in networks of relations"
"Networks of constraints: fundamental properties and applications to picture processing"
"Constraint relaxation may be perfect"
Computational Complexity
"Minimal clones I: the five types"
"The complexity of satisfiability problems"
Clones in
"On the Minimality and Decomposability of Row-Convex Constraint Networks"
"A generic arc-consistency algorithm and its specializations"
--TR
A sufficient condition for backtrack-bounded search
Network-based heuristics for constraint-satisfaction problems
Constraint relaxation may be perfect
From local to global consistency
Structure identification in relational data
A generic arc-consistency algorithm and its specializations
Fast parallel constraint satisfaction
Monotone monadic SNP and constraint satisfaction
On binary constraint problems
Decomposing constraint satisfaction problems using database techniques
Characterising tractable constraints
Tractable constraints on ordered domains
A relational model of data for large shared data banks
Computers and Intractability
A Unifying Framework for Tractable Constraints
The complexity of satisfiability problems
--CTR
Andrei Bulatov , Andrei Krokhin , Peter Jeavons, The complexity of maximal constraint languages, Proceedings of the thirty-third annual ACM symposium on Theory of computing, p.667-674, July 2001, Hersonissos, Greece
Yuanlin Zhang , Roland H. C. Yap, Consistency and set intersection, Eighteenth national conference on Artificial intelligence, p.971-972, July 28-August 01, 2002, Edmonton, Alberta, Canada
Richard E. Stearns , Harry B. Hunt, III, Exploiting structure in quantified formulas, Journal of Algorithms, v.43 n.2, p.220-263, May 2002
Hubie Chen, The expressive rate of constraints, Annals of Mathematics and Artificial Intelligence, v.44 n.4, p.341-352, August 2005
D. A. Cohen, Tractable Decision for a Constraint Language Implies Tractable Search, Constraints, v.9 n.3, p.219-229, July 2004
Peter Jeavons , David Cohen , Justin Pearson, Constraints and universal algebra, Annals of Mathematics and Artificial Intelligence, v.24 n.1-4, p.51-67, 1998
Lane A. Hemaspaandra, SIGACT news complexity theory column 34, ACM SIGACT News, v.32 n.4, December 2001
Lefteris M. Kirousis , Phokion G. Kolaitis, The complexity of minimal satisfiability problems, Information and Computation, v.187 n.1, p.20-39, November 25,
Andrei Bulatov , Martin Grohe, The complexity of partition functions, Theoretical Computer Science, v.348 n.2, p.148-186, 8 December 2005
Lane A. Hemaspaandra, SIGACT news complexity theory column 43, ACM SIGACT News, v.35 n.1, March 2004
Andrei A. Bulatov, H-Coloring dichotomy revisited, Theoretical Computer Science, v.349 n.1, p.31-39, 12 December 2005
Victor Dalmau , Peter Jeavons, Learnability of quantified formulas, Theoretical Computer Science, v.306 n.1-3, p.485-511, 5 September
David Cohen , Peter Jeavons , Richard Gault, New Tractable Classes From Old, Constraints, v.8 n.3, p.263-282, July
David Cohen , Peter Jeavons , Richard Gault, New tractable constraint classes from old, Exploring artificial intelligence in the new millennium, Morgan Kaufmann Publishers Inc., San Francisco, CA,
David Cohen , Peter Jeavons , Peter Jonsson , Manolis Koubarakis, Building tractable disjunctive constraints, Journal of the ACM (JACM), v.47 n.5, p.826-853, Sept. 2000
Hubie Chen, Periodic Constraint Satisfaction Problems: Tractable Subclasses, Constraints, v.10 n.2, p.97-113, April 2005
Richard Gault , Peter Jeavons, Implementing a Test for Tractability, Constraints, v.9 n.2, p.139-160, April 2004
Harry B. Hunt, III , Madhav V. Marathe , Richard E. Stearns, Strongly-local reductions and the complexity/efficient approximability of algebra and optimization on abstract algebraic structures, Proceedings of the 2001 international symposium on Symbolic and algebraic computation, p.183-191, July 2001, London, Ontario, Canada
Vctor Dalmau, A new tractable class of constraint satisfaction problems, Annals of Mathematics and Artificial Intelligence, v.44 n.1-2, p.61-85, May 2005
Peter Jonsson , Andrei Krokhin, Recognizing frozen variables in constraint satisfaction problems, Theoretical Computer Science, v.329 n.1-3, p.93-113, 13 December 2004
Martin Grohe , Dniel Marx, Constraint solving via fractional edge covers, Proceedings of the seventeenth annual ACM-SIAM symposium on Discrete algorithm, p.289-298, January 22-26, 2006, Miami, Florida
Georg Gottlob , Nicola Leone , Francesco Scarcello, Hypertree decompositions and tractable queries, Proceedings of the eighteenth ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems, p.21-32, May 31-June 03, 1999, Philadelphia, Pennsylvania, United States
David A. Cohen , Martin C. Cooper , Peter G. Jeavons , Andrei A. Krokhin, The complexity of soft constraint satisfaction, Artificial Intelligence, v.170 n.11, p.983-1016, August 2006
Andrei A. Bulatov, A dichotomy theorem for constraint satisfaction problems on a 3-element set, Journal of the ACM (JACM), v.53 n.1, p.66-120, January 2006
Georg Gottlob , Francesco Scarcello , Martha Sideri, Fixed-parameter complexity in AI and nonmonotonic reasoning, Artificial Intelligence, v.138 n.1-2, p.55-86, June 2002
Andrei A. Bulatov , Vctor Dalmau, Towards a dichotomy theorem for the counting constraint satisfaction problem, Information and Computation, v.205 n.5, p.651-678, May, 2007
Moshe Y. Vardi, Constraint satisfaction and database theory: a tutorial, Proceedings of the nineteenth ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems, p.76-85, May 15-18, 2000, Dallas, Texas, United States
Hubie Chen, A rendezvous of logic, complexity, and algebra, ACM SIGACT News, v.37 n.4, December 2006
Georg Gottlob , Nicola Leone , Francesco Scarcello, The complexity of acyclic conjunctive queries, Journal of the ACM (JACM), v.48 n.3, p.431-498, May 2001 | complexity;NP-completeness;indicator problem;constraint satisfaction problem |
263499 | Constraint tightness and looseness versus local and global consistency. | Constraint networks are a simple representation and reasoning framework with diverse applications. In this paper, we identify two new complementary properties on the restrictiveness of the constraints in a network, constraint tightness and constraint looseness, and we show their usefulness for estimating the level of local consistency needed to ensure global consistency, and for estimating the level of local consistency present in a network. In particular, we present a sufficient condition, based on constraint tightness and the level of local consistency, that guarantees that a solution can be found in a backtrack-free manner. The condition can be useful in applications where a knowledge base will be queried over and over and the preprocessing costs can be amortized over many queries. We also present a sufficient condition for local consistency, based on constraint looseness, that is straightforward and inexpensive to determine. The condition can be used to estimate the level of local consistency of a network. This in turn can be used in deciding whether it would be useful to preprocess the network before a backtracking search, and in deciding which local consistency conditions, if any, still need to be enforced if we want to ensure that a solution can be found in a backtrack-free manner. Two definitions of local consistency are employed in characterizing the conditions: the traditional variable-based notion and a recently introduced definition of local consistency called relational consistency. | Introduction
Constraint networks are a simple representation and reasoning framework. A
problem is represented as a set of variables, a domain of values for each variable,
and a set of constraints between the variables. A central reasoning task is then
to find an instantiation of the variables that satisfies the constraints. In spite of
the simplicity of the framework, many interesting problems can be formulated as
constraint networks, including graph coloring, scene labeling, natural language
parsing, and temporal reasoning.
In general, what makes constraint networks hard to solve is that they can
contain many local inconsistencies. A local inconsistency is an instantiation
of some of the variables that satisfies the relevant constraints but that cannot be
extended to an additional variable, and so cannot be part of any global solution.
If we are using a backtracking search to find a solution, such an inconsistency
can lead to a dead end in the search. This insight has led to the definition of
conditions that characterize the level of local consistency of a network [12, 18, 21]
and to the development of algorithms for enforcing local consistency conditions
by removing local inconsistencies (e.g., [2, 7, 10, 18, 21]).
Local consistency has proven to be an important concept in the theory and
practice of constraint networks for primarily two reasons. First, a common
method for finding solutions to a constraint network is to first preprocess the
network by enforcing local consistency conditions, and then perform a backtracking
search. The preprocessing step can reduce the number of dead ends
reached by the backtracking algorithm in the search for a solution. With a
similar aim, local consistency techniques can be interleaved with backtracking
search. The effectiveness of using local consistency techniques in these two ways
has been studied empirically (e.g., [4, 6, 13, 14, 23]). Second, much previous
work has identified conditions for when a certain level of local consistency is
sufficient to guarantee that a network is globally consistent or that a solution
can be found in a backtrack-free manner (e.g., [5, 7, 11, 12, 21, 29]).
In this paper, we identify two new complementary properties on the restrictiveness
of the constraints in a network, constraint tightness and constraint
looseness, and we show their usefulness for estimating the level of local consistency
needed to ensure global consistency, and for estimating the level of local
consistency present in a network. In particular, we present the following results.
We show that, in any constraint network where the constraints have arity r
or less and tightness of m or less, if the network is strongly ((m+1)(r-1)+1)-consistent,
then it is globally consistent. Informally, a constraint network is
strongly k-consistent if any instantiation of any k-1 or fewer variables that
satisfies the constraints can be extended consistently to any additional variable.
Also informally, given an r-ary constraint and an instantiation of r-1 of the
variables that participate in the constraint, the parameter m is an upper bound
on the number of instantiations of the rth variable that satisfy the constraint.
1 A preliminary version of this paper appeared in AAAI-94 [27].
In general, such sufficient conditions, bounding the level of local consistency
that guarantees global consistency, are important in applications where constraint
networks are used for knowledge base maintenance and there will be
many queries against the knowledge base. Here, the cost of preprocessing will
be amortized over the many queries. They are also of interest for their explanatory
power, as they can be used for characterizing the difficulty of problems
formulated as constraint networks.
We also show that, in any constraint network where the domains are of size
d or less, and the constraints have looseness of m or greater, the network is
strongly ⌈d/(d-m)⌉-consistent 2 . Informally, given an r-ary constraint and
an instantiation of r-1 of the variables that participate in the constraint, the
parameter m is a lower bound on the number of instantiations of the rth variable
that satisfy the constraint. The bound is straightforward and inexpensive to
determine. In contrast, all but low-order local consistency is expensive to verify
or enforce, as the optimal algorithms to enforce k-consistency are O(n^k d^k), for
a network with n variables and domains of size at most d [2, 24].
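The guaranteed level in the looseness condition is just a ceiling computation. A minimal sketch, assuming m &lt; d; the function name is my own:

```python
import math

def looseness_level(d, m):
    """Strong consistency level guaranteed when every constraint is m-loose
    and all domains have size at most d (requires m < d):
    the network is strongly ceil(d / (d - m))-consistent."""
    return math.ceil(d / (d - m))
```

For example, d = 4 with m = 2 yields strong 2-consistency, while m = 3 yields strong 4-consistency: the looser the constraints, the higher the level of inherent local consistency.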
The condition based on constraint looseness is useful in two ways. First, it
can be used in deciding which low-order local consistency techniques will not
change the network and thus are not useful for processing a given constraint
network. For example, we use our results to show that the n-queens problem, a
widely used test-bed for comparing backtracking algorithms, has a high level of
inherent local consistency. As a consequence, it is generally fruitless to preprocess
such a network. Second, it can be used in deciding which local consistency
conditions, if any, still need to be enforced if we want to ensure that a solution
can be found in a backtrack-free manner.
Two definitions of local consistency are employed in characterizing the condi-
tions: the traditional variable-based notion and a recently introduced definition
of local consistency called relational consistency [9, 29].
2 Definitions and Preliminaries

A constraint network R is a set of n variables {x_1, ..., x_n}; a domain D_i of
possible values for each variable x_i, 1 ≤ i ≤ n; and a set of t constraint relations
{R_S1, ..., R_St}. Each constraint relation R_Si, 1 ≤ i ≤ t, is a subset of a
Cartesian product of the form ∏_{j ∈ S_i} D_j, where S_i ⊆ {1, ..., n}. We say that
R_Si constrains the variables {x_j : j ∈ S_i}. The arity of a constraint relation
is the number of variables that it constrains. The set {S_1, ..., S_t} is called the
scheme of R. We assume that S_i ≠ S_j whenever i ≠ j. Because we
assume that variables have names, the order of the variables constrained by a
relation is not important (see [26, pp. 43-45]). We use subsets of the integers
{1, ..., n} and subsets of the variables {x_1, ..., x_n} interchangeably.

2 ⌈x⌉, the ceiling of x, is the smallest integer greater than or equal to x.

Let Y ⊆ {x_1, ..., x_n} be a subset of the variables in a constraint network.
An instantiation of the variables in Y is an element of ∏_{x_j ∈ Y} D_j. Given an
instantiation -a of the variables in Y, an extension of -a to a variable x_i not in Y
is the tuple (-a, a_i), where a_i is in the domain of x_i.

We now introduce a needed operation on relations adopted from the relational
calculus (see [26] for details).

Definition 1 (projection) Let R_S be a relation, let S be the set of variables
constrained by R_S, and let S' ⊆ S be a subset of the variables. The projection of
R_S onto the set of variables S', denoted \Pi_S'(R_S), is the relation which constrains
the variables in S' and contains all tuples u ∈ ∏_{j ∈ S'} D_j such that there exists
an instantiation -a of the variables in S - S' such that the tuple (u, -a) ∈ R_S.
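For explicit finite relations, Definition 1 amounts to dropping coordinates and discarding duplicate tuples. A direct sketch (the representation, a scheme tuple plus a set of value tuples, is my own):

```python
def project(scheme, tuples, sub):
    """Projection Pi_sub(R_S): keep only the coordinates of the variables
    in `sub` (a subset of `scheme`), collapsing tuples that agree there."""
    idx = [scheme.index(v) for v in sub]
    return {tuple(t[i] for i in idx) for t in tuples}
```

Projecting a ternary relation onto its first two variables, for instance, collapses all tuples that differ only in the third position.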
Let Y ⊆ {x_1, ..., x_n} be a subset of the variables in a constraint network. An
instantiation -a of the variables in Y satisfies or is consistent with a relation R_Si,
S_i ⊆ Y, if and only if the tuple \Pi_Si({-a}) ∈ R_Si. An instantiation -a of the variables in Y is
consistent with a network R if and only if for all S_i in the scheme of R such that
S_i ⊆ Y, -a satisfies R_Si. A solution to a constraint network is an instantiation
of all n of the variables that is consistent with the network.
Definition 2 (m-tight) A constraint relation R of arity k is called m-tight if,
for any variable x_i constrained by R and any instantiation -a of the remaining
k-1 variables constrained by R, either there are at most m extensions of -a to
x_i that satisfy R, or there are exactly |D_i| such extensions.

Definition 3 (m-loose) A constraint relation R of arity k is called m-loose if,
for any variable x_i constrained by R and any instantiation -a of the remaining
k-1 variables constrained by R, there are at least m extensions of -a to x_i that
satisfy R.
The tightness and looseness properties are complementary properties on the
constraints in a network. Tightness is an upper bound on the number of extensions
and looseness is a lower bound. With the exception of the universal
relation, a constraint relation that is m_1-loose and m_2-tight has m_1 ≤ m_2 (the
case where there are exactly |D_i| extensions to variable x_i is handled specially in
the definition of tightness). Every constraint relation is 0-loose and a constraint
relation is d-loose if and only if it is the universal relation, where the domains of
the variables constrained by the relation are of size at most d. Every constraint
relation is (d-1)-tight and a constraint relation is 0-tight if and only if it is
either the null relation or the universal relation.
In what follows, we are interested in the least upper bound and greatest lower
bound on the number of extensions of the relations of a constraint network. It
is easy to check that:
Given a constraint network with t constraint relations each with
arity at most r and each with at most e tuples in the relation, determining the
least m such that all t of the relations are m-tight requires O(tre) time. The
same bound applies to determining the greatest m such that the relations are
m-loose.
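These parameters can be computed mechanically from an explicit relation. A brute-force sketch (names and representation are my own; for clarity it enumerates every instantiation of the remaining variables, so it does not achieve the O(tre) bound stated above):

```python
from itertools import product

def tight_loose_params(relation, domains):
    """Least m such that the relation is m-tight and greatest m such that
    it is m-loose (Definitions 2 and 3). `relation` is a set of tuples
    over the given per-variable `domains`."""
    arity = len(domains)
    least_tight, greatest_loose = 0, min(len(d) for d in domains)
    for i in range(arity):
        others = [domains[j] for j in range(arity) if j != i]
        for inst in product(*others):
            # extensions of inst to variable x_i that satisfy the relation
            ext = sum(1 for v in domains[i]
                      if inst[:i] + (v,) + inst[i:] in relation)
            if ext < len(domains[i]):  # exactly |D_i| extensions is exempt
                least_tight = max(least_tight, ext)
            greatest_loose = min(greatest_loose, ext)
    return least_tight, greatest_loose
```

A not-equal constraint over a 3-value domain, for instance, comes out (2, 2): (d-1)-tight, as noted for graph coloring in Example 5, and the universal binary relation comes out 0-tight and d-loose.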
Example 1. We illustrate the definitions using the following network R
with variables fx 1 domains
(b,b,b), (b,c,a), (c,a,b), (c,b,a), (c,c,c)g,
g. The projection of
is given by,
(R (c,c)g.
The instantiation -
(a,c,b) of the variables in is consistent
with R since \Pi S2
. The instantiation of the variables in Y is not consistent with
R since \Pi S2
(a,a,a,b) be an instantiation of all of the variables fx 1 g.
The tuple - a is consistent with R and is therefore a solution of the network. The
set of all solutions of the network is given by,
f(a,a,a,b), (a,a,c,b), (a,b,c,a), (b,a,c,b),
(b,c,a,c), (c,a,b,b), (c,b,a,c), (c,c,c,a)g.
It can be verified that all of the constraints are 2-tight and 1-loose. As
a partial verification of the binary constraint R_S3, consider the extensions to
variable x_3 given instantiations of the variable x_4. For the instantiation -a =
(a) of x_4 there is 1 extension to x_3; for -a = (b) there are 3 extensions (but
|D_3| = 3, so the definition of 2-tightness is still satisfied); and for -a = (c) there
is 1 extension.
Local consistency has proven to be an important concept in the theory and
practice of constraint networks. We now review previous definitions of local
consistency, which we characterize as variable-based and relation-based.
2.1 Variable-based local consistency
Mackworth [18] defines three properties of networks that characterize local consistency:
node, arc, and path consistency. Freuder [10] generalizes this to k-
consistency, which can be defined as follows:
Definition 4 (k-consistency, global consistency) A constraint network R
is k-consistent if and only if given
1. any k-1 distinct variables, and
2. any instantiation -a of the variables that is consistent with R,
there exists an extension of -a to any kth variable such that the k-tuple is consistent
with R. A network is strongly k-consistent if and only if it is j-consistent
for all j ≤ k. A strongly n-consistent network is called globally consistent,
where n is the number of variables in the network.
Node, arc, and path consistency correspond to one-, two-, and three-consistency,
respectively. Globally consistent networks have the property that a solution
can be found without backtracking [11].
Figure 1: (a) not 3-consistent; (b) not 4-consistent
Example 2. We illustrate the definition of k-consistency using the well-known
n-queens problem. The problem is to find all ways to place n-queens on
an n \Theta n chess board, one queen per column, so that each pair of queens does not
attack each other. One possible constraint network formulation of the problem
is as follows: there is a variable for each column of the chess board, x
the domains of the variables are the possible row positions, D
and the binary constraints are that two queens should not attack
each other. Consider the constraint network for the 4-queens problem. It can
be seen that the network is 2-consistent since, given that we have placed a single
queen on the board, we can always place a second queen such that the queens do
not attack each other. However, the network is not 3-consistent. For example,
given the consistent placement of two queens shown in Figure 1a, there is no
way to place a queen in the third column that is consistent with the previously
placed queens. Similarly the network is not 4-consistent (see Figure 1b).
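The 4-queens observations can be checked mechanically. A brute-force sketch of Definition 4 for binary networks over domains {0, ..., n-1} (the encoding and function names are my own):

```python
from itertools import combinations, product

def ok(i, a, j, b):
    """Queens in columns i and j at rows a and b do not attack each other:
    different rows and different diagonals."""
    return a != b and abs(a - b) != abs(i - j)

def is_k_consistent(n, constraint, k):
    """Brute-force test of Definition 4 for a binary network with
    variables 0..n-1 and domains {0, ..., n-1}. Illustrative sketch only."""
    for vars_ in combinations(range(n), k - 1):
        for inst in product(range(n), repeat=k - 1):
            # the instantiation itself must be consistent with the network
            if not all(constraint(vars_[p], inst[p], vars_[q], inst[q])
                       for p, q in combinations(range(k - 1), 2)):
                continue
            # it must extend consistently to every additional variable
            for x in range(n):
                if x in vars_:
                    continue
                if not any(all(constraint(vars_[p], inst[p], x, b)
                               for p in range(k - 1)) for b in range(n)):
                    return False
    return True
```

On the 4-queens network this reports 2-consistency but not 3-consistency, in line with Figure 1a.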
2.2 Relation-based local consistency
In [29], we extended the notions of arc and path consistency to non-binary relations.
The new local consistency conditions were called relational arc- and
path-consistency. In [9], we generalized relational arc- and path-consistency to
relational m-consistency. In the definition of relational m-consistency, the relations
rather than the variables are the primitive entities. As we shall see in
subsequent sections, this allows expressing the relationships between the restrictiveness
of the constraints and local consistency in a way that avoids an explicit
reference to the arity of the constraints. The definition below is slightly weaker
than that given in [9].
Definition 5 (relational m-consistency) A constraint network R is relationally
m-consistent if and only if given
1. any m distinct relations R_S1, ..., R_Sm,
2. any variable x ∈ S_1 ∩ S_2 ∩ ... ∩ S_m, and
3. any instantiation -a of the variables in (S_1 ∪ S_2 ∪ ... ∪ S_m) - {x} that is consistent
with R,
there exists an extension of -a to x such that the extension is consistent with the
relations. A network is strongly relationally m-consistent if it is relationally
j-consistent for every j ≤ m.
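Definition 5 can likewise be checked by brute force on small explicit networks. A sketch (the representation and the toy network in the test are my own, not the paper's Example 3 data):

```python
from itertools import combinations, product

def consistent(inst, relations):
    """inst: dict var -> value. True iff every relation whose scheme is
    fully instantiated by inst is satisfied (Section 2's notion)."""
    return all(tuple(inst[v] for v in scheme) in tuples
               for scheme, tuples in relations
               if set(scheme) <= inst.keys())

def relationally_m_consistent(network, domains, m):
    """Brute-force test of Definition 5 on a small explicit network.
    network: list of (scheme, set of tuples); domains: dict var -> list."""
    for rels in combinations(network, m):
        schemes = [set(s) for s, _ in rels]
        union = set().union(*schemes)
        for x in set.intersection(*schemes):
            rest = sorted(union - {x})
            for vals in product(*(domains[v] for v in rest)):
                inst = dict(zip(rest, vals))
                if not consistent(inst, network):
                    continue
                # some extension to x must satisfy the m chosen relations
                if not any(consistent({**inst, x: v}, list(rels))
                           for v in domains[x]):
                    return False
    return True
```

A two-variable toy network with a unary constraint and an exclusive-or constraint is relationally 1-consistent but not relationally 2-consistent, mirroring the flavor of Example 3.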
Example 3. Consider the constraint network with variables fx 1
domains
(b,a,c)g.
g. The constraints are not
relationally 1-consistent. For example, the instantiation (a,b,b) of the variables
in is consistent with the network (trivially so, since S 1 6' Y and
does not have an extension to x 5 that satisfies R S1 . Similarly,
the constraints are not relationally 2-consistent. For example, the instantiation
(c,b,a,a) of the variables in fx 1 is consistent with the network (again,
trivially so), but it does not have an extension to x 5 that satisfies both R S1 and
RS2 . If we add the constraints R fx2g fag and R
fbg, the set of solutions of the network does not change, and it can be
verified that the network is both relationally 1- and 2-consistent.
When the constraints are all binary, relational m-consistency is identical
to variable-based (m+1)-consistency; in general, the conditions are different.
While enforcing variable-based m-consistency can be done in polynomial time,
it is unlikely that relational m-consistency can be achieved tractably, since even
a low level of relational consistency solves the NP-complete problem of propositional satisfiability (see
Example 6). A more direct argument suggesting an increase in time and space
complexity is the fact that an algorithm may need to record relations of arbitrary
arity. As with variable-based local-consistency, we can improve the efficiency of
enforcing relational consistency by enforcing it only along a certain direction or
linear ordering of the variables. Algorithms for enforcing relational consistency
and directional relational consistency are given in [9, 28].
3 Constraint Tightness vs Global Consistency
In this section, we present relationships between the tightness of the constraints
and the level of local consistency sufficient to ensure that a network is globally
consistent.
Much work has been done on identifying relationships between properties of
constraint networks and the level of local consistency sufficient to ensure global
consistency. This work falls into two classes: identifying topological properties
of the underlying graph of the network (e.g., [7, 8, 11, 12]) and identifying
properties of the constraints (e.g., [3, 15, 16, 21, 29]). Dechter [5] identifies
the following relationship between the size of the domains of the variables, the
arity of the constraints, and the level of local consistency sufficient to ensure
the network is globally consistent.
Theorem 1 (Dechter [5]) If a constraint network with domains that are of
size at most d and relations that are of arity at most r is strongly (d(r-1)+1)-consistent,
then it is globally consistent.
For some networks, Dechter's theorem is tight in that the level of local consistency
specified by the theorem is really required (graph coloring problems formulated
as constraint networks are an example). For other networks, Dechter's
theorem overestimates. Our results should be viewed as an improvement on
Dechter's theorem. By taking into account the tightness of the constraints, our
results always specify a level of strong consistency that is less than or equal to
the level of strong consistency required by Dechter's theorem.
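Both bounds are simple arithmetic in d, r, and m, so the comparison can be made mechanical. A small sketch (function names are my own; tightness_level encodes the bound of Theorem 2 below):

```python
def dechter_level(d, r):
    """Theorem 1 (Dechter): strong (d(r-1)+1)-consistency suffices for
    domains of size at most d and constraints of arity at most r."""
    return d * (r - 1) + 1

def tightness_level(m, r):
    """Theorem 2: strong ((m+1)(r-1)+1)-consistency suffices for m-tight
    constraints of arity at most r."""
    return (m + 1) * (r - 1) + 1
```

For binary 3-tight constraints (the confused n-queens of Example 4) the tightness-based level is 5 regardless of the domain size n, whereas Dechter's level grows as n+1.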
The following lemma is needed in the proof of the main result.
Lemma 1 Let R_S1, ..., R_Sl be l relations that constrain a variable x, let d be
the size of the domain of variable x, and let -a be an instantiation of all of
the variables except for x that are constrained by the l relations (i.e., -a is an
instantiation of the variables in (S_1 ∪ ... ∪ S_l) - {x}). If
1. each relation is m-tight, for some m, 0 ≤ m ≤ d-1, and
2. for every subset of m+1 or fewer relations from {R_S1, ..., R_Sl}, there
exists at least one extension of -a to x that satisfies each of the relations
in the subset,
then there exists at least one extension of -a to x that satisfies all l relations.
Proof. Let a_1, a_2, ..., a_d be the d elements in the domain of x. We say that
a relation allows an element a_i if the extension (-a, a_i) of -a to x satisfies the
relation. Assume to the contrary that an extension of -a to x that satisfies all of
the l relations does not exist. Then, for each element a_i in the domain of x there
must exist at least one relation that does not allow a_i. Let c_i denote one of the
relations that does not allow a_i. By construction, the set c = {c_1, ..., c_d}
is a set of relations for which there does not exist an extension of -a to x that
satisfies each of the relations in the set (every candidate a_i is ruled out since c_i
does not allow a_i). For every possible value of m, this leads to a contradiction.

Case 1 (m = d-1). The contradiction is immediate, as c = {c_1, ..., c_d} is
a set of relations of size m+1 for which there does not exist an extension to x
that satisfies every relation in the set. This contradicts condition (2).

Case 2 (m = d-2). The nominal size of the set c is m+2.
We claim, however, that there is a repetition in c and that the true size of the
set is m+1. Assume to the contrary that c_i ≠ c_j for i ≠ j. Recall c_i is a relation
that does not allow a_i, 1 ≤ i ≤ d. Consider {c_1, ..., c_{d-1}}. This is a set
of m+1 relations, so by condition (2) there must exist an a_i that every relation
in the set allows. The only possibility is a_d. Now consider {c_1, ..., c_{d-2}, c_d}.
Again, this is a set of m+1 relations, so there must exist an a_i that every
relation in the set allows. This time the only possibility is a_{d-1}. Continuing in
this manner, we can show that relation c_1 must allow a_d, a_{d-1}, ..., a_2, and so
must allow exactly m+1 extensions. This contradicts condition (1). Therefore,
it must be the case that c_i = c_j for some i ≠ j. Thus, the set c is of size m+1,
and this contradicts condition (2).

Case 3 (m ≤ d-3). The remaining cases are similar.
In each case we argue that (i) there are repetitions in the set c, and
(ii) the true size of the set c is m+1; a contradiction is then derived by
appealing to condition (2).

Thus, there exists at least one extension of -a to x that satisfies all of the relations. 2

We first state the result using variable-based local consistency and then state
the result using relation-based local consistency.
Theorem 2 If a constraint network with relations that are m-tight and of arity
at most r is strongly ((m+1)(r-1)+1)-consistent, then it is globally consistent.
Proof. Let k = (m+1)(r-1)+1. We show that any network with relations
that are m-tight and of arity at most r that is strongly k-consistent is (k+i)-
consistent for any i ≥ 1.

Without loss of generality, let X = {x_1, ..., x_{k+i-1}} be a set of k+i-1
variables, let -a be an instantiation of the variables in X that is consistent with
the constraint network, and let x_{k+i} be an additional variable. Using Lemma 1,
we show that there exists an extension of -a to x_{k+i} that is consistent with the
constraint network. Let R_S1, ..., R_Sl be the l relations which are all and only the
relations which constrain x_{k+i} and a subset of variables from X. To be
consistent with the constraint network, the extension of -a to x_{k+i} must satisfy
each of the l relations. Now, condition (1) of Lemma 1 is satisfied since each of
the l relations is m-tight. It remains to show that condition (2) is satisfied. By
definition, the requirement of strong ((m+1)(r-1)+1)-consistency ensures
that any instantiation of any (m+1)(r-1) or fewer variables that is consistent
with the network has an extension to x_{k+i} such that the extension is also
consistent with the network. Note, however, that since each of the l relations is
of arity at most r and constrains x_{k+i}, each relation can constrain at most r-1
variables that are not constrained by any of the other relations. Therefore, the
requirement of strong ((m+1)(r-1)+1)-consistency also ensures that for any
subset of m+1 or fewer relations from {R_S1, ..., R_Sl}, there exists an extension
of -a to x_{k+i} that satisfies each of the relations in the subset. Thus, condition (2)
of Lemma 1 is satisfied. Therefore, from Lemma 1 it follows that there is an
extension of -a to x_{k+i} that is consistent with the constraint network. 2
Theorem 2 always specifies a level of strong consistency that is less than or
equal to the level of strong consistency required by Dechter's theorem (Theorem 1).
The level of required consistency is equal only when m = d-1. As well,
the theorem can sometimes be usefully applied when Dechter's theorem cannot.
As the following example illustrates, both r, the arity of the constraints, and
can change if the level of consistency required by the theorem is not present
and must be enforced. The parameter r can only increase; m can decrease,
as shown below, but also increase. The parameter m will increase if all of the
following hold: (i) there previously was no constraint between a set of variables,
(ii) enforcing a certain level of consistency results in a new constraint being
recorded between those variables and, (iii) the new constraint has a larger m
value than the previous constraints.
Example 4. Nadel [22] introduces a variant of the n-queens problem called
confused n-queens. The problem is to find all ways to place n-queens on an n \Theta n
chess board, one queen per column, so that each pair of queens does attack each
other. One possible constraint network formulation of the problem is as follows:
there is a variable for each column of the chess board, x the domains
of the variables are the possible row positions, D
and the binary constraints are that two queens should attack each other. The
constraint relation between two variables x i and x
The problem is worth considering, as Nadel [22] uses confused n-queens
in an empirical comparison of backtracking algorithms for solving constraint
networks. Thus it is important to analyze the difficulty of the problems to set
the empirical results in context. As well, the problem is interesting in that it
provides an example where Theorem 2 can be applied but Dechter's theorem
cannot (since m &lt; d-1). Independently of n, the constraint relations are all
3-tight. Hence, the theorem guarantees that if the network for the confused
n-queens problem is strongly 5-consistent, the network is globally consistent.
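The 3-tightness claim can be verified by brute force for particular board sizes. A sketch, assuming the attack-relation encoding above (function names are my own):

```python
def attack(i, a, j, b):
    """Confused n-queens constraint: queens in columns i and j at rows a
    and b MUST attack each other: same row or same diagonal."""
    return a == b or abs(a - b) == abs(i - j)

def tightness(n):
    """Largest number of rows b compatible with a fixed queen (column i,
    row a) seen from another column j: the parameter m of Definition 2."""
    return max(sum(1 for b in range(n) if attack(i, a, j, b))
               for i in range(n) for j in range(n) if i != j
               for a in range(n))
```

The compatible rows are a and a ± |i - j|, so the count is at most 3; for n ≥ 4 the maximum 3 is strictly below the domain size n (so Definition 2's exemption never applies), and the constraints are 3-tight independently of n.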
First, suppose that n is even and we attempt to either verify or achieve this
level of strong consistency by applying successively stronger local consistency
algorithms. Kondrak [17] has shown that the following analysis holds for all
even n.
1. Applying an arc consistency algorithm results in no changes as the network
is already arc consistent.
2. Applying a path consistency algorithm does tighten the constraints between
the variables. Once the network is made path consistent, the constraint
relations are all 2-tight. Now the theorem guarantees that if the
network is strongly 4-consistent, it is globally consistent.
3. Applying a 4-consistency algorithm results in no changes as the network
is already 4-consistent. Thus, the network is strongly 4-consistent and
therefore also globally consistent.
Second, suppose that n is odd. This time, after applying path consistency,
the networks are still 3-tight and it can be verified that the networks are not
4-consistent. Enforcing 4-consistency requires 3-ary constraints. Adding the
necessary 3-ary constraints does not change the value of m; the networks are
still 3-tight. Hence, by Theorem 2, if the networks are strongly 9-consistent, the
networks are globally consistent. Kondrak [17] has shown that recording 3-ary
constraints is sufficient to guarantee that the networks are strongly 9-consistent
for all odd n. Hence, independently of n, the networks are globally consistent
once strong 4-consistency is enforced.
Recall that Nadel [22] uses confused n-queens problems to empirically compare
backtracking algorithms for finding all solutions to constraint networks.
Nadel states that these problems provide a "non-trivial test-bed" [22, p.190].
We believe the above analysis indicates that these problems are quite easy and
that any empirical results on these problems should be interpreted in this light.
Easy problems potentially make even naive algorithms for solving constraint
networks look promising. To avoid this potential pitfall, backtracking algorithms
should be tested on problems that range from easy to hard. In general,
hard problems are those that require a high level of local consistency to ensure
global consistency. Note also that these problems are trivially satisfiable.
Example 5. The graph k-colorability problem can be viewed as a problem
on constraint networks: there is a variable for each node in the graph, the
domains of the variables are the k possible colors, and the binary constraints are
that two adjacent nodes must be assigned different colors. Graph k-colorability
provides examples of networks where both Theorems 1 and 2 give the same
bound on the sufficient level of local consistency, since the constraints are
(d - 1)-tight.
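As a concrete check (an illustrative sketch of ours, not from the paper), the not-equal constraint over a domain of k colors supports each value by exactly the k - 1 other colors, so it is both (k - 1)-tight and (k - 1)-loose:

```python
k = 4                                    # number of colors (our choice)
domain = range(k)
not_equal = {(a, b) for a in domain for b in domain if a != b}

# number of supports of each value: always exactly k - 1, so the
# constraint is both (k-1)-tight and (k-1)-loose
counts = [sum((a, b) in not_equal for b in domain) for a in domain]
assert min(counts) == k - 1 and max(counts) == k - 1
```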
We now show how the concept of relation-based local consistency can be
used to alternatively describe Theorem 2.
Theorem 3 If a constraint network with relations that are m-tight is strongly
relationally (m + 1)-consistent, then it is globally consistent.
Proof. We show that any network with relations that are m-tight that is
strongly relationally (m + 1)-consistent is k-consistent for any k >= 1 and is
therefore globally consistent.
Without loss of generality, let X = {x_1, ..., x_{k-1}} be a set of k - 1 variables,
let -a be an instantiation of the variables in X that is consistent with the constraint
network, and let x_k be an additional variable. Using Lemma 1, we show
that there exists an extension of -a to x_k that is consistent with the constraint
network. Let R_1, ..., R_l be l relations which are all and only the relations
which constrain only x_k and a subset of variables from X. To be consistent with
the constraint network, the extension of -a to x_k must satisfy each of the l relations.
Now, condition (1) of Lemma 1 is satisfied since each of the l relations is
m-tight. Further, condition (2) of Lemma 1 is satisfied since, by definition, the
requirement of strong relational (m + 1)-consistency ensures that for any subset
of m + 1 or fewer of the relations, there exists an extension of -a to x_k that satisfies
each of the relations in the subset. Therefore, from Lemma 1 it follows that
there is an extension of -a to x_k that is consistent with the constraint network. 2

As an immediate corollary of Theorem 3, if we know that the result of
applying an algorithm for enforcing strong relational (m + 1)-consistency will
be that all of the relations will be m-tight, we can guarantee a priori that the
algorithm will return an equivalent, globally consistent network.
Example 6. Consider networks where the domains of the variables are of
size two. For example, the satisfiability of propositional CNFs provides an example
of networks with domains of size two.
with domains of size two are 1-tight and any additional relations that are added
to the network as a result of enforcing strong relational 2-consistency will also be
1-tight. Thus, the consistency of such networks can be decided by an algorithm
that enforces strong relational 2-consistency. A different derivation of the same
result is already given by [5, 29].
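The 1-tightness fact can be confirmed exhaustively: over two-valued domains a value is supported by 0, 1, or all 2 values of the other variable, so every binary relation is 1-tight. A small sketch (ours, for illustration):

```python
from itertools import combinations, product

domain = (0, 1)
pairs = list(product(domain, repeat=2))

def is_m_tight(rel, m):
    """m-tight: each value of either variable is supported by at most m
    values of the other variable, or by all of them."""
    for a in domain:
        for s in (sum((a, b) in rel for b in domain),    # supports of a as var 1
                  sum((b, a) in rel for b in domain)):   # supports of a as var 2
            if s > m and s != len(domain):
                return False
    return True

# all 2^4 = 16 binary relations over two-valued domains are 1-tight
assert all(is_m_tight(set(rel), 1)
           for r in range(len(pairs) + 1)
           for rel in combinations(pairs, r))
```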
A backtracking algorithm constructs and extends partial solutions by instantiating
the variables in some linear order. Global consistency implies that
for any ordering of the variables the solutions to the constraint network can be
constructed in a backtrack-free manner; that is, a backtracking search will not
encounter any dead-ends in the search. Dechter and Pearl [7] observe that it is
often sufficient to be backtrack-free along a particular ordering of the variables
and that local consistency can be enforced with respect to that ordering only.
Frequently, if the property of interest (in our case tightness and looseness) is
satisfied along that ordering we can conclude global consistency restricted to
that ordering as well. Enforcing relational consistency with respect to an ordering
of the variables can be done by a general elimination algorithm called
Directional-Relational-Consistency, presented in [9]. Such an algorithm
has the potential of being more effective in practice and in the worst-case as it
requires weaker conditions. Directional versions of the tightness and looseness
properties and of the results presented in this paper are easily formulated using
the ideas presented in [7, 9].
The results of this section can be used as follows. Mackworth [19] shows
that constraint networks can be viewed as a restricted knowledge representation
and reasoning framework. In this context, solutions of the constraint network
correspond to models of the knowledge base. Our results which bound the
level of local consistency needed to ensure global consistency, can be useful in
applications where constraint networks are used as a knowledge base and there
will be many queries against the knowledge base. Preprocessing the constraint
network so that it is globally consistent means that queries can be answered in
a backtrack-free manner.
An equivalent globally consistent representation of a constraint network is
highly desirable since it compiles the answers to many queries and it can be
shown that there do exist constraint networks and queries against the network
for which there will be an exponential speed-up in the worst case. As an exam-
ple, consider a constraint network with no solutions. The equivalent globally
consistent network would contain only null relations and an algorithm answering
a query against this constraint network would quickly return "yes." Of
course, of more interest are examples where the knowledge base is consistent.
Queries which involve determining if a value for a variable is feasible (can occur
in a model of the network) can be answered from the globally consistent
representation by looking only at the domain of the variable. Queries which
involve determining if the values for a pair of variables are feasible (can both
occur in a single model of the network) can be answered by looking only at
the binary relations which constrain the two variables. It is clear that a general
algorithm to answer a query against the original network, such as backtracking
search, can take an exponential amount of time to answer the above queries. In
general, a globally consistent representation of a network will be useful whenever
it is more compact than the set of all solutions to the network. With the
globally consistent representation we can answer any query on a subset of the
variables Y ⊆ {x_1, ..., x_n} by restricting our attention to the smaller network
which consists of only the variables in Y and only the relations which constrain
the variables in Y . The global consistency property ensures that a solution for
all of the variables can also be created in a backtrack-free manner. However,
how our results will work in practice is an interesting empirical question which
remains open.
The results of this section are also interesting for their explanatory power,
as they can be used for characterizing the difficulty of problems formulated as
constraint networks (see the discussion at the end of the next section).
4 Constraint Looseness vs Local Consistency
In this section, we present a sufficient condition, based on the looseness of the
constraints and on the size of the domains of the variables, that gives a lower
bound on the inherent level of local consistency of a constraint network.
It is known that some classes of constraint networks already possess a certain
level of local consistency and therefore algorithms that enforce this level of local
consistency will have no effect on these networks. For example, Nadel [22]
observes that an arc consistency algorithm never changes a constraint network
formulation of the n-queens problem, for n > 3. Dechter [5] observes that
constraint networks that arise from the graph k-coloring problem are inherently
strongly k-consistent. Our results characterize what it is about the structure of
the constraints in these networks that makes these statements true.
The following lemma is needed in the proof of the main result.
Lemma 2 Let R_1, ..., R_l be l relations that constrain a variable x, let d be
the size of the domain of variable x, and let -a be an instantiation of all of
the variables except for x that are constrained by the l relations (i.e., -a is an
instantiation of the variables in (S_1 ∪ ... ∪ S_l) - {x}). If
1. each relation is m-loose, for some m, 0 <= m < d, and
2. l <= ceil(d / (d - m)) - 1,
then there exists at least one extension of -a to x that satisfies all l relations.
Proof. Let a_1, a_2, ..., a_d be the d elements in the domain of x. We say that
a relation allows an element a_i if the extension (-a, a_i) of -a to x satisfies the
relation. Now, the key to the proof is that, because each of the l relations is
m-loose, at least m elements from the domain of x are allowed by each relation.
Thus, each relation does not allow at most d - m elements, and together the l
relations do not allow at most l(d - m) elements from the domain of x. Thus, if
l(d - m) < d, it cannot be the case that every element in the domain of x is not
allowed by some relation. Equivalently, if
l <= ceil(d / (d - m)) - 1,
there exists at least one extension of -a to x that satisfies all l relations. 2
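The counting argument behind the lemma can be exercised numerically. In this sketch (ours; the values of d and m are arbitrary), each m-loose relation, once -a is fixed, is represented by a random set of at least m allowed values for x, and the sets are checked to have a common element whenever l stays within the bound:

```python
import math
import random

random.seed(1)
d, m = 10, 7                              # domain size d, looseness m (m < d)
l_max = math.ceil(d / (d - m)) - 1        # bound from Lemma 2: ceil(10/3) - 1 = 3

for _ in range(1000):
    l = random.randint(1, l_max)
    # each m-loose relation allows at least m of the d values of x;
    # draw such allowed sets at random
    allowed = [set(random.sample(range(d), random.randint(m, d)))
               for _ in range(l)]
    # together the l relations forbid at most l*(d - m) < d values,
    # so a common allowed value must remain
    assert set.intersection(*allowed)
```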
We first state the result using variable-based local consistency and then
state the result using relation-based local consistency. Let binomial(k, r) be the
binomial coefficient, the number of possible choices of r different elements from
a collection of k objects. If k < r, then binomial(k, r) = 0.
Theorem 4 A constraint network with domains that are of size at most d and
relations that are m-loose and of arity at least r, r >= 2, is strongly k-consistent,
where k is the minimum value such that the following inequality holds,
binomial(k - 1, r - 1) >= ceil(d / (d - m)) - 1.
Proof. Without loss of generality, let X = {x_1, ..., x_{k-1}} be a set of k - 1
variables, let -a be an instantiation of the variables in X that is consistent with
the constraint network, and let x k be an additional variable. To show that the
network is k-consistent, we must show that there exists an extension of -a to x k
that is consistent with the constraint network. Let R_1, ..., R_l be l relations
which are all and only the relations which constrain only x k and a subset of
variables from X. To be consistent with the constraint network, the extension
of - a to x k must satisfy each of the l relations. From Lemma 2, such an extension
exists if l <= ceil(d / (d - m)) - 1.
Now, the level of strong k-consistency is the minimum number of distinct
variables that can be constrained by the l relations. In other words, k is the
minimum number of variables that can occur in S_1 ∪ ... ∪ S_l ∪ {x_k}. We know that
each of the relations constrains the variable x_k. Thus, k = c + 1, where c is the
minimum number of variables in (S_1 ∪ ... ∪ S_l) - {x_k}. The minimum
value of c occurs when all of the relations have arity r and thus each S_i - {x_k},
i = 1, ..., l, is a set of r - 1 variables. Further, we know that each of the l
relations constrains a different subset of variables; i.e., if i != j, then S_i != S_j,
i, j = 1, ..., l. The binomial coefficient binomial(c, r - 1) tells us the number
of distinct subsets of cardinality r - 1 which are contained in a set of size c.
Thus, the smallest c with binomial(c, r - 1) >= l gives us the minimum number of
variables c that are needed in order to specify the remaining r - 1 variables in each
of the l relations, subject to the condition that each relation must constrain a
different subset of variables. 2
Constraint networks with relations that are all binary are an important
special case of Theorem 4.
Corollary 1 A constraint network with domains that are of size at most d and
relations that are binary and m-loose is strongly ceil(d / (d - m))-consistent.
Proof. All constraint relations are of arity r = 2. Hence, the minimum value
of k such that the inequality in Theorem 4 holds is k = ceil(d / (d - m)). 2
Theorem 4 always specifies a level of local consistency that is less than or
equal to the actual level of inherent local consistency of a constraint network.
That is, the theorem provides a lower bound. However, given only the looseness
of the constraints and the size of the domains, Theorem 4 gives as strong an
estimation of the inherent level of local consistency as possible, as examples can
be given for all m < d where the theorem is exact. Graph coloring problems
provide an example where the theorem is exact, whereas n-queens
problems provide an example where the theorem underestimates the true level
of local consistency.
Example 7. Consider again the well-known n-queens problem discussed in
Example 2. The problem is of historical interest but also of theoretical interest
due to its importance as a test problem in empirical evaluations of backtracking
algorithms and heuristic repair schemes for finding solutions to constraint
networks (e.g., [13, 14, 20, 22]). For n-queens networks, each of the domains
is of size n and each of the constraints is binary and (n - 3)-loose. Hence,
Theorem 4 predicts that n-queens networks are inherently strongly ceil(n/3)-consistent.
Thus, an n-queens constraint network is inherently arc consistent
for n >= 4, inherently path consistent for n >= 7, and so on, and we can predict
where it is fruitless to apply a low-order consistency algorithm in an attempt to
simplify the network (see Table 1). The actual level of inherent consistency is
floor(n/2) for n >= 7. Thus, for the n-queens problem, the theorem underestimates
the true level of local consistency.
Table 1: Predicted (ceil(n/3)) and actual (floor(n/2), for n >= 7) level of strong
local consistency for n-queens networks [table rows: "pred.", "actual"]
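These predictions can be confirmed by brute force for small n. The sketch below (illustrative, not from the paper) builds the not-attack relation of the standard n-queens network and checks that it is arc consistent for 4 <= n < 10 but not for n = 3:

```python
import math

def queens_relation(n, i, j):
    """Allowed (row_i, row_j) pairs for columns i != j: queens do NOT attack."""
    return {(a, b) for a in range(n) for b in range(n)
            if a != b and abs(a - b) != abs(i - j)}

def arc_consistent(n):
    """True if every row value of every column has a support in every other column."""
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            rel = queens_relation(n, i, j)
            for a in range(n):
                if not any((a, b) in rel for b in range(n)):
                    return False
    return True

assert not arc_consistent(3)         # arc consistency still prunes for n = 3
for n in range(4, 10):
    assert math.ceil(n / 3) >= 2     # predicted level covers arc consistency
    assert arc_consistent(n)
```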
Example 8. Graph k-colorability provides an example where Theorem 4
is exact in its estimation of the inherent level of local consistency (see Example
5 for the constraint network formulation of graph coloring). As Dechter [5]
states, graph coloring networks are inherently strongly k-consistent but are not
guaranteed to be strongly (k + 1)-consistent. Each of the domains is of size
k and each of the constraints is binary and (k - 1)-loose. Hence, Theorem 4
predicts that graph k-colorability networks are inherently strongly k-consistent.
Example 9. Consider a formula in 3-CNF which can be viewed as a constraint
network where each variable has the domain ftrue, falseg and each clause
corresponds to a constraint defined by its models. The domains are of size two
and all constraints are of arity 3 and are 1-loose. The minimum value of k such
that the inequality in Theorem 4 holds is k = 3. Hence, the networks are
strongly 3-consistent.
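The looseness claim can be verified exhaustively for a single clause. In the sketch below (ours; the clause chosen is arbitrary), a 3-clause forbids exactly one of the eight assignments to its variables, so fixing any two of them leaves at least one allowed value for the third:

```python
from itertools import product

# the relation of a clause, e.g. (x or not y or z): all models of the clause
clause = {(x, y, z) for x, y, z in product((0, 1), repeat=3)
          if x or (not y) or z}
assert len(clause) == 7              # only the assignment (0, 1, 0) is forbidden

# 1-loose: for every instantiation of two of the variables, the remaining
# variable has at least one allowed value
for pos in range(3):
    for rest in product((0, 1), repeat=2):
        allowed = [v for v in (0, 1)
                   if (rest[:pos] + (v,) + rest[pos:]) in clause]
        assert len(allowed) >= 1
```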
We now show how the concept of relation-based local consistency can be
used to alternatively describe Theorem 4.
Theorem 5 A constraint network with domains that are of size at most d and
relations that are m-loose is strongly relationally (ceil(d / (d - m)) - 1)-consistent.
Proof. Follows immediately from Lemma 2. 2
The results of this section can be used in two ways. First, they can be used
to estimate whether it would be useful to preprocess a constraint network using
a local consistency algorithm, before performing a backtracking search (see, for
example, [6] for an empirical study of the effectiveness of such preprocessing).
Second, they can be used in conjunction with previous work which has identified
conditions for when a certain level of local consistency is sufficient to ensure a
solution can be found in a backtrack-free manner (see, for example, the brief
review of previous work at the start of Section 3 together with the new results
presented there). Sometimes the level of inherent strong k-consistency guaranteed
by Theorem 4 is sufficient, in conjunction with these previously derived
conditions, to guarantee that the network is globally consistent and therefore a
solution can be found in a backtrack-free manner without preprocessing. Oth-
erwise, the estimate provided by the theorem gives a starting point for applying
local consistency algorithms.
The results of this section are also interesting for their explanatory power.
We conclude this section with some discussion on what Theorem 2 and Theorem
4 contribute to our intuitions about hard classes of problems (in the spirit
of, for example, [1, 30]). Hard constraint networks are instances which give rise
to search spaces with many dead ends. The hardest networks are those where
many dead ends occur deep in the search tree. Dead ends, of course, correspond
to partial solutions that cannot be extended to full solutions. Networks where
the constraints have tightness and looseness values m that are close to d,
the size of the domains of the variables, are good candidates
to be hard problems. The reasons are two-fold. First, networks that have
high looseness values have a high level of inherent strong consistency and strong
k-consistency means that all partial solutions are of at least size k. Second,
networks that have high tightness values require a high level of preprocessing to
be backtrack-free.
Computational experiments we performed on random problems with binary
constraints provide evidence that networks with constraints with high looseness
values can be hard. Random problems were generated with four parameters n, d,
p, and q, where n is the number of variables, d is the size of the domains,
p/100 is the probability that there is a binary constraint between two variables,
and q/100 is the probability that a pair in the Cartesian product of the domains
is in the constraint. The time to find
one solution was measured. In the experiments we discovered that, given that
the number of variables and the domain size were fixed, the hardest problems
were found when the constraints were as loose as possible without degenerating
into the trivial constraint where all tuples are allowed. In other words, we
found that the hardest region of loose constraints is harder than the hardest
region of tight constraints. That networks with loose constraints would turn
out to be the hardest of these random problems is somewhat counter-intuitive,
as individually the constraints are easy to satisfy. These experimental results
run counter to Tsang's [25, p.50] intuition that a single solution of a loosely
constrained problem "can easily be found by simple backtracking, hence such
problems are easy," and that tightly constrained problems are "harder compared
with loose problems." As well, these hard loosely-constrained problems are not
amenable to preprocessing by low-order local consistency algorithms, since, as
Theorem 4 states, they possess a high level of inherent local consistency. This
runs counter to Williams and Hogg's [30, p.476] speculation that preprocessing
will have the most dramatic effect in the region where the problems are the
hardest.
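The generation model just described can be sketched as follows (the parameter values used below are illustrative choices of ours, not the values used in the experiments):

```python
import random

def random_csp(n, d, p, q, rng=random):
    """Random binary CSP: a constraint exists between a pair of variables
    with probability p/100, and each pair of values is allowed with
    probability q/100 (large q means loose constraints)."""
    constraints = {}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p / 100:
                constraints[(i, j)] = {(a, b) for a in range(d)
                                       for b in range(d)
                                       if rng.random() < q / 100}
    return constraints

random.seed(7)
csp = random_csp(n=10, d=5, p=50, q=80)
assert all(len(rel) <= 5 * 5 for rel in csp.values())
```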
5 Conclusions
We identified two new complementary properties on the restrictiveness of the
constraints in a network: constraint tightness and constraint looseness. Constraint
tightness was used, in conjunction with the level of local consistency, in
a sufficient condition that guarantees that a solution to a network can be found
in a backtrack-free manner. The condition can be useful in applications where
a knowledge base will be queried over and over and the preprocessing costs can
be amortized over many queries. Constraint looseness was used in a sufficient
condition for local consistency. The condition is inexpensive to determine and
can be used to estimate the level of strong local consistency of a network. This
in turn can be used in deciding whether it would be useful to preprocess the
network before a backtracking search, and in deciding which local consistency
conditions, if any, still need to be enforced if we want to ensure that a solution
can be found in a backtrack-free manner.
We also showed how constraint tightness and constraint looseness are of interest
for their explanatory power, as they can be used for characterizing the
difficulty of problems formulated as constraint networks and for explaining why
some problems that are "easy" locally, are difficult globally. We showed that
when the constraints have low tightness values, networks may require less preprocessing
in order to guarantee that a solution can be found in a backtrack-free
manner and that when the constraints have high looseness values, networks may
require much more search effort in order to find a solution. As an example, the
confused n-queens problem, which has constraints with low tightness values, was
shown to be easy to solve as it is backtrack-free after enforcing only low-order
local consistency conditions. As another example, many instances of crossword
puzzles are also relatively easy, as the constraints on the words that fit each slot
in the puzzle have low tightness values (since not many words have the same
length and differ only in the last letter of the word). On the other hand, graph
coloring and scheduling problems involving resource constraints can be quite
hard, as the constraints are inequality constraints and thus have high looseness
values.
Acknowledgements
The authors wish to thank Peter Ladkin and an anonymous referee for their
careful reading of a previous version of the paper and their helpful comments.
--R
Where the really hard problems are.
An optimal k-consistency algorithm
Characterising tractable constraints.
Enhancement schemes for constraint processing: Backjump- ing
From local to global consistency.
Experimental evaluation of preprocessing techniques in constraint satisfaction problems.
Tree clustering for constraint networks.
Local and global relational consistency.
Synthesizing constraint expressions.
A sufficient condition for backtrack-free search
A sufficient condition for backtrack-bounded search
Experimental case studies of backtrack vs. waltz-type vs
Increasing tree search efficiency for constraint satisfaction problems.
A test for tractability.
Fast parallel constraint satisfaction.
Personal Communication.
Consistency in networks of relations.
The logic of constraint satisfaction.
Solving large-scale constraint satisfaction and scheduling problems using a heuristic repair method
Networks of constraints: Fundamental properties and applications to picture processing.
Constraint satisfaction algorithms.
Hybrid algorithms for the constraint satisfaction problem.
On the complexity of achieving k-consistency
Foundations of Constraint Satisfaction.
Principles of Database and Knowledge-Base Systems
On the inherent level of local consistency in constraint net- works
Constraint tightness versus global consis- tency
On the minimality and global consistency of row-convex constraint networks
Using deep structure to locate hard problems.
--TR
A sufficient condition for backtrack-bounded search
Principles of database and knowledge-base systems, Vol. I
Network-based heuristics for constraint-satisfaction problems
Tree clustering for constraint networks (research note)
An optimal <italic>k</>-consistency algorithm
Enhancement schemes for constraint processing: backjumping, learning, and cutset decomposition
Constraint satisfaction algorithms
From local to global consistency
The logic of constraint satisfaction
Fast parallel constraint satisfaction
On the inherent level of local consistency in constraint networks
Characterising tractable constraints
Experimental evaluation of preprocessing algorithms for constraint satisfaction problems
On the minimality and global consistency of row-convex constraint networks
Local and global relational consistency
A Sufficient Condition for Backtrack-Free Search
Synthesizing constraint expressions
On the Complexity of Achieving K-Consistency
--CTR
Yuanlin Zhang , Roland H. C. Yap, Erratum: P. van Beek and R. Dechter's theorem on constraint looseness and local consistency, Journal of the ACM (JACM), v.50 n.3, p.277-279, May
Yuanlin Zhang , Roland H. C. Yap, Consistency and set intersection, Eighteenth national conference on Artificial intelligence, p.971-972, July 28-August 01, 2002, Edmonton, Alberta, Canada
Amnon Meisels , Andrea Schaerf, Modelling and Solving Employee Timetabling Problems, Annals of Mathematics and Artificial Intelligence, v.39 n.1-2, p.41-59, September
Paolo Liberatore, Monotonic reductions, representative equivalence, and compilation of intractable problems, Journal of the ACM (JACM), v.48 n.6, p.1091-1125, November 2001
Moshe Y. Vardi, Constraint satisfaction and database theory: a tutorial, Proceedings of the nineteenth ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems, p.76-85, May 15-18, 2000, Dallas, Texas, United States
Henry Kautz , Bart Selman, The state of SAT, Discrete Applied Mathematics, v.155 n.12, p.1514-1524, June, 2007
Keywords: relations; constraint networks; constraint-based reasoning; constraint satisfaction problems; local consistency
263881
Effective erasure codes for reliable computer communication protocols.
Reliable communication protocols require that all the intended recipients of a message receive the message intact. Automatic Repeat reQuest (ARQ) techniques are used in unicast protocols, but they do not scale well to multicast protocols with large groups of receivers, since segment losses tend to become uncorrelated thus greatly reducing the effectiveness of retransmissions. In such cases, Forward Error Correction (FEC) techniques can be used, consisting in the transmission of redundant packets (based on error correcting codes) to allow the receivers to recover from independent packet losses. Despite the widespread use of error correcting codes in many fields of information processing, and a general consensus on the usefulness of FEC techniques within some of the Internet protocols, very few actual implementations exist of the latter. This probably derives from the different types of applications, and from concerns related to the complexity of implementing such codes in software. To fill this gap, in this paper we provide a very basic description of erasure codes, describe an implementation of a simple but very flexible erasure code to be used in network protocols, and discuss its performance and possible applications. Our code is based on Vandermonde matrices computed over GF(p^r), can be implemented very efficiently on common microprocessors, and is suited to a number of different applications, which are briefly discussed in the paper. An implementation of the erasure code shown in this paper is available from the author, and is able to encode/decode data at speeds up to several MB/s running on a Pentium 133.
Introduction
Computer communications generally require reliable 1 data transfers among the communicating
parties. This is usually achieved by implementing reliability at different levels in the protocol
This paper appears on ACM Computer Communication Review, Vol.27, n.2, Apr.97, pp.24-36.
† The work described in this paper has been supported in part by the Commission of European Communities,
Esprit Project LTR 20422 - "Moby Dick, The Mobile Digital Companion (MOBYDICK)", and in part by the
Ministero dell'Universit'a e della Ricerca Scientifica e Tecnologica of Italy.
1 Throughout this paper, with reliable we mean that data must be transferred with no errors and no losses.
stack, either on a link-by-link basis (e.g. at the link layer), or using end-to-end protocols at the
transport layer (such as TCP), or directly in the application.
ARQ (Automatic Repeat reQuest) techniques are generally used in unicast protocols: missing
packets are retransmitted upon timeouts or explicit requests from the receiver. When
the bandwidth-delay product approaches the sender's window, ARQ might result in reduced
throughput. Also, in multicast communication protocols ARQ might be highly inefficient because
of uncorrelated losses at different (groups of) receivers.
In these cases, Forward Error Correction (FEC) techniques, possibly combined with ARQ,
become useful: the sender prevents losses by transmitting some amount of redundant
information, which allows the reconstruction of missing data at the receiver without further interactions.
Besides reducing the time needed to recover the missing packets, such an approach generally
simplifies both the sender and the receiver since it might render a feedback channel unnecessary;
also, the technique is attractive for multicast applications since different loss patterns can be
recovered from using the same set of transmitted data.
FEC techniques are generally based on the use of error detection and correction codes. These
codes have been studied for a long time and are widely used in many fields of information process-
ing, particularly in telecommunications systems. In the context of computer communications,
error detection is generally provided by the lower protocol layers which use checksums (e.g.
Cyclic Redundancy Checksums (CRCs)) to discard corrupted packets. Error correcting codes
are also used in special cases, e.g. in modems, wireless or otherwise noisy links, in order to make
the residual error rate comparable to that of dedicated, wired connections. After such link layer
processing, the upper protocol layers have mainly to deal with erasures, i.e. missing packets
in a stream. Erasures originate from uncorrectable errors at the link layer (but those are not
frequent with properly designed and working hardware), or, more frequently, from congestion in
the network which causes otherwise valid packets to be dropped due to lack of buffers. Erasures
are easier to deal with than errors since the exact position of missing data is known.
Recently, many applications have been developed which use multicast communication. Some
of these applications, e.g. audio or videoconferencing tools, tolerate segment losses with a relatively
graceful degradation of performance, since data blocks are often independent of each
other and have a limited lifetime. Others, such as electronic whiteboards or diffusion of circular
information over the network ("electronic newspapers", distribution of software, etc), have instead
more strict requirements and require reliable delivery of all data. Thus, they would greatly
benefit from an increased reliability in the communication.
Despite an increased need, and a general consensus on their usefulness [4, 10, 14, 19] there
are very few Internet protocols which use FEC techniques. This is possibly due to the existence
of a gap between the telecommunications world, where FEC techniques have been first studied
and developed, and the computer communications world. In the former, the interest is focused
on error correcting codes, operating on relatively short strings of bits and implemented on
dedicated hardware; in the latter, erasure codes are needed, which must be able to operate
on packet-sized data objects, and need to be implemented efficiently in software using general-purpose
processors.
In this paper we try to fill this gap by providing a basic description of the principles of
operation of erasure codes, presenting an erasure code which is easy to understand, flexible and
efficient to implement even on inexpensive architectures, and discussing various issues related
to its performance and possible applications. The paper is structured as follows: Section 2
gives a brief introduction to the principles of operation of erasure codes. Section 3 describes
our code and discusses some issues related to its implementation on general purpose processors.
Finally, Section 4 briefly shows a number of possible applications in computer communication
protocols, both in unicast and multicast protocols. A portable C implementation of the erasure
code described in this paper is available from the author [16].
An introduction to erasure codes
In this section we give a brief introduction to the principle of operation of erasure codes. For a
more in-depth discussion of the problem the interested reader is referred to the copious literature
on the subject [1, 11, 15, 20]. In this paper we only deal with the so-called linear block codes as
they are simple and appropriate for the applications of our interest.
The key idea behind erasure codes is that k blocks of source data are encoded at the sender
to produce n blocks of encoded data, in such a way that any subset of k encoded blocks suffices
to reconstruct the source data. Such a code is called an (n, k) code and allows the receiver
to recover from up to n − k losses in a group of n encoded blocks. Figure 1 gives a graphical
representation of the encoding and decoding process.
Figure 1: A graphical representation of the encoding/decoding process.
Within the telecommunications world, a block is usually made of a small number of bits. In
computer communications, the "quantum" of information is generally much larger - one packet
of data, often amounting to hundreds or thousands of bits. This changes somewhat the way an
erasure code can be implemented. However, in the following discussion we will assume that a
block is a single data item which can be operated on with simple arithmetic operations. Large
packets can be split into multiple data items, and the encoding/decoding process is applied by
taking one data item per packet.
An interesting class of erasure codes is that of linear codes, so called because they can be
analyzed using the properties of linear algebra. Let x = x_0 ... x_{k-1} be the source data and G an
n × k matrix; then an (n, k) linear code can be represented by

    y = Gx

for a proper definition of the matrix G. Assuming that k components of y are available at the
receiver, the source data can be reconstructed by using the k equations corresponding to the known
components of y. We call G' the k × k matrix representing these equations (Figure 2). This of
course is only possible if these equations are linearly independent, and, in the general case, this
holds if any k × k matrix extracted from G is invertible.
If the encoded blocks include a verbatim copy of the source blocks, the code is called a
systematic code. This corresponds to including the identity matrix I_k in G. The advantage of
a systematic code is that it simplifies the reconstruction of the source data in case one expects
very few losses.
Figure 2: The encoding/decoding process in matrix form, for a systematic code (the top k rows
of G constitute the identity matrix I_k). y' and G' correspond to the grey areas of the vector
and matrix on the right.
2.1 The generator matrix
G is called the generator matrix of the code, because any valid y is a linear combination of
columns of G. Since G is an n × k matrix with rank k, any subset of k encoded blocks should
convey information on all the k source blocks. As a consequence, each column of G can have
at most k − 1 zero elements. In the case of a systematic code, G contains the identity matrix
I_k, which uses up all the allowed zero elements. Thus the remaining rows of the matrix must all contain
non-zero elements.
Strictly speaking, the reconstruction process needs some additional information - namely,
the identity of the various blocks - to reconstruct the source data. However, this information is
generally derived by other means and thus might not need to be transmitted explicitly. Also,
in the case of computer communications, this additional information has a negligible size when
compared to the size of a packet.
There is however another source of overhead which cannot be neglected, and this is the
precision used for computations. If each x_i is represented using b bits, representing the y_i's
requires more bits if ordinary arithmetic is used. In fact, if each coefficient g_ij of G is represented
on b' bits, the y_i's need b + b' + ceil(log2 k) bits to be represented without loss of precision. That is a
significant overhead, since those excess bits must be transmitted to reconstruct the source data.
Rounding or truncating the representation of the y_i's would prevent a correct reconstruction of
the source data.
2.2 Avoiding roundings: computations in finite fields
Luckily the expansion of data can be overcome by working in a finite field. Roughly speaking,
a field is a set in which we can add, subtract, multiply and divide, in much the same way we
are used to working with integers (the interested reader is referred to a textbook on algebra [6]
or coding theory (e.g. [1, Ch.2 and Ch.4]), where a more formal presentation of finite fields is
provided; a relatively simple-to-follow presentation is also given in [2, Chap.2]). A field is closed
under addition and multiplication, which means that the result of sums and products of field
elements are still field elements. A finite field is characterized by having a finite number of
elements. Most of the properties of linear algebra apply to finite fields as well.
The main advantage of using a finite field, for our purposes, lies in the closure property
which allows us to make exact computations on field elements without requiring more bits to
represent the results. In order to work on a finite field, we need to map our data elements into
field elements, operate upon them according to the rules of the field, and then apply the inverse
mapping to reconstruct the desired results.
2.2.1 Prime fields
Finite fields have been shown to exist with q = p^r elements, where p is a prime number. Fields
with p elements, with p prime, are called prime fields or GF(p), where GF stands for Galois
Field. Operating in a prime field is relatively simple, since GF(p) is the set of integers from 0 to
p − 1 under the operations of addition and multiplication modulo p. From the point of view of
a software implementation, there are two minor difficulties in using a prime field: first, with the
exception of p = 2, the p values of a field element do not use all the configurations of the
ceil(log2 p) bits needed for their representation. This causes a
slight inefficiency in the encoding of data, and possibly an even larger inefficiency in operating
on these numbers since the operand sizes might not match the word size of the processor. The
second problem lies in the need of a modulo operation on sums and, especially, multiplications.
The modulo is an expensive operation since it requires a division. Both problems, though, can
be minimized by a careful choice of p (e.g. a prime close to a power of two).
2.2.2 Extension fields
Fields with q = p^r elements, with p prime and r > 1, are called extension fields or GF(p^r).
The sum and product in extension fields are not done by taking results modulo q. Rather, field
elements can be considered as polynomials of degree r − 1 with coefficients in GF(p). The sum
operation is just the sum between coefficients, modulo p; the product is the product between
polynomials, computed modulo an irreducible polynomial (i.e. one without divisors in GF(p^r))
of degree r, and with coefficients reduced modulo p.
Despite the apparent complexity, operations on extension fields can become extremely simple
in the case p = 2. In this case, elements of GF(2^r) require exactly r bits to be represented, a
property which simplifies the handling of data. Sum and subtraction become the same operation
(a bit-by-bit sum modulo 2), which is simply implemented with an exclusive OR.
2.2.3 Multiplications and divisions
An interesting property of prime or extension fields is that there exists at least one special
element, usually denoted by α, whose powers generate all non-zero elements of the field. As an
example, a generator for GF(5) is 2, whose powers (starting from 2^0) are 1, 2, 4, 3, 1, 2, ...: the powers
of α repeat with a period of length q − 1.
This property has a direct consequence on the implementation of multiplication and division.
In fact, we can express any non-zero field element x as x = α^{l_x}, where l_x can be considered as the
"logarithm" of x, and multiplication and division can be computed using logarithms, as follows:

    xy = α^{|l_x + l_y|_{q−1}},    x/y = α^{|l_x − l_y|_{q−1}}

where |a|_b stands for "a modulo b". If the number of field elements is not too large, tables can be
built off line to provide the "logarithm", the "exponential" and the multiplicative inverse of each
non-zero field element. In some cases, it can be convenient to provide a table for multiplications
as well. Using the above techniques, operations in extension fields with p = 2 can be extremely
fast and simple to implement.
2.3 Data recovery
Recovery of the original data is possible by solving the linear system

    y' = G'x

where x is the source data and y' is a subset of k components of y available at the
receiver. Matrix G' is the subset of rows from G corresponding to the components of y'.
It is useful to solve the problem in two steps: first G' is inverted, then x = G'^{-1} y'.
This is because the cost of matrix inversion can be amortized over all the elements which are
contained in a packet, becoming negligible in many cases.
The inversion of G' can be done with the usual techniques, by replacing division with multiplication
by the inverse field element. The cost of inversion is O(kl^2), where l <= min(k, n − k)
is the number of data blocks which must be recovered (very small constants are involved in our
use of the O() notation).
Reconstructing the l missing data blocks has a total cost of O(lk) operations. Provided
sufficient resources, it is not impossible to reconstruct the missing data in constant time, although
this would be pointless since just receiving the data requires O(k) time. Many implementations
of error correcting codes use dedicated hardware (either hardwired, or in the form of a dedicated
processor) to perform data reconstruction with the required speed.
3 An erasure code based on Vandermonde matrices
A simple yet effective way to build the generator matrix, G, consists in using coefficients of the
form g_ij = x_i^{j−1}, where the x_i's are elements of GF(p^r). Such matrices are commonly known as Vandermonde
matrices, and their determinant is

    det V = prod_{i<j} (x_j − x_i)

If all x_i's are different, the matrix has a non-null determinant and it is invertible. Provided
the field is large enough (q >= n), matrices can be constructed which satisfy the properties required
for G. Such matrices can be extended with the identity matrix I_k to obtain a suitable generator
for a systematic code.
Note that there are some special cases of the above code which are of trivial implementation.
As an example, an (n, 1) code simply requires the same data to be transmitted multiple times,
hence there is no overhead involved in the encoding. Another simple case is that of a systematic
(k + 1, k) code, where the only redundant block is simply the sum (as defined in GF(p^r)) of
the k source data blocks, i.e. a simple XOR in case p = 2. Unfortunately, an (n, 1) code has a
low rate and is relatively inefficient compared to codes with higher values of k. Conversely, a
(k + 1, k) code is only useful for small amounts of losses. So, in many cases there is a real need
for codes with k > 1 and n > k + 1.
We have written a portable C implementation of the above code [16] to determine its performance
when used within network protocols. Our code supports a range of values of r and
arbitrary packet sizes. The maximum efficiency can be achieved using GF(2^8), since
this allows most operations to be executed using table lookups. The generator matrix has the
form indicated above, with x_i = α^i. We can build up to 2^r − 1 rows in this way, which makes
it possible to construct codes with n up to 2^r − 1. In our experiments we have used
a packet size of 1024 bytes.
3.1 Performance
Using a systematic code, the encoder takes groups of k source data blocks to produce n − k
redundant blocks. This means that every source data block is used n − k times, and we can
expect the encoding time to be a linear function of n − k. It is probably more practical to
measure the time to produce a single redundant block, which depends on the single parameter k. It
is easy to derive that this time is (for sufficiently large packets) linearly dependent on k, hence
we can approximate it as

    t_encoding = k / c_e

where the constant c_e depends on the speed of the system. The above relation only tells us how
fast we can build redundant packets. If we use a systematic code, sending k blocks of source
data requires the actual computation of n − k redundant blocks. Thus, the actual encoding
speed becomes

    encoding speed = c_e / (n − k)
Note that the maximum loss rate that we can sustain is (n − k)/n, which means that, for a given
maximum loss rate, the encoding speed also decreases with n.
Decoding costs depend on l <= min(k, n − k), the actual number of missing source blocks.
Although matrix inversion has a cost O(kl^2), this cost is amortized over the size s of a packet;
we have found that, for reasonably sized packets (say above 256 bytes), and k up to 32, the cost
of matrix inversion becomes negligible compared to the cost of packet reconstruction, which is
O(lk). Also for the reconstruction process it is more practical to measure the overall cost per
reconstructed block, which is similar to the encoding cost. Then, the decoding speed can be
written as

    decoding speed = c_d / l

with the constant c_d slightly smaller than c_e because of some additional overheads (including
the already mentioned matrix inversion).
The accuracy of the above approximations has been tested on our implementation using
a packet size of 1024 bytes and different values of k and l; the results are shown in Table 1
(more detailed performance data can be found in [17]). Running times have been determined
using a Pentium 133 running FreeBSD, with our code compiled with gcc -O2 and no special
optimizations.
These experimental results show that the approximation is sufficiently accurate. Also, the
values of c_e and c_d are sufficiently high to allow these codes to be used in a wide range of
applications, depending on the actual values of k and l. The reader will notice that, for
a given k, larger values of l (which we have set equal to n − k) yield slightly better performance
both in encoding and decoding. On the encoder side this is exclusively due to the effect of
caching: since the same source data are used several times to compute multiple redundant blocks,
successive computations find the operands already in cache and hence run slightly faster. For
the decoder, this derives from the amortization of matrix inversion costs over a larger number
of reconstructed blocks^2.

Table 1: Encoding/decoding times (in microseconds per block, and the corresponding speed in
MB/s, for both encoding and decoding) for different values of k and n − k on a Pentium 133
running FreeBSD.
Note that in many cases data exchanged over a network connection are already subject
to a small number of copies (e.g. from kernel to user space) and accesses to compute check-
sums. Thus, part of the overhead for reconstructing missing data might be amortized by using
integrated layer processing techniques [3].
3.2 Discussion
The above results show that a software implementation of erasure codes is computationally
expensive, but on today's machines it can be safely afforded, with little overhead, for low-to-
medium speed applications, up to the 100 KB/s range. This covers a wide range of real-time
applications including network whiteboards and audio/video conferencing tools, and can even
be used to support browsing-type applications. More bandwidth-intensive applications can still
make good use of software FEC techniques, with a careful tuning of operating parameters
(specifically, k and n − k in our discussion) or provided sufficient processing power is available. The
current trend of increasing processing speeds, and the availability of Symmetric MultiProcessor
(SMP) desktop computers suggest that, as time goes by, there will likely be plenty of processing
power to support these computations (we have measured values for c d and c e in the 30MB/s
range on faster machines based on PentiumPRO 200 and UltraSparc processors). Finally, note
that in many cases both encoding and decoding can be done offline, so many non-real-time
application can use this feature and apply FEC techniques while communicating at much higher
speeds than their encoding/decoding ability.
^2 and a small overhead existing in our implementation for non-reconstructed blocks, which are still copied in
the reconstruction process.
4 Applications
Depending on the application, ARQ and FEC can be used separately or together, and in the
latter case either on different layers or in a combined fashion. In general, there is a tradeoff
between the improved reliability of FEC-based protocols and their higher computational costs,
and this tradeoff often dictates the choice.
It is beyond the scope of this paper to make an in-depth analysis of the relative advantages
of FEC, ARQ or combinations thereof. Such studies are present in some papers in the literature
(see, for example, [7, 12, 21]). In this section we limit our interest to computer networks, and
present a partial list of applications which could benefit from the use of an encoding technique
such as the one described in this paper. The bandwidth, reliability and congestion control
requirements of these applications vary widely.
Losses in computer networks mainly depend on congestion, and congestion is the network
analogue of noise (or interference) in telecommunications systems. Hence, FEC techniques based
on a redundant encoding give us similar types of advantages, namely increased resilience to noise
and interference. Depending on the amount of redundancy, the residual packet loss rate can be
made arbitrarily small, to the point that reliable transfers can be achieved without the need for
a feedback channel. Or, one might just be interested in a reduction of the residual loss rate, so
that performance is generally improved but feedback from the receiver is still needed.
4.1 Unicast applications
In unicast applications, reducing the amount of feedback necessary for reliable delivery is generally
useful to overcome the high delays incurred with ARQ techniques in the presence of long
delay paths. Also, these techniques can be used in the presence of asymmetrical communication
links. Two examples are the following:
• forward error recovery on long delay paths. TCP communications over long fat pipes
suffer badly from random packet losses because of the time needed to get feedback from
the receiver. Selective acknowledgements [13] can help improve the situation but only
after the transmit window has opened wide enough, which is generally not true during
connection startup and/or after even a short sequence of lost packets. To overcome this
problem it might be useful to allocate (possibly adaptively, depending on the actual loss
rate) a small fraction of the bandwidth to send redundant packets. The sender could
compute a small number (1-2) of redundant packets on every group of k packets, and
send these packets at the end of the group. In case of a single or double packet loss the
receiver could defer the transmission of the dup ack until the expiration of a (possibly
fast) timeout 3 . If, by that time, the group is complete and some of the redundant packets
are available, then the missing one(s) can be recovered without the need for an explicit
retransmission (this would be equivalent to a fast retransmit). Otherwise, the usual
congestion avoidance techniques can be adopted. A variant of RFC1323 timestamps[5]
3 alternatively, the sender could delay retransmissions in the hope that the lost packet can be recovered using
the redundant packets.
can be used to assign sequence numbers to packets thus allowing the receiver to determine
the identity of received packets and perform the reconstruction process (TCP sequence
numbers are not adequate for the purpose).
• power saving in communication with mobile equipment. Mobile devices usually
adopt wireless communication and have a limited power budget. This results in the need
to reduce the number of transmissions. A redundant encoding of data can practically
remove the need for acknowledgements while still allowing for reliable communications. As
an example, a mobile browser can limit its transmissions to requests only, while incoming
responses need not be explicitly ACKed (as is done currently with HTTP over
TCP) unless severe losses occur.
4.2 Multicast applications
The main field of application of redundant encoding is probably in multicast applications. Here,
multiple receivers can experience losses on different packets, and ensuring reliability via individual
repairs might become extremely expensive. A second advantage derives from the aforementioned
reduced need for handling a feedback channel from receivers. Reducing the amount of feedback
is an extremely useful feature since it allows protocols to scale well to large numbers of receivers.
Applications not depending on a reliable delivery can still benefit from a redundant encoding,
because an improved reliability in the transmission allows for more aggressive coding
techniques (e.g. compression) which in turn might result in a more effective usage of the available
bandwidth.
A list of multicast applications which would benefit from the use of a redundant encoding
follows.
• videoconferencing tools. A redundant encoding with small values of k and n − k
can provide an effective protection against losses in videoconferencing applications. By
reducing the effective loss rate one can even use a more efficient encoding technique (e.g.
fewer "I" frames in MPEG video) which provides a further reduction in the bandwidth.
The PET [9] group at Berkeley has done something similar for MPEG video.
• reliable multicast for groupware. A redundant encoding can be used to greatly reduce
the need for retransmissions ("repairs") in applications needing a reliable multicast. One
such example is given by the "network whiteboard" type of applications, where reliable
transfer is needed for objects such as Postscript files or compound drawings.
• one-to-many file transfer on LANs. Classrooms using workstations often use this
pattern of access to files, either in the booting process (all nodes download the kernel or
startup files from a server) or during classes (where students download almost simultaneously
the same documents or applications from a centralized server). While these problems
can be partly overcome by preloading the software, centralized management is much more
convenient and the use of a multicast-FTP type of application can make the system much
more scalable.
• one-to-many file transfer on Wide Area Networks. There are several examples
of such an application. Some popular Web servers are likely to have many simultaneous
transfers of the same, large, piece of information (e.g. popular software packages). The
same applies to, say, a newspaper which is distributed electronically over the network, or
video-on-demand type of applications. Unlike local area multicast-FTP, receivers connect
to the server at different times, and have different bandwidths and loss rates, and significant
congestion control issues exist [8]. By using the encoding presented here, source data can be
encoded and transmitted with a very large redundancy (n >> k). Using such a technique,
a receiver basically needs only to collect a sufficient number (k) of packets per block to
reconstruct the original file. The RMDP protocol [18] has been designed and implemented
by the author using the above technique.
Acknowledgements
The author wishes to thank Phil Karn for discussions which led to the development of the code
described in this paper, and an anonymous referee for comments on an early version of this
paper.
References
"Theory and Practice of Error Control Codes"
"Fast Algorithms for Digital Signal Processing"
"Architectural Considerations for a New Generation of Proto- cols"
"The Case for packet level FEC"
"RFC1323: TCP Extensions for High Performance"
"Algebra"
"Delay Bounded Type-II Hybrid ARQ for Video Transmission over Wireless Networks"
"Receiver-driven Layered Multicast"
"Priority Encoding Transmission"
"Reliable Broadband Communication Using A Burst Erasure Correcting Code"
"Error Control Coding: Fundamentals and Applications"
"Automatic-repeat-request error-control schemes"
"RFC2018: TCP Selective Acknowledgement Option"
"Reliable Multicast: Where to use Forward
"Introduction to Error-Correcting Codes"
Sources for an erasure code based on Vandermonde matrices.
"On the feasibility of software FEC"
"A Reliable Multicast data Distribution Protocol based on software FEC techniques"
"Packet recovery in high-speed networks using coding and buffer management"
"Introduction to Coding Theory"
"A modified selective-repeat type-II hybrid ARQ system and its performance analysis"
Anirban Mahanti , Derek L. Eager , Mary K. Vernon , David Sundaram-Stukel, Scalable on-demand media streaming with packet loss recovery, ACM SIGCOMM Computer Communication Review, v.31 n.4, p.97-108, October 2001
Shengjie Zhao , Zixiang Xiong , Xiaodong Wang, Optimal Resource Allocation for Wireless Video over CDMA Networks, IEEE Transactions on Mobile Computing, v.4 n.1, p.56-67, January 2005
Anirban Mahanti , Derek L. Eager , Mary K. Vernon , David J. Sundaram-Stukel, Scalable on-demand media streaming with packet loss recovery, IEEE/ACM Transactions on Networking (TON), v.11 n.2, p.195-209, April
Rajesh Krishnan , James P. G. Sterbenz , Wesley M. Eddy , Craig Partridge , Mark Allman, Explicit transport error notification (ETEN) for error-prone wireless and satellite networks, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.46 n.3, p.343-362, 22 October 2004
Patrick McDaniel , Atul Prakash, Enforcing provisioning and authorization policy in the Antigone system, Journal of Computer Security, v.14 n.6, p.483-511, November 2006
Amitanand S. Aiyer , Lorenzo Alvisi , Allen Clement , Mike Dahlin , Jean-Philippe Martin , Carl Porth, BAR fault tolerance for cooperative services, ACM SIGOPS Operating Systems Review, v.39 n.5, December 2005
Ramakrishna Kotla , Lorenzo Alvisi , Mike Dahlin, SafeStore: a durable and practical storage system, 2007 USENIX Annual Technical Conference on Proceedings of the USENIX Annual Technical Conference, p.1-14, June 17-22, 2007, Santa Clara, CA
B. Baurens, Groupware, Cooperative environments for distributed: the distributed systems environment report, Springer-Verlag New York, Inc., New York, NY, 2002 | FEC;reliable multicast;erasure codes |
263883 | Possibilities of using protocol converters for NIR system construction. | Volumes of information available from network information services have been increasing considerably in recent years. Users' satisfaction with an information service depends very much on the quality of the network information retrieval (NIR) system used to retrieve information. The construction of such a system involves two major development areas: user interface design and the implementation of NIR protocols.In this paper we describe and discuss the possibilities of using formal methods of protocol converter design to construct the part of an NIR system client that deals with network communication. If this approach is practicable it can make implementation of NIR protocols more reliable and amenable to automation than traditional designs using general purpose programming languages. This will enable easy implementation of new NIR protocols custom-tailored to specialized NIR services, while the user interface remains the same for all these services.Based on a simple example of implementing the Gopher protocol client we conclude that the known formal methods of protocol converter design are generally not directly applicable for our approach. However, they could be used under certain circumstances when supplemented with other techniques which we propose in the discussion. | Introduction
Important terms used in this paper are illustrated in
Fig. 1. An information provider makes some information
available to a user through a network by means of
a network information retrieval (NIR) system, thereby
providing an information service.

Figure 1: Network information retrieval (NIR) system.

An NIR system consists
of the provider's part (the server) and the user's
part (the client) communicating via an (NIR) protocol.
We use the term information retrieval to denote passing
information (a document or other form of information)
from a repository to a user or from a user to a
repository in general. We concentrate, however, on passing
documents from a repository to a user. Some authors
use the term information retrieval to denote a form of
getting information from a repository where searching or
some more intelligent query processing is involved [9, 15]
or distinguish between document retrieval and information
retrieval (retrieval of documents and information in
other forms) [14].
Usually, a user wants to use a lot of information services
available in order to get as much information relevant to
his or her needs as possible. At the same time, a user
wants to use all information services in a similar way.
Good consistency of user interfaces for accessing various
information services and for presenting documents of
various types will decrease the cognition overhead - the
additional mental effort that a user has to make concerning
manipulation with a user interface, navigation, and
orientation [25]. Consistency can be improved by applying
standards for "look and feel" of user interface ele-
ments, by using gateways between information services,
and, most importantly, by using integrated NIR tools.
An integrated NIR tool is a multiprotocol client designed
to communicate directly with servers of various
information services. This approach has become very
popular recently, since it provides a more consistent environment
than a set of individual clients and does not have
the disadvantages of gateways in the matter of network
communication overhead and limited availability.
Nevertheless, current integrated NIR tools often suffer
from important drawbacks:
- They support a limited number of NIR protocols, and
adding a new protocol is not straightforward.
- Implementation of NIR protocols is not performed
systematically, which means that it is not possible
to make any formal verification of such a system.
- Their portability is limited, and porting to another
type of window system is laborious.
Constructing such tools involves two major development
areas: user interface design and implementation
of NIR protocols. In the next two chapters we will
briefly describe some of the techniques and formalisms
used in these development areas. Then we will mention
the role of protocol conversion in network communication
and shortly describe three formal methods of protocol
converter design. In the rest of the paper, we will
demonstrate the application of the protocol conversion
techniques to a simple protocol, the Gopher protocol.
Despite the acceptance of these techniques, our analysis
demonstrates the negative conclusion that these techniques
alone are inadequate for a protocol even as simple
as the Gopher protocol. We conclude with some suggestions
for enhancements.
2 User Interface Design and Layered Models
When designing a user interface various models and formal
description techniques are used. An important class
of models is formed by layered models in which communication
between a user and an application is divided into
several layers. Messages exchanged at higher layers are
conveyed by means of messages exchanged at lower layers.
An example of this kind of model is the seven-layer model
described by Nielsen in [16] shown in Fig. 2 along with
an indication of what kind of information is exchanged
at each layer.
Various formalisms can be used to describe the behaviour
of the system at individual layers. Tradition-
ally, most attention has been paid to the syntactic layer.
Among the formal description techniques used at this
layer the most popular are context-free grammars, finite-state
automata, and event-response languages. Taylor
[23, 24] proposed a layered model where different layers
are conceived as different levels of abstraction in
communication between the user and the application.
Lexical, syntactic, and semantic characteristics are spread
within all levels. It appears that the layered structure is
natural to human-computer interaction and implementations
of user interfaces should take this observation into
consideration.

Figure 2: Seven-layer model for human-computer interaction
(goal, task, semantic, syntactic, lexical, alphabetic, and
physical layers, with example exchanged units at each layer).
3 Network architectures and layered
models
OSI model The ISO Open System Interconnection
(OSI) model [6] divides a network architecture into seven
layers (Fig. 3). Each layer provides one or more services
to the layer above it. Within a layer, services are implemented
by means of protocols.
Many current network architectures are based on OSI
model layering. Although not all layers are always implemented
and functions of multiple layers can be grouped
into one layer, dividing communication functionality into
layers remains the basic principle.
Protocol specification Formal techniques for protocol
specification can be classified into three main cate-
gories: transition models, programming languages, and
combinations of the first two.
Transition models are motivated by the observation
that protocols consist largely of processing commands
from the user (from the higher level), message arrivals
(from the lower layer), and internal timeouts.
In this paper, we will use finite state machines
(graphically depicted as state-transition diagrams), which
seem to be the most convenient specification technique for
our application domain.

Figure 3: OSI model.

Figure 4: The alternating-bit (AB) protocol.
An example of the state-transition diagram depicting
the alternating-bit (AB) protocol [1] is shown in Fig. 4.
Data and positive acknowledgment messages exchanged
between two entities are stamped with modulo 2 sequence
numbers. d0 and d1 denote data messages, a0 and a1
denote acknowledgment messages, a plus sign denotes receiving
a message from the other party, a minus sign denotes
sending a message to the other party, a double plus
sign (++) denotes getting a message from the user (ac-
ceptance), a double minus sign (--) denotes putting a
message to the user (delivery), ls and tm denote loss of
a data message and timeout, respectively.
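The notation above lends itself directly to a table-driven implementation. Below is a minimal sketch (sender side only, with invented state numbering and simplified loss handling; not the combined machine of Fig. 4) of the alternating-bit sender:

```python
# Hypothetical sketch: the sender side of the alternating-bit protocol
# as a finite-state machine.  Each transition is keyed by (state, event)
# and yields (message to emit or None, next state).  Event names mirror
# the paper's notation: "++d" is acceptance of a data message from the
# user, "+a0"/"+a1" are received acknowledgments, "tm" is a timeout.
SENDER = {
    (0, "++d"): ("-d0", 1),   # accept data, send it with sequence bit 0
    (1, "tm"):  ("-d0", 1),   # timeout: retransmit d0
    (1, "+a1"): (None, 1),    # stale acknowledgment: ignore
    (1, "+a0"): (None, 2),    # d0 acknowledged
    (2, "++d"): ("-d1", 3),   # accept data, send it with sequence bit 1
    (3, "tm"):  ("-d1", 3),   # timeout: retransmit d1
    (3, "+a0"): (None, 3),    # stale acknowledgment: ignore
    (3, "+a1"): (None, 0),    # d1 acknowledged
}

def run(events, state=0):
    """Feed a sequence of events to the sender; return (emitted, state)."""
    out = []
    for ev in events:
        msg, state = SENDER[(state, ev)]
        if msg is not None:
            out.append(msg)
    return out, state

# A lost d0 causes a timeout and a retransmission before a0 arrives.
print(run(["++d", "tm", "+a0", "++d", "+a1"]))
# -> (['-d0', '-d0', '-d1'], 0)
```

The same table format extends naturally to the receiver side and, later, to the converters discussed below.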
4 Protocol Converters
In network communication we can encounter the following
problem. Consider that we have two entities P 0 and P 1
communicating by means of a protocol P (thus providing
a service S p ) and other two entities Q 0 and Q 1
communicating by means of a protocol Q (thus providing
a service S q ), see Fig. 5 a. Then we might want to
make P 0 and Q 1 communicate, thus providing a service
S similar to services S p and S q . When protocols P and
Q are compatible enough, this can be achieved by a protocol
converter C, which translates messages sent by P 0
into messages of the protocol Q, forwards them to Q 1 ,
and performs a similar translation in the other direction,
see Fig. 5 b.

Figure 5: a) Communicating entities, b) Using a protocol
converter to allow different entities to communicate.
To solve the problem of constructing a protocol converter
C, given specifications of P 0 , Q 1 , and S, several
more or less formal methods have been developed. Most
of them accept specifications of protocol entities in the
form of communicating finite-state machines. We give a
brief description of the principles used by three important
methods.
Conversion via projection An image protocol may
be derived from a given protocol by partitioning the state
set of each communicating entity; states in the same block
of the partition are considered to be indistinguishable in
the image of that entity. Suppose protocols P and Q can
be projected onto the same image protocol, say R.
R embodies some of the functionality that is common to
both P and Q. The specification of a converter can be
derived considering that the projection mapping defines
an equivalence between messages of P and Q, just as it
does for states. Finding a common image protocol with
useful properties requires a heuristic search using intuitive
understanding of the protocols. For more details,
refer to [13, 5].
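Conversion via projection can be made concrete in a few lines of code. The following is a hedged illustration (the entity, partition, and message mapping are invented toy data, not taken from the paper): computing the image of a finite-state machine under a state partition and a message mapping, where a message mapped to None disappears from the image:

```python
# Hedged sketch of the projection step: given a set of transitions,
# a partition of the states, and an equivalence mapping on messages,
# produce the image protocol entity.
def project(transitions, block, image_msg):
    """transitions: set of (state, message, next_state) triples.
    block: maps each state to its partition block.
    image_msg: maps each message to its image, or None to hide it."""
    image = set()
    for s, m, t in transitions:
        im = image_msg.get(m)
        if im is not None:
            image.add((block[s], im, block[t]))
    return image

# Toy protocol entity: an open/close handshake with an internal retry.
G = {
    (1, "+connect", 2),
    (2, "-data", 3),
    (3, "+retry", 2),   # internal detail, hidden in the image
    (3, "+close", 1),
}
blocks = {1: "idle", 2: "busy", 3: "busy"}   # states 2 and 3 merged
msgs = {"+connect": "+open", "-data": "-data",
        "+retry": None, "+close": "+close"}

print(sorted(project(G, blocks, msgs)))
# -> [('busy', '+close', 'idle'), ('busy', '-data', 'busy'), ('idle', '+open', 'busy')]
```

If two protocols project onto the same image under suitable partitions and message mappings, the message mappings induce the translation table of the converter.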
The conversion seed approach This approach was
first presented in [17]. From the service specification S,
a conversion seed X is (heuristically) constructed. The
seed X is a finite-state machine whose message set is a
subset of the union of the message sets of P 1 and Q 0 .

Figure 6: The quotient problem.

It gives a partial specification of the converter's behaviour in the
form of constraints on the order in which messages may
be sent and received by the converter. Then a three-step
algorithm [5] is run on P 1 , Q 0 , and X. If a converter C is
produced (the algorithm generates a non-empty output),
the system comprising P 0 , C, and Q 1 should be analyzed.
If this system satisfies the specification S, then C is the
desired converter. Otherwise, a different iteration with a
different seed could be performed. Unfortunately, if the
algorithm fails to produce a converter, it is hard to decide
whether the problem was in the seed X used or if there
was a hard mismatch between the P and Q protocols.
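The analysis step mentioned above amounts to exploring the joint behaviour of the communicating machines. A hedged sketch (invented toy machines, not the paper's three-step algorithm) of its basic ingredient, reachability in the synchronous product of two communicating finite-state machines:

```python
from collections import deque

def compose(A, B, start):
    """A, B: dicts mapping (state, action) -> next state, where a send
    action "-m" in one machine synchronizes with a receive "+m" in the
    other.  User-side events ("++"/"--") are out of scope here.
    Returns the reachable joint states of the synchronous product."""
    seen = {start}
    todo = deque([start])
    while todo:
        a, b = todo.popleft()
        # direction 1: A sends, B receives
        for (sa, act), ta in A.items():
            if sa == a and act.startswith("-"):
                nb = B.get((b, "+" + act[1:]))
                if nb is not None and (ta, nb) not in seen:
                    seen.add((ta, nb)); todo.append((ta, nb))
        # direction 2: B sends, A receives
        for (sb, act), tb in B.items():
            if sb == b and act.startswith("-"):
                na = A.get((a, "+" + act[1:]))
                if na is not None and (na, tb) not in seen:
                    seen.add((na, tb)); todo.append((na, tb))
    return seen

# Tiny request/reply pair: client sends req, server replies rsp.
client = {(0, "-req"): 1, (1, "+rsp"): 0}
server = {(0, "+req"): 1, (1, "-rsp"): 0}
print(sorted(compose(client, server, (0, 0))))
# -> [(0, 0), (1, 1)]
```

Checking a candidate converter against the service specification S would mean comparing the observable behaviour of such a product with S, which this sketch deliberately leaves out.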
The quotient approach Consider the problem depicted
in Fig. 6. Let A be a service specification and
let B specify one component of its implementation. The
goal is to specify another component C, which interacts
with B via its internal interface, so that the behaviour
observed at B's external interface implements the service
defined by A. This is called the quotient problem. It is
clear that Fig. 5 b depicts a form of the quotient problem:
P 0 and Q 1 correspond to B, S corresponds to A, and the
converter to be found is C. An algorithmic solution of the
quotient problem for a class of input specifications was
presented in [4, 5]. A similar problem has been discussed
in [12].
5 NIR System Design Proposal
5.1 Information Retrieval Cycle
A user wants to receive required information easily,
quickly, and in satisfactory quality. Although there are
many differences in details, the process of obtaining information
when using most NIR services can be outlined
by the information retrieval cycle shown in Fig. 7.
At each step there is an indication whose turn it is,
either user's (U) or system's (S).
First, the user has an idea about required information,
such as "I would like to get some article on X written by
Y". Next, the user has to choose an information service
and a source of information (a particular site which offers
the chosen information service) to exploit.
Then, the user has to formulate a query specifying the
required information in a form that the chosen information
service understands and that is sufficient for it to find
the information.

Figure 7: Information retrieval cycle.
Now, it is the computer system's task to find the information
and present it to the user. There are four possible
results:
- information was found and presented in satisfactory
quality
- information was found and presented, but the user
is not satisfied with it
- no information was found using the given description
- the given description was not in a form understandable
by the given information service
When some information was found and presented to
the user, but it is not to the user's satisfaction and he or
she believes that better information could be obtained,
the next iteration in the information retrieval cycle can
be exercised. According to the measure of the user's dissatisfaction
with the presented information, there are several
possible points to return to. In some cases, different
presentation of the same information would be satisfac-
tory. In other cases, the user has to reformulate the query
or choose another source of information or even another
information service.
Looking at Fig. 7, we can see that the user's role is
much easier when a NIR system logically integrates access
to various information services. This means that differences
between individual information services should be
diminished in all steps as much as possible.
Figure 8: Client as a protocol converter.

Figure 9: Client as a library and a protocol converter.
5.2 Possibilities of Employing Protocol
Converters
A client part of a NIR system may be seen as communicating
via two sets of protocols. It uses one set of protocols
to communicate with the user (see Fig. 2) and another
set of protocols to communicate with the server
(see Fig. 3). This may suggest an idea to us: can the
client be constructed as a protocol converter (or a set of
protocol converters in the case of a multiprotocol client)
using some of the formal methods of protocol converter
design? This idea is illustrated in Fig. 8.
The first question that arises is: what are the protocols
which the converter should transform? In a typical en-
vironment, all lower layer protocols up to the transport
layer are the same for all NIR services supported by the
client. It is the upper layer NIR protocols that differ,
usually based on the exchange of textual messages over a
transport network connection (e.g., FTP, Gopher proto-
col, SMTP, NNTP, HTTP). These protocols seem to be
good candidates for the protocol on a server's side of the
converter.
In the case of a command language user interface, communication
with the user can also be regarded as a protocol
based on the exchange of textual messages. This
could be a protocol on the user's side of the converter.
If the user interface uses interaction techniques like
menu hierarchy, form filling or currently the most popular
direct manipulation, it may be implemented as a library
that offers an interface in the form of a protocol similar to
that of a command language. Again, a protocol converter
could be employed as shown in Fig. 9.
While the protocols on the server's side of the converter
are given by the information services we want to
support, the protocol on the other side is up to us to
specify (if it is not directly the user - client protocol as
in the configuration shown in Fig. 8). An important decision
is the choice of the proper level of communication.
Figure 10: NIR system based on the GIR protocol.

Low level communication consisting of requests to display
user interface elements and responses about user
interaction can ease the development of the user interface but
it would make the protocol converter construction more
difficult because of a great semantic gap between the two
protocols to be converted. We will try to find a level
of communication which makes it possible to use one of
the formal methods of protocol converter design that we
mentioned before.
5.3 General Information Retrieval Protocol
Many protocols for client-server communication in current
NIR services are similar to some extent. There are
common functions that can be identified in most of them
such as a request for an information object, sending the
requested information object, sending an error message
indicating that the requested information object cannot
be retrieved, a request to modify an existing or to create
a new information object, a request to search the contents
of an information object, etc. It seems feasible that
a high-level general information retrieval (GIR) protocol
providing a high-level (GIR) service can be designed.
Such a protocol has to support all major functions of
individual NIR services. It would work with a global abstract
information space formed by the union of information
spaces of individual NIR services. This protocol
operating on information objects from the global abstract
information space would be converted by a set of protocol
converters to particular NIR protocols operating on
information objects from information spaces of concrete
services. A structure of an NIR system built around
the GIR protocol is shown in Fig. 10.
Considering the structure of protocols used in current
NIR services and respecting that the behaviour of the entity
on the left side of the GIR protocol, the user interface,
corresponds to the information retrieval cycle depicted in
Fig. 7, we can propose a very simple GIR protocol depicted
by the state transition diagrams U 0 (the client)
and U 1 (the server) shown in Fig. 11. This version is certainly
far from being complete and needs to be improved
on the basis of later experience, see Chapter 7.
The notation used corresponds to that described in
Chapter 3. A dashed transition leading to state 1 matches
the first step in the information retrieval cycle (Fig. 7).
It represents a solely mental process with no interaction
over the network and will not be considered in further
discussion. A letter u in front of a message name means
that it is a GIR protocol message (u stands for universal).
Later we will use a letter g for Gopher protocol messages
in order to distinguish them.
The user chooses both an information service and a
source of information in one step. It may be divided into
two steps but one step better corresponds to picking up
a bookmark or entering a document URL. The choice of
information service would select a matched protocol con-
verter. Sending all information to the server is delayed in
the client until the user enters it completely. This allows
backtracking in user input without having to inform the
server about it.
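A client following such a protocol can again be table-driven. The sketch below is hedged (states and message names are simplified from the cycle of Fig. 7 and invented for illustration; it is not the exact machine of Fig. 11):

```python
# Hypothetical GIR client states following the information retrieval
# cycle: choose a service and source, submit a query, then either
# present the response or return to the query step on an error.
GIR_CLIENT = {
    ("idle",       "u_inf_service"): "session",   # service + source chosen
    ("session",    "u_query"):       "waiting",   # query submitted
    ("waiting",    "u_response"):    "presenting",
    ("waiting",    "u_error"):       "session",   # reformulate the query
    ("presenting", "u_new_query"):   "session",
    ("presenting", "u_end_session"): "idle",
}

def step(state, event):
    """Advance the client machine; reject out-of-order messages."""
    try:
        return GIR_CLIENT[(state, event)]
    except KeyError:
        raise ValueError(f"protocol error: {event!r} in state {state!r}")

s = "idle"
for ev in ["u_inf_service", "u_query", "u_error", "u_query",
           "u_response", "u_end_session"]:
    s = step(s, ev)
print(s)
# -> idle
```

A table of this form is also what a converter design algorithm would consume as the specification of one side of the conversion.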
5.4 General Window System Interface
We may consider another possible usage of protocol converters
in an NIR system. There is often a need to
port such a system to several platforms - window sys-
tems. Some developers try to make their applications
more portable by performing the following steps:
1. Identify common functions of all considered window
systems.

2. Define an interface to a general window system that
implements the functions identified in step 1.

3. Implement the general window system on top of all
considered window systems.

4. Implement the application using the general window
system.

Figure 11: General information retrieval protocol.

Figure 12: Using protocol converters to construct the
general window system interface.
Some window systems are based on the client-server
model (e.g., the X Window System or NeWS). We may
try to realize step 3 above with a set of protocol converters
converting client-server protocols of considered window
systems to the protocol used by the application to
communicate with the general window system defined in
step 2. This idea is depicted in Fig. 12.
Unfortunately, this idea is hardly feasible. The client-server
protocols of today's window systems are usually
complex and differ significantly from each other (and
from a possible protocol of the general window system).
Currently known methods of protocol converter design
are suitable for protocols that are sufficiently compati-
ble. Therefore, the general window system can be more
easily implemented as a set of shim libraries built on the
top of existing window systems (see Fig. 13). Examples
of systems that use this approach are stdwin [20] and
SUIT [18].
5.5 Structure of the Proposed System
The possible structure of a multi-service NIR system that
uses shim libraries to adapt to various window systems
and a set of protocol converters between the GIR protocol
and individual NIR protocols is depicted in Fig. 14.
The gap between the general window system interface
and the GIR protocol is bridged by a module which
implements the user interface. It can be designed either
as a protocol converter, or in the case of difficulties
when applying the formal methods described, as an
event-response module written in a general programming
language.
Figure 13: Using shim libraries to construct the general
window system interface.

Integration of information services that are not based
on a distributed client-server architecture (with a local
client and a remote server) with a well-defined NIR proto-
col, that have their own remote user interfaces accessible
by a remote login, such as library catalogs and databases,
can be achieved by using a new front end to the old user
interface. This can be a protocol converter that converts
the GIR protocol to sequences of characters that
would be typed as input by a user when communicating
directly with the old user interface, and that performs
a similar conversion in the other direction. Quotation
marks around the "server" for this type of information
service in Fig. 14 indicate that such an information service
may or may not be based on the client-server model.
Although this front end seems to be subtle and vulnerable
to changes of the old user interface, this approach
has already been successfully used (but the front end was
not implemented as a protocol converter). Examples are
SALBIN [8] and IDLE [19].
5.6 OSI Model Correspondence
Fig. 15 illustrates how protocol layering in the proposed
system corresponds to the OSI model. Internet information
services are based on the TCP/IP protocol family, which
provides no explicit support for functions of the session
and presentation OSI layers. NIR protocols (FTP, the
Gopher protocol, etc.) are, with respect to the OSI model,
application layer protocols.

Figure 14: A possible structure of a multi-service
information retrieval system.

Figure 15: OSI model correspondence.
Protocol converters in our system use application layer
protocols (NIR protocols) on one side and the GIR protocol
on the other side. They act as a client side implementation
of the application layer and from the client's point
of view they create another higher layer based on the
GIR protocol. The server side of this higher layer is not
actually implemented, it exists as a client side abstraction
only. Being an application layer implementation, the
converters communicate with the transport layer, possibly
over a transport interface module which converts an
actual transport layer interface to a form suitable for the
converter's input and output. On the upper side of the
converters, there is the user interface which presents information
services to the user.
Figure 16: Gopher protocol client (left) and server
(right).
6 Example
As an example of applying the approach described in
Chapter 5, we will try to construct a protocol converter
implementing the network communication part of the Gopher
protocol client. We will try to use all three protocol
converter design methods mentioned in Chapter 4 in order
to decide which one is the most suitable and whether
our approach is feasible at all.
The Gopher protocol [2, 3] can be described by two
finite-state machines, one for the client side (G 0 ), and the
other for the server side (G 1 ), communicating via message
passing. Corresponding state-transition diagrams
are shown in Fig. 16.
The notation used corresponds to that described in
Chapter 3. Opening a connection and closing a connection
between the client and the server are modeled by
the exchange of virtual messages g connect , g connected ,
close, and g closed . The loss of a message sent by
the client is represented by sending a virtual g ls message
instead of a "normal" message to the server which
then sends back a virtual g tm (timeout) message as a
response. The loss of a message sent by the server is represented
by sending a virtual g tm message only. If more
than one message may be sent in a certain state, one is
chosen non-deterministically (of course, with the exception
of g response and g error messages, one of them is
sent by the Gopher protocol server based on the result of
a query processed).
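At the wire level, the messages these machines exchange are extremely simple. A hedged sketch (example selector and menu data are invented; framing follows RFC 1436): building the client's single request and parsing one line of a menu response:

```python
def gopher_request(selector=""):
    """The single message a Gopher client sends after connecting:
    the selector string terminated by CR LF (RFC 1436)."""
    return selector.encode("ascii") + b"\r\n"

def parse_menu_line(line):
    """Split 'Xdisplay<TAB>selector<TAB>host<TAB>port' into fields,
    where X is the one-character item type."""
    item_type, rest = line[0], line[1:]
    display, selector, host, port = rest.split("\t")[:4]
    return {"type": item_type, "display": display,
            "selector": selector, "host": host, "port": int(port)}

print(gopher_request("/about"))
# -> b'/about\r\n'
print(parse_menu_line("1About us\t/about\tgopher.example.org\t70"))
```

The g query and g response messages of the state machines above thus correspond to one selector line and one block of menu or document data, respectively.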
Our task is to construct a protocol converter which
allows U 0 (the GIR protocol client) and G 1 (the Gopher
protocol server) to communicate.

Figure 17: Images of client finite-state machines.

Figure 18: Images of server finite-state machines.
6.1 Conversion via Projection
When we try to project the finite-state machines U 0 and
onto the common image, some of the best results we
can get in terms of achieved similarity and retained functionality
are the I U0 and I G0 images shown in Fig. 17.
They are close to each other, but they are not exactly
the same. To obtain one common image, the functionality
would have to be further reduced, which is not acceptable.
On the server side, there is a similar problem. We can get
images I U1 and I G1 of U 1 and G 1 shown in Fig. 18. These
are not exactly the same and, moreover, the common
functionality is not sufficient. We would not be able to
translate the g connected and g closed messages sent by
the Gopher protocol server which the converter needs to
receive.
The conclusion is that conversion via projection is only
suitable for protocols that are close enough, which is not
our case.
Figure 19: Conversion seed S for the U 0 - G 1 converter.
Figure 20: Converter produced from the conversion seed S.
6.2 Conversion Seed Approach
For the conversion seed approach, G 0 has to be modified
so that it contains only transitions that correspond to
interaction with the peer entity G 1 (interaction with the
user is not included). That is, the transitions from state 4
lead directly to state 7 and the transition from state 8
leads to state 1, which is the starting state.
A simple conversion seed S is shown in Fig. 19. It
defines constraints on the order in which messages may
be received by the converter. Ordering relations between
messages being sent and messages being received will be
implemented in the converter by the algorithm which constructs
the converter as a reduced finite-state machine of
U 1 when communicating with U 0 and a reduced finite-state
machine of G 0 when communicating with G 1 [5].
The output of the algorithm for U 1 , G 0 , and S is shown
in Fig. 20. In state 8, the converter has to decide whether
to send the g response or the g error message to U 0 based
on the receiving transition that was used to move from
state 5 to state 6. This requires some internal memory
and associated decision mechanism in the converter.
We can conclude that the conversion seed approach is
applicable to our example, but we have to construct a
conversion seed heuristically using our knowledge of the
converter operation.
Figure 21: Gopher protocol server for the quotient approach.
6.3 The Quotient Approach
The algorithm based on the solution of the quotient problem
described in [4, 5] uses a rendezvous model (as opposed
to the message-passing model used in the previous
two approaches), in which interaction between two
components occurs synchronously via actions. An action
can take place when both parties are ready for it. State
changes happen simultaneously in both components.
Transmission channels between the converter and other
communicating entities are modeled explicitly as finite-state
machines with internal transitions, which may or
may not happen, representing loss of messages. After
such a loss, a timeout event occurs at the sender end of
the channel.
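The explicit channel modeling described above can be sketched as a small state machine in which a message in transit may be lost by an internal transition, after which the sender observes a timeout. The class interface and loss rate below are illustrative assumptions:

```python
import random

# A lossy one-place channel modeled as a state machine: a message in transit
# may be lost (an internal transition), after which the sender end sees a
# timeout. The interface and loss rate are illustrative assumptions.
class LossyChannel:
    def __init__(self, loss_rate=0.3, seed=1):
        self.rng = random.Random(seed)
        self.loss_rate = loss_rate
        self.in_transit = None

    def send(self, msg):
        assert self.in_transit is None, "one-place channel is full"
        self.in_transit = msg

    def deliver(self):
        """Return the message, or 'timeout' if it was lost in transit."""
        msg, self.in_transit = self.in_transit, None
        if self.rng.random() < self.loss_rate:
            return "timeout"            # loss -> timeout at the sender end
        return msg

ch = LossyChannel()
ch.send("g_query")
print(ch.deliver())  # prints either "g_query" or "timeout"
```

In the rendezvous model, such a channel process is composed synchronously with the converter and the server instead of being hidden inside virtual messages.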
Because of different modeling of message losses in the
quotient approach, the state-transition diagram for the
Gopher protocol server has to be slightly modified, as
shown in Fig. 21. Virtual g ls and g tm messages are
removed and new receive transitions are added to cope
with duplicate messages sent by the converter.
In our case, the converter is collocated with U 0 , meaning
there are no losses in U 0 - C communication. We
only have to model the C - G 1 channel (shown in
Fig. 23), thereby obtaining the configuration shown in
Fig. 22. The composition of U 0 , CG 1 chan, and G 1 forms
the B part in the quotient problem (Fig. 6). The service
specification A is shown in Fig. 24.
After applying the algorithm on these inputs we get a
converter which has 194 states and 590 transitions, too
many to be presented here.
Figure 22: Quotient approach configuration.
Figure 23: Model of the C - G 1 channel.
Figure 24: Service specification.
Some of the states and their
associated transitions represent alternative sequences in
which messages may be sent by the converter. For example,
the g close message (a request for the Gopher protocol
server to close the connection) may be sent by the converter
before sending the u response message (the response for
the GIR protocol client) or after it. These alternatives
are redundant with respect to the function of the converter.
Unfortunately, it seems to be difficult to remove
them in an automatic manner.
A more important problem is that some other states
and transitions represent sequences which are not acceptable
because the converter does not have enough knowledge to
form the content of a message to be sent. For example,
the g connect message (a request for the Gopher protocol
server to connect to it) can be sent only when the converter
knows where to try to connect, that is, after the u inf.
service and source message has been received, not before
it. Transitions leading to such unacceptable sequences
have to be eliminated by defining semantic relations between
messages and enforcing them in run-time. In our
example, two semantic relations are sufficient:
+ u inf. service and source => - g connect
+ u query => - g query
The symbol => means that a message on the right side
can be sent or received after a message on the left side
has been sent or received, as indicated by the plus and
minus signs.
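Run-time enforcement of such relations amounts to remembering which messages have already occurred and refusing a transition whose prerequisite has not occurred yet. A minimal sketch, with message names abbreviated and the class interface invented for illustration; the relation set mirrors the two relations given in the text:

```python
# Enforce "left-hand message must occur before right-hand message" relations
# at run time. Message names are abbreviated; the interface is illustrative.
RELATIONS = {
    "g_connect": "u_inf_service_and_source",  # allowed only after this message
    "g_query":   "u_query",
}

class RelationGuard:
    def __init__(self, relations):
        self.relations = relations
        self.seen = set()

    def occur(self, msg):
        """Record msg; raise if its prerequisite has not occurred yet."""
        prereq = self.relations.get(msg)
        if prereq is not None and prereq not in self.seen:
            raise RuntimeError(f"{msg} attempted before {prereq}")
        self.seen.add(msg)

guard = RelationGuard(RELATIONS)
guard.occur("u_inf_service_and_source")
guard.occur("g_connect")                      # allowed: prerequisite seen
try:
    RelationGuard(RELATIONS).occur("g_query")  # u_query not seen yet
except RuntimeError as e:
    print("blocked:", e)
```

Transitions of the quotient converter that would violate a relation are thereby suppressed at run time rather than removed from the state graph.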
Another problem with the quotient seeking algorithm
is its computing complexity. Let S_X be the set of states
of an entity X and let |S_X| be the number of states in
this set. Then the state set of the quotient can grow
exponentially, with an upper bound of 2^(|S_A|*|S_B|) states.
In our configuration, the maximum number of states of
B is equal to the product of the numbers of states of U 0 ,
CG 1 chan, and G 1 .
In practice, however, the algorithm does not always use
exponential time and space. In our example, the maximum
number of states of C during computation was 330.
We can conclude that the quotient approach, when supplemented
with the definition of semantic relations between
messages, is applicable to our example. It is more
compute-intensive than the conversion seed approach but
it is more systematic, and no heuristic constructions are
required.
In our example, we have shown that even the quotient
approach, which is the most systematic of the currently
known formal methods of protocol converter design, is
not sufficient for our approach on its own. It has to be
complemented with the definition of semantic relations
between messages.
In addition to this, there are several problems with
using protocol converters in the proposed way which we
have not yet considered: GIR protocol universality, message
contents transformation, and covering details of network
protocols.
GIR protocol universality The GIR protocol suggested
in Fig. 11 is too simple. It contains only two
methods: set source and get object. A GIR protocol for
use in real designs has to incorporate at least the following
methods: get object metainformation, modify object,
create new object, remove object, and search object.
Message content transformation When a converter
should send message X in response to receiving message
Y, it may have to build the content of message X based
on the content of message Y. We call this process the
message content transformation. In our example we need
to provide for the transformation of the content of the
following messages:
u inf. service and source => g connect
u query => g query
g response => u response
g error => u error
There are several ways to formalize message content
transformation. We briefly outline four possible approaches:
translation grammars We can conceive the set of all
possible contents of messages on each side of the converter
as a language, and individual message contents
as sentences of this language. We can then
formally describe the languages with two grammars
(one for each side of the converter) and the transformation
of sentences with two translation grammars
(one for each direction).
sequential rewriting system We can define a set of
rewriting rules that specify how to get the content
of an outgoing message beginning with the content
of the corresponding incoming message. These rules
would be applied sequentially on the message content
using string matching in a similar manner as rules
for mail header and envelope processing work in the
sendmail program [21].
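As an illustration of the sequential rewriting idea, the sketch below applies an ordered list of regular-expression rules to a message content, sendmail-style. The concrete rules and message formats are invented for the example:

```python
import re

# Ordered rewriting rules applied sequentially to a message content, in the
# spirit of sendmail's rewriting rule sets. The rules themselves are invented
# for illustration only.
RULES = [
    (r"^GET\s+", ""),              # strip a method keyword
    (r"\s+HTTP/[\d.]+$", ""),      # strip a protocol-version suffix
    (r"/", "\t"),                  # path separator -> selector tab
]

def rewrite(content, rules=RULES):
    """Apply each rule in order to the evolving message content."""
    for pattern, replacement in rules:
        content = re.sub(pattern, replacement, content)
    return content

print(repr(rewrite("GET menu/texts HTTP/1.0")))  # -> 'menu\ttexts'
```

Because the rules are applied in sequence, the output of one rule is the input of the next, exactly as in sendmail's rule-set processing.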
SGML link process The set of all possible contents of
messages can be modeled using two SGML-based
markup languages, one for each side of the converter.
The transformation between them can then be performed
as a pair of SGML link processes [10] (one for
each direction) or as a pair of SGML tree transformation
processes (STTP) defined within DSSSL [7]
specifications for both languages.
predicate-based rewriting system We can define a
set of facts and rules in Prolog or in a similar logical
language that specify relations between pieces of
information in an incoming message and the corresponding
outgoing message. We then begin with a
framework of the outgoing message in the form of
a term composed of unbound variables. When we
apply the defined set of predicates using a rewriting
system to this term, its variables become step by
step bound to the values from the incoming message
content.
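In a Prolog-free setting the same idea can be approximated by a framework of unbound fields that binding rules fill step by step from the incoming message content. The field names and rule set below are invented for illustration:

```python
# Outgoing-message framework as a term with unbound (None) variables; binding
# rules fill the variables from the incoming message. Names are illustrative.
def bind(incoming):
    outgoing = {"host": None, "port": None, "selector": None}  # unbound term
    rules = [
        ("host",     lambda m: m["service"].split(":")[0]),
        ("port",     lambda m: int(m["service"].split(":")[1])),
        ("selector", lambda m: m["source"]),
    ]
    for field, rule in rules:        # apply rules until all variables bound
        outgoing[field] = rule(incoming)
    return outgoing

msg = {"service": "gopher.example.org:70", "source": "1/texts"}
print(bind(msg))
# -> {'host': 'gopher.example.org', 'port': 70, 'selector': '1/texts'}
```

A logic-programming implementation would let the rules fire in any order driven by unification, but the step-by-step binding shown here captures the essential data flow.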
Further research will be required to find whether and
under what conditions the proposed techniques could be
used for message content transformation in our approach.
Covering details of network protocols Another
problem concerns covering details of network communication
protocols such as port numbers, parallel connec-
tions, and various options and parameters. It seems to
be practicable to incorporate them into the protocol on
the bottom side of the protocol converter (see Fig. 15) as
variables in the content of exchanged messages or even as
additional virtual messages recognized by the transport
interface used.
8 Conclusion
We have shown that formal methods of protocol converter
design could under certain circumstances be used to construct
the part of an NIR system client that deals with
network communication. However, these methods are not
sufficient on their own and have to be supplemented with
other techniques for our approach to become practicable.
In the discussion we have mentioned the most important
problems and proposed possible solutions or directions
for further research work.
If these problems are resolved it will be possible to
design many new specialized NIR services with their own
custom-tailored protocols since implementation of these
protocols can be done easily and reliably. New protocols
could also be supported in a different way. Some new
NIR services (e.g., HotJava [11]) intend to support new
protocols by retrieving the code to implement them by
a client from a server. Instead of retrieving the code,
only the formal specification could be retrieved and the
protocol would be implemented by the converter from the
specification.
--R
"A note on reliable full-duplex transmission over half-duplex lines,"
The Internet Gopher protocol
"Deriving a Protocol Converter: a Top-Down Method,"
"Formal methods for protocol conversion,"
"The OSI reference model,"
Information technology - Text and office systems - Document Style Semantics and Specification Language (DSSSL)
"Some network tools for Internet gateway service,"
"Intelligent information retrieval: An introduction,"
The SGML Handbook
The HotJava Browser: A White Paper
"Modeling and optimization of hierarchical synchronous cir- cuits,"
"Protocol conversion,"
"Natural Language Processing for Information Retrieval,"
"A virtual protocol model for computer-human interaction,"
"A formal protocol conversion method,"
"Lessons learned from SUIT, the Simple User Interface Toolkit,"
"IDLE: Unified W3-access to interactive information servers,"
Designing the User Interface: Strategies for Effective Human-Computer Interaction
"Layered protocols for computer-human dialogue. I: Principles,"
"Layered protocols for computer-human dialogue. II: Some practical issues,"
"Hypermedia and Cognition: Designing for Com- prehension,"
--TR | network information services;network information retrieval;protocol conversion |
263927 | Scale-sensitive dimensions, uniform convergence, and learnability. | Learnability in Valiant's PAC learning model has been shown to be strongly related to the existence of uniform laws of large numbers. These laws define a distribution-free convergence property of means to expectations uniformly over classes of random variables. Classes of real-valued functions enjoying such a property are also known as uniform Glivenko-Cantelli classes. In this paper, we prove, through a generalization of Sauer's lemma that may be interesting in its own right, a new characterization of uniform Glivenko-Cantelli classes. Our characterization yields Dudley, Gine, and Zinn's previous characterization as a corollary. Furthermore, it is the first based on a simple combinatorial quantity generalizing the Vapnik-Chervonenkis dimension. We apply this result to obtain the weakest combinatorial condition known to imply PAC learnability in the statistical regression (or agnostic) framework. Furthermore, we find a characterization of learnability in the probabilistic concept model, solving an open problem posed by Kearns and Schapire. These results show that the accuracy parameter plays a crucial role in determining the effective complexity of the learner's hypothesis class. | Introduction
In typical learning problems, the learner is presented with a finite sample of data generated by an
unknown source and has to find, within a given class, the model yielding best predictions on future
data generated by the same source. In a realistic scenario, the information provided by the sample
is incomplete, and therefore the learner might settle for approximating the actual best model in
the class within some given accuracy. If the data source is probabilistic and the hypothesis class
consists of functions, a sample size sufficient for a given accuracy has been shown to be dependent on
different combinatorial notions of "dimension", each measuring, in a certain sense, the complexity
of the learner's hypothesis class.
Whenever the learner is allowed a low degree of accuracy, the complexity of the hypothesis class
might be measured on a coarse "scale" since, in this case, we do not need the full power of the entire
set of models. This position can be related to Rissanen's MDL principle [17], Vapnik's structural
minimization method [22], and Guyon et al.'s notion of effective dimension [11]. Intuitively, the
"dimension" of a class of functions decreases as the coarseness of the scale at which it is measured
increases. Thus, by measuring the complexity at the right "scale" (i.e., proportional to the accuracy)
the sample size sufficient for finding the best model within the given accuracy might dramatically
shrink.
As an example of this philosophy, consider the following scenario. 1 Suppose a meteorologist
is requested to compute a daily prediction of the next day's temperature. His forecast is based
on a set of presumably relevant data, such as the temperature, barometric pressure, and relative
humidity over the past few days. On some special events, such as the day before launching a Space
Shuttle, his prediction should have a high degree of accuracy, and therefore he analyzes a larger
amount of data to finely tune the parameters of his favorite mathematical meteorological model.
On regular days, a smaller precision is tolerated, and thus he can afford to tune the parameters of
the model on a coarser scale, saving data and computational resources.
In this paper we demonstrate quantitatively how the accuracy parameter plays a crucial role in
determining the effective complexity of the learner's hypothesis class. 2
We work within the decision-theoretic extension of the PAC framework, introduced in [12]
and also known as agnostic learning. In this model, a finite sample of pairs (x; y) is obtained
through independent draws from a fixed distribution P over X × [0, 1]. The goal of the learner
is to be able to estimate the conditional expectation of y given x. This quantity, viewed as a function of x,
is called the regression function in statistics. The learner is given a class H
of candidate regression functions, which may or may not include the true regression function f.
This class H is called ε-learnable if there is a learner with the property that for any distribution
P and corresponding regression function f, given a large enough random sample from P, this
learner can find an ε-close approximation 3 to f within the class H, or, if f is not in H, an ε-close
approximation to a function in H that best approximates f. (This analysis of learnability is purely
information-theoretic, and does not take into account computational complexity.) Throughout the
1 Adapted from [14].
2 Our philosophy can be compared to the approach studied in [13], where the range of the functions in the hypothesis
class is discretized in a number of elements proportional to the accuracy. In this case, one is interested in bounding
the complexity of the discretized class through the dimension of the original class. Part of our results builds on this
discretization technique.
3 All notions of approximation are with respect to mean square error.
paper, we assume that H (and later F) satisfies some mild measurability conditions. A suitable
such condition is the "image admissible Suslin" property (see [8, Section 10.3.1, page 101]).
The special case where the distribution P is taken over X × {0, 1} was studied in [14] by Kearns
and Schapire, who called this setting probabilistic concept learning. If we further demand that the
functions in H take only values in {0, 1}, it turns out that this reduces to one of the standard
PAC learning frameworks for learning deterministic concepts. In this case it is well known that the
learnability of H is completely characterized by the finiteness of a simple combinatorial quantity
known as the Vapnik-Chervonenkis (VC) dimension of H [24, 6]. An analogous combinatorial
quantity for the probabilistic concept case was introduced by Kearns and Schapire. We call this
quantity the P_γ-dimension of H, where γ > 0 is a parameter that measures the "scale" at which the
dimension of the class H is measured. They were only able to show that finiteness of this parameter
was necessary for probabilistic concept learning, leaving the converse open. We solve this problem,
showing that this condition is also sufficient for learning in the harder agnostic model.
This last result has been recently complemented by Bartlett, Long, and Williamson [4], who
have shown that the P_γ-dimension characterizes agnostic learnability with respect to the mean
absolute error. In [20], Simon has independently proven a partial characterization of (nonagnostic)
learnability using a slightly different notion of dimension.
As in the pioneering work of Vapnik and Chervonenkis [24], our analysis of learnability begins
by establishing appropriate uniform laws of large numbers. In our main theorem, we establish the
first combinatorial characterization of those classes of random variables whose means uniformly
converge to their expectations for all distributions. Such classes of random variables have been
called Glivenko-Cantelli classes in the empirical processes literature [9]. Given the usefulness of
related uniform convergence results in combinatorics and randomized algorithms, we feel that this
result may have many applications beyond those we give here. In addition, our results rely on
a combinatorial result that generalizes Sauer's Lemma [18, 19]. This new lemma considerably
extends some previously known results concerning {0, 1, *} tournament codes [21, 7]. As other
related variants of Sauer's Lemma have proven useful in different areas, such as geometry and
Banach space theory (see, e.g., [15, 1]), we hope to apply this result further as well.
The uniform, distribution-free convergence of empirical means to true expectations for classes of
real-valued functions has been studied by Dudley, Giné, Pollard, Talagrand, Vapnik, Zinn, and
others in the area of empirical processes. These results go under the general name of uniform laws
of large numbers. We give a new combinatorial characterization of this phenomenon using methods
related to those pioneered by Vapnik and Chervonenkis.
Let F be a class of functions from a set X into [0, 1]. (All the results presented in this section
can be generalized to classes of functions taking values in any bounded real range.) Let P denote
a probability distribution over X such that f is P-measurable for all f ∈ F. By P(f) we denote
the P-mean of f, i.e., its integral w.r.t. P. By P_n(f) we denote the random variable
(1/n) Σ_{i=1}^{n} f(x_i), where the points x_1, …, x_n are drawn independently at random according to P.
Following Dudley, Giné and Zinn [9], we say that F is an ε-uniform Glivenko-Cantelli class if
lim_{n→∞} sup_P Pr { sup_{m≥n} sup_{f∈F} |P_m(f) − P(f)| > ε } = 0.   (1)
Here Pr denotes the probability with respect to the points x_1, x_2, … drawn independently at
random according to P. 4 The supremum is understood with respect to all distributions P over X
(with respect to a suitable σ-algebra of subsets of X; see [9]).
We say that F satisfies a distribution-free uniform strong law of large numbers, or more briefly,
that F is a uniform Glivenko-Cantelli class, if F is an ε-uniform Glivenko-Cantelli class for all ε > 0.
We now recall the notion of VC-dimension, which characterizes uniform Glivenko-Cantelli classes
of {0, 1}-valued functions.
Let F be a class of {0, 1}-valued functions on some domain set X. We say F VC-shatters
a set A ⊆ X if, for every E ⊆ A, there exists some f_E ∈ F satisfying: for every x ∈ A \ E,
f_E(x) = 0 and, for every x ∈ E, f_E(x) = 1. Let the VC-dimension of F, denoted VC-dim(F),
be the maximal cardinality of a set A ⊆ X that is VC-shattered by F. (If F VC-shatters sets of
unbounded finite sizes, then let VC-dim(F) = ∞.)
The following was established by Vapnik and Chervonenkis [24] for the "if " part and (in a
stronger version) by Assouad and Dudley [2] (see [9, proposition 11, page 504].)
Theorem 2.1 Let F be a class of functions from X into {0, 1}. Then F is a uniform Glivenko-
Cantelli class if and only if VC-dim(F) is finite.
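For small finite classes, VC-shattering can be decided by exhaustive search. The sketch below is illustrative and not from the paper; functions are represented as dicts mapping domain points to {0, 1}:

```python
from itertools import combinations

def vc_dim(F, X):
    """Brute-force VC-dimension of a finite class F of {0,1}-valued
    functions (given as dicts) on a finite domain X."""
    def shattered(A):
        # A is VC-shattered iff F realizes all 2^|A| binary patterns on A.
        patterns = {tuple(f[x] for x in A) for f in F}
        return len(patterns) == 2 ** len(A)
    d = 0
    for k in range(1, len(X) + 1):
        if any(shattered(A) for A in combinations(X, k)):
            d = k
    return d

# Threshold functions on three points: f_t(x) = 1 iff x >= t.
F = [{x: int(x >= t) for x in (0, 1, 2)} for t in (0, 1, 2, 3)]
print(vc_dim(F, (0, 1, 2)))  # -> 1
```

Threshold functions can realize the patterns 0 and 1 on any single point but never the pattern (1, 0) on an ordered pair, so the brute-force search returns 1.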
Several generalizations of the VC-dimension to classes of real-valued functions have been previously
proposed. Let F be a class of [0, 1]-valued functions on some domain set X.
• (Pollard [16], see also [12]): We say F P-shatters a set A ⊆ X if there exists a function
s : A → R such that, for every E ⊆ A, there exists some f_E ∈ F satisfying: for every
x ∈ A \ E, f_E(x) < s(x) and, for every x ∈ E, f_E(x) ≥ s(x).
Let the P-dimension (denoted by P-dim) be the maximal cardinality of a set A ⊆ X that is
P-shattered by F. (If F P-shatters sets of unbounded finite sizes, then let P-dim(F) = ∞.)
• We say F V-shatters a set A ⊆ X if there exists a constant α ∈ R such that, for every
E ⊆ A, there exists some f_E ∈ F satisfying: for every x ∈ A \ E, f_E(x) < α and,
for every x ∈ E, f_E(x) ≥ α.
Let the V-dimension (denoted by V-dim) be the maximal cardinality of a set A ⊆ X that is
V-shattered by F. (If F V-shatters sets of unbounded finite sizes, then let V-dim(F) = ∞.)
Kearns and Schapire [14] introduced the following parametrized variant of the P-dimension. Let
F be a class of [0, 1]-valued functions on some domain set X and let γ be a positive real number.
We say F P_γ-shatters a set A ⊆ X if there exists a function s : A → [0, 1] such that, for every
E ⊆ A, there exists some f_E ∈ F satisfying: for every x ∈ A \ E, f_E(x) ≤ s(x) − γ and, for every
x ∈ E, f_E(x) ≥ s(x) + γ.
Let the P_γ-dimension of F, denoted P_γ-dim(F), be the maximal cardinality of a set A ⊆ X that
is P_γ-shattered by F. (If F P_γ-shatters sets of unbounded finite sizes, then let P_γ-dim(F) = ∞.)
4 Actually, Dudley et al. use outer measure here, to avoid some measurability problems in certain cases.
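This definition, too, can be tested by brute force on small finite classes. In the sketch below (illustrative, not from the paper) the witness function s is searched over a finite grid of values, an added assumption made to keep the search finite:

```python
from itertools import product

def pgamma_shatters(F, A, gamma, levels=21):
    """Brute-force P_gamma-shattering test for a tuple of points A.
    The witness s is searched on a finite grid (an assumption made
    to keep the search finite)."""
    grid = [i / (levels - 1) for i in range(levels)]
    n = len(A)
    for s in product(grid, repeat=n):
        if all(
            any(all(f(A[i]) >= s[i] + gamma if pattern[i]
                    else f(A[i]) <= s[i] - gamma
                    for i in range(n))
                for f in F)
            for pattern in product([0, 1], repeat=n)   # pattern encodes E
        ):
            return True
    return False

# Constant functions {x -> c : c in {0, 0.1, ..., 1}} P_gamma-shatter any
# single point (witness s = 1/2) but no pair of points.
F = [lambda x, c=c: c for c in [i / 10 for i in range(11)]]
print(pgamma_shatters(F, (0,), 0.25), pgamma_shatters(F, (0, 1), 0.25))
# -> True False
```

The pair fails because shattering both patterns (1, 0) and (0, 1) would force s(x_1) − s(x_0) ≥ 2γ and s(x_0) − s(x_1) ≥ 2γ simultaneously, so the P_γ-dimension of the constant functions is 1 for every γ ≤ 1/2.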
A parametrized version of the V-dimension, which we'll call the V_γ-dimension, can be defined in
the same way we defined the P_γ-dimension from the P-dimension. The first lemma below follows
directly from the definitions. The second lemma is proven through the pigeonhole principle.
Lemma 2.1 For any F and any γ > 0, P_γ-dim(F) ≥ V_γ-dim(F).
Lemma 2.2 For any class F of [0, 1]-valued functions and for all γ > 0,
The P_γ and the V_γ dimensions have the advantage of being sensitive to the scale at which differences
in function values are considered significant.
Our main result of this section is the following new characterization of uniform Glivenko-Cantelli
classes, which exploits the scale-sensitive quality of the P_γ and the V_γ dimensions.
Theorem 2.2 Let F be a class of functions from X into [0, 1].
1. There exist constants a, b > 0 (independent of F) such that for any γ > 0
(a) If P_γ-dim(F) is finite, then F is an (aγ)-uniform Glivenko-Cantelli class.
(b) If V_γ-dim(F) is finite, then F is a (bγ)-uniform Glivenko-Cantelli class.
(c) If P_γ-dim(F) is infinite, then F is not a (γ − δ)-uniform Glivenko-Cantelli class for any
δ > 0.
(d) If V_γ-dim(F) is infinite, then F is not a (2γ − δ)-uniform Glivenko-Cantelli class for
any δ > 0.
2. The following are equivalent:
(a) F is a uniform Glivenko-Cantelli class.
(b) P_γ-dim(F) is finite for all γ > 0.
(c) V_γ-dim(F) is finite for all γ > 0.
(In the proof we actually show that a ≤ 24 and b ≤ 48; however, these values are likely to be
improved through a more careful analysis.)
The proof of this theorem is deferred to the next section. Note however that part 1 trivially
implies part 2.
The following simple example (a special case of [9, Example 4, page 508], adapted to our purposes)
shows that the finiteness of neither P-dim nor V-dim yields a characterization of Glivenko-
Cantelli classes. (Throughout the paper we use ln to denote the natural logarithm and log to denote
the logarithm in base 2.)
Example 2.1 Let F be the class of all [0, 1]-valued functions f defined on the positive integers
and such that f(x) ≤ e^{−x} for all x ∈ N and all f ∈ F. Observe that, for all γ > 0, no point
x > ln(1/γ) can belong to a P_γ-shattered set, and hence P_γ-dim(F) is finite for all γ > 0.
Therefore, F is a uniform Glivenko-Cantelli class by Theorem 2.2. On the
other hand, it is not hard to show that the P-dimension and the V-dimension of F are both infinite.
Theorem 2.2 provides the first characterization of Glivenko-Cantelli classes in terms of a simple
combinatorial quantity generalizing the Vapnik-Chervonenkis dimension to real-valued functions.
Our results extend previous work by Dudley, Giné, and Zinn, where an equivalent characterization
is shown to depend on the asymptotic properties of the metric entropy. Before stating the metric-
entropy characterization of Glivenko-Cantelli classes we recall some basic notions from the theory
of metric spaces.
Let (X, d) be a (pseudo)metric space, let A be a subset of X, and let ε > 0.
• A set B ⊆ A is an ε-cover for A if, for every a ∈ A, there exists some b ∈ B such that
d(a, b) ≤ ε. The ε-covering number of A, N_d(ε, A), is the minimal cardinality of an ε-cover for
A (if there is no such finite cover then it is defined to be ∞).
• A set A ⊆ X is ε-separated if, for any distinct a, b ∈ A, d(a, b) > ε. The ε-packing number of
A, M_d(ε, A), is the maximal size of an ε-separated subset of A.
The following is a simple, well-known fact.
Lemma 2.3 For every (pseudo)metric space (X, d), every A ⊆ X, and every ε > 0,
M_d(2ε, A) ≤ N_d(ε, A) ≤ M_d(ε, A).
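For a small finite set these quantities can be computed exhaustively, which makes the well-known chain M_d(2ε, A) ≤ N_d(ε, A) ≤ M_d(ε, A) easy to check numerically. The toy metric space below is an illustrative assumption:

```python
from itertools import combinations

def packing(A, d, eps):
    """M_d(eps, A): size of a largest eps-separated subset (brute force)."""
    for k in range(len(A), 0, -1):
        for S in combinations(A, k):
            if all(d(a, b) > eps for a, b in combinations(S, 2)):
                return k
    return 0

def covering(A, d, eps):
    """N_d(eps, A): size of a smallest eps-cover B of A, with B a subset of A."""
    for k in range(1, len(A) + 1):
        for B in combinations(A, k):
            if all(any(d(a, b) <= eps for b in B) for a in A):
                return k
    return len(A)

A = [0.0, 0.3, 0.45, 0.9, 1.0]          # toy metric space on the line
d = lambda x, y: abs(x - y)
eps = 0.2
m2, n, m = packing(A, d, 2 * eps), covering(A, d, eps), packing(A, d, eps)
print(m2, n, m)  # -> 3 3 3
assert m2 <= n <= m                      # the chain of Lemma 2.3
```

Restricting covers to subsets of A matches the definition above; allowing centers outside A could only make the covering number smaller.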
For a sequence x_n = (x_1, …, x_n) of n points in the domain of a class F of real-valued functions, let
l^∞_{x_n}(f, g) denote the l^∞ distance between f, g ∈ F on the points x_n, that is,
l^∞_{x_n}(f, g) = max_{1≤i≤n} |f(x_i) − g(x_i)|.
As we will often use the l^∞_{x_n} distance, let us introduce the notation N(ε, F, x_n) and M(ε, F, x_n)
to stand for, respectively, the ε-covering and the ε-packing number of F with respect to l^∞_{x_n}.
A notion of metric entropy H_n, defined by
H_n(ε, F) = sup_{x_n ∈ X^n} log N(ε, F, x_n),
has been used by Dudley, Giné and Zinn to prove the following.
Theorem 2.3 ([9, Theorem 6, page 500]) Let F be a class of functions from X into [0, 1].
Then
1. F is a uniform Glivenko-Cantelli class if and only if H_n(ε, F)/n → 0 as n → ∞ for all ε > 0.
2. For all ε > 0, if H_n(ε, F)/n → 0 as n → ∞, then F is an (8ε)-uniform Glivenko-Cantelli class.
The results by Dudley et al. also give similar characterizations using l^p norms in place of the l^∞
norm.
Related results were proved earlier by Vapnik and Chervonenkis [24, 25]. In particular, they
proved an analogue of Theorem 2.3, where the convergence of means to expectations is characterized
for a single distribution P. Their characterization is based on H_n(ε, F) averaged with respect to
samples drawn from P.
3 Proof of the main theorem
We wish to obtain a characterization of uniform Glivenko-Cantelli classes in terms of their P_γ-
dimension. By using standard techniques, we just need to bound the γ-packing numbers of sets of
real-valued functions by an appropriate function of their P_{cγ}-dimension, for some positive constant
c. Our line of attack is to reduce the problem to an analogous problem in the realm of finite-
valued functions. Classes of functions into a discrete and finite range can then be analyzed using
combinatorial tools.
We shall first introduce the discrete counterparts of the definitions above. Our next step will
be to show how the real-valued problem can be reduced to a combinatorial problem. The final, and
most technical part of our proof, will be the analysis of the combinatorial problem through a new
generalization of Sauer's Lemma.
Let X be any set and let B = {0, 1, …, b}. We consider classes F of functions f from X to B.
Two such functions f and g are separated if they are 2-separated in the l^∞ metric, i.e., if there
exists some x ∈ X such that |f(x) − g(x)| ≥ 2. The class F is pairwise separated if f and g are
separated for all f ≠ g in F.
F strongly shatters a set A ⊆ X if A is nonempty and there exists a function s : A → B such
that, for every E ⊆ A, there exists some f_E ∈ F satisfying: for every x ∈ A \ E, f_E(x) ≤ s(x) − 1
and, for every x ∈ E, f_E(x) ≥ s(x) + 1. If s is any function witnessing the shattering of A by F, we
shall also say that F strongly shatters A according to s. Let the strong dimension of F, S-dim(F),
be the maximal cardinality of a set A ⊆ X that is strongly shattered by F. (If F strongly shatters
sets of unbounded finite size, then let S-dim(F) = ∞.)
For a function f and a real number ρ > 0, the ρ-discretization of f, denoted
by f_ρ, is the function f_ρ(x) def= ⌊f(x)/ρ⌋, i.e., f_ρ(x) = max{i : iρ ≤ f(x)}. For a class F of
nonnegative real-valued functions let F_ρ def= {f_ρ : f ∈ F}.
We need the following lemma.
Lemma 3.1 For any class F of [0, 1]-valued functions on a set X and for any ρ > 0,
1. S-dim(F_ρ) ≤ P_{ρ/2}-dim(F);
2. for every ε ≥ 2ρ and every x_n, M(ε, F, x_n) ≤ M(2, F_ρ, x_n).
Proof. To prove part 1 we show that any set strongly shattered by F_ρ is also P_{ρ/2}-shattered by
F. If A ⊆ X is strongly shattered by F_ρ, then there exists a function s such that for every E ⊆ A
there exists some f^(E) ∈ F satisfying: for every x ∈ A \ E, f^(E)_ρ(x) ≤ s(x) − 1 and, for every
x ∈ E, f^(E)_ρ(x) ≥ s(x) + 1.
Assume first x ∈ E. Then f^(E)_ρ(x) ≥ s(x) + 1 holds and, by definition of f^(E)_ρ,
we have f^(E)(x) ≥ ρ · f^(E)_ρ(x), which implies f^(E)(x) ≥ ρ · s(x) + ρ. Assume now x ∈ A \ E. Then,
by definition of f^(E)_ρ, we have f^(E)(x) < ρ · (f^(E)_ρ(x) + 1), which implies f^(E)(x) < ρ · s(x).
Thus A is P_{ρ/2}-shattered by F, as can be seen using the function s' defined by s'(x) = ρ · s(x) + ρ/2.
To prove part 2 of the lemma it is enough to observe that, by the definition of F_ρ, for all
f, g ∈ F and every x_n, l^∞_{x_n}(f, g) > 2ρ implies l^∞_{x_n}(f_ρ, g_ρ) ≥ 2. □
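The observation used in part 2 is easy to check numerically: whenever two [0, 1]-valued functions are more than ε apart in the l^∞ metric and ε ≥ 2ρ, their ρ-discretizations are 2-separated. A small randomized check with illustrative parameters:

```python
import math
import random

# Check: linf(f, g) > eps with eps >= 2*rho implies the rho-discretizations
# of f and g differ by at least 2 at some coordinate.
def discretize(values, rho):
    return [math.floor(v / rho) for v in values]

def linf(u, v):
    return max(abs(a - b) for a, b in zip(u, v))

rho, eps = 0.1, 0.25          # eps >= 2*rho, as required by part 2
rng = random.Random(0)
checked = 0
for _ in range(2000):
    f = [rng.random() for _ in range(5)]
    g = [rng.random() for _ in range(5)]
    if linf(f, g) > eps:      # an eps-separated pair
        assert linf(discretize(f, rho), discretize(g, rho)) >= 2
        checked += 1
print("separation preserved in", checked, "random pairs")
```

The underlying arithmetic is that |f(x) − g(x)| > 2ρ forces ⌊f(x)/ρ⌋ and ⌊g(x)/ρ⌋ to differ by more than 1, hence by at least 2, which is exactly why ε-separated real-valued functions stay pairwise separated after discretization.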
We now prove our main combinatorial result which gives a new generalization of Sauer's Lemma.
Our result extends some previous work concerning {0, 1, *} tournament codes, proven in a completely
different way (see [21, 7]).
The lemma concerns the l^∞ packing numbers of classes of functions into a finite range. It
shows that, if such a class has a finite strong dimension, then its 2-packing number is bounded
by a subexponential function of the cardinality of its domain. For simplicity, we arbitrarily fix a
sequence x n of n points in X and consider only the restriction of F to this domain, dropping the
subscript x n from our notation.
Lemma 3.2 If F is a class of functions from a finite domain X of cardinality n to a finite range
B of cardinality b, and S-dim(F) ≤ d, then M(2, F) < 2(nb²)^⌈log₂ y⌉, where y = Σ_{i=1}^{d} (n choose i) b^i.
Note that for fixed d the bound in Lemma 3.2 is n^O(log n) even if b is not a constant but a polynomial
in n.
Proof of Lemma 3.2. Fix b ≥ 3 (the case b < 3 is trivial). Let us say that a class F as
above strongly shatters a pair (A, s) (for a nonempty subset A of X and a function s : A → B)
if F strongly shatters A according to s. For all integers h ≥ 2 and n ≥ 1, let t(h, n) denote the
maximum number t such that every set F of h pairwise separated functions f from X to B
strongly shatters at least t pairs (A, s) where A ⊆ X, A ≠ ∅, and s : A → B. If no such F exists,
then t(h, n) is infinite.
Note that the number of possible pairs (A, s) for which the cardinality of A does not exceed
d is less than y = Σ_{i=1}^{d} (n choose i) b^i (as for A of size i > 0 there are strictly less than b^i possibilities to
choose s.) It follows that, if t(h, n) ≥ y for some h, then M(2, F) < h for all sets F of functions
from X to B such that S-dim(F) ≤ d. Therefore, to finish the proof, it suffices to show that
t(2(nb²)^⌈log₂ y⌉, n) ≥ y.
We claim that t(2, n) ≥ 1 and that t(2mnb², n) ≥ 2 t(2m, n − 1) for all m ≥ 1 and n ≥
2. The first part of the claim is readily verified. For the second part, first note that if no
set of 2mnb² pairwise separated functions from X to B exists, then t(2mnb², n) is infinite and
the claim holds. Assume then that there is a set F of 2mnb² pairwise separated functions from
X to B. Split it arbitrarily into mnb² pairs. For each pair (f, g) find a coordinate x ∈ X where
|f(x) − g(x)| ≥ 2. By the pigeonhole principle, the same coordinate x is picked for at least mb²
pairs. Again by the pigeonhole principle, there are at least mb²/(b choose 2) > 2m of these pairs (f, g) for
which the (unordered) set {f(x), g(x)} is the same. This means that there are two sub-classes of
F, call them F₁ and F₂, each of cardinality 2m, and there are x ∈ X and β₁, β₂ ∈ B with β₂ ≥ β₁ + 2, so that for each
f ∈ F₁, f(x) = β₁, and for each g ∈ F₂, g(x) = β₂. Obviously, the members of
F₁ are pairwise separated on X \ {x} and the same holds for the members of F₂. Hence, by the
definition of the function t, F₁ strongly shatters at least t(2m, n − 1) pairs (A, s) with A ⊆ X \ {x},
and the same holds for F₂. Clearly F strongly shatters all pairs strongly shattered by F₁ or F₂.
Moreover, if the same pair (A, s) is strongly shattered both by F₁ and by F₂, then F also strongly
shatters the pair (A ∪ {x}, s′), where s′ extends s by s′(x) = β₁ + 1. It follows that
t(2mnb², n) ≥ 2 t(2m, n − 1),
establishing the claim.
Now suppose n > r ≥ 1 and let h = 2(nb²)^r. By repeated application
of the above claim (using the fact that t is clearly monotone in its first argument),
it follows that t(h, n) ≥ 2^r. Now take r = ⌈log₂ y⌉, so that 2^r ≥ y.
If r ≥ n then 2(nb²)^r ≥ b^n; however, since the total number of functions from
X to B is b^n, there are no sets of pairwise separated functions of size larger than this, and hence
t(2(nb²)^r, n) is infinite, so t(2(nb²)^r, n) ≥ y in this case. On the other hand, when n > r the
result above yields t(2(nb²)^r, n) ≥ 2^r ≥ y. Thus in either case t(2(nb²)^⌈log₂ y⌉, n) ≥ y,
completing the proof. 2
Before proving Theorem 2.2, we need two more lemmas. The first one is a straightforward adaptation
of [22, Section A.6, p. 223].
Lemma 3.3 Let F be a class of functions from X into [0,1] and let P be a distribution over X.
Then, for all ε > 0 and all n ≥ 2/ε²,
Pr{ sup_{f∈F} |P_n(f) − P(f)| > ε } ≤ 12n · E[ N(ε/6, F, x′_{2n}) ] · e^{−ε²n/36},    (2)
where Pr denotes the probability w.r.t. the sample x_n drawn independently at random according
to P, and E the expectation w.r.t. a second sample x′_{2n} = (x′_1, ..., x′_{2n}) also drawn independently
at random according to P.
Proof. A well-known result (see e.g. [8, Lemma 11.1.5] or [10, Lemma 2.5]) shows that, for all
n ≥ 2/ε²,
Pr{ sup_{f∈F} |P_n(f) − P(f)| > ε } ≤ 2 Pr{ sup_{f∈F} |P_n(f) − P′_n(f)| > ε/2 },
where P′_n denotes the empirical estimate obtained from a second independent sample of size n.
We combine this with a result by Vapnik [22, pp. 225-228] showing that for all ε > 0,
Pr{ sup_{f∈F} |P_n(f) − P′_n(f)| > ε } ≤ 6n · E[ N(ε/3, F, x′_{2n}) ] · e^{−ε²n/9}.
This concludes the proof. 2
The next result applies Lemma 3.2 to bound the expected covering number of a class F in terms
of P_γ-dim(F).
Lemma 3.4 Let F be a class of functions from X into [0,1] and P a distribution over X. Choose
ε > 0 and let d = P_{ε/4}-dim(F). If d is finite then, for every fixed ε,
E[ N(ε, F, x_n) ] = n^O(d log n),    (3)
where the expectation E is taken w.r.t. a sample x_n drawn independently at random according
to P.
Proof. By Lemma 2.3, Lemmas 3.1 and 3.2 (applied with ρ = ε/2, so that S-dim(F_{ε/2}) ≤
P_{ε/4}-dim(F) and N(ε, F, x_n) ≤ M(ε, F, x_n) ≤ M(2, F_{ε/2}, x_n)), and Stirling's approximation. 2
We are now ready to prove our characterization of uniform Glivenko-Cantelli classes.
Proof of Theorem 2.2. We begin with part 1.d: If V_γ-dim(F) = ∞, we
show that F is not a (2γ − τ)-uniform Glivenko-Cantelli class for any τ > 0. To see this, assume
V_γ-dim(F) = ∞. For any sample size n and any d > n, find in X a set S of d points that are
V_γ-shattered by F. Then there exists α > 0 such that for every E ⊆ S there exists some f_E ∈ F
satisfying: for every x ∈ S \ E, f_E(x) ≤ α − γ and, for every x ∈ E, f_E(x) ≥ α + γ.
Let P be the uniform distribution on S. For any sample x_n = (x_1, ..., x_n) there is a function f ∈ F
such that f(x_i) ≥ α + γ for all i ≤ n and f(x) ≤ α − γ for all x ∈ S \ {x_1, ..., x_n}. Thus, for any
n, by choosing d large enough we can find some f ∈ F such that |P_n(f) − P(f)| ≥ 2γ − τ. This
proves part 1.d. Part 1.c follows from Lemma 2.2.
To prove part 1.a we use inequality (2) from Lemma 3.3. Then, to bound the expected covering
number we apply Lemma 3.4. This shows that
lim_{n→∞} sup_P Pr{ sup_{f∈F} |P_n(f) − P(f)| > aγ } = 0    (4)
for some a > 0 whenever P_γ-dim(F) is finite.
Equation (4) shows that P_n(f) → P(f) in probability for all f ∈ F and all distributions P.
Furthermore, as Lemma 3.3 and Lemma 3.4 imply that Σ_{n≥1} Pr{ sup_{f∈F} |P_n(f) − P(f)| > aγ } <
∞, one may apply the Borel-Cantelli lemma and strengthen (4) to almost sure convergence, i.e.
lim_{n→∞} sup_P Pr{ sup_{m≥n} sup_{f∈F} |P_m(f) − P(f)| > aγ } = 0.
This completes the proof of part 1.a. The proof of part 1.b follows immediately from Lemma 2.2. 2
The proof of Theorem 2.2, in addition to being simpler than the proof in [9] (see Theorem 2.3
in this paper), also provides new insights into the behaviour of the metric entropy used in that
characterization. It shows that there is a large gap in the growth rate of the metric entropy H_n(ε, F):
either F is a uniform Glivenko-Cantelli class, and hence, by (3) and by definition of H_n,
H_n(ε, F) = O(log² n) for every ε > 0; or F is not a uniform Glivenko-Cantelli class, and hence there
exists ε > 0 such that P_ε-dim(F) = ∞, which is easily seen to imply that H_n(ε, F) = Θ(n). It is
unknown if log² n can be replaced by log^α n where 1 ≤ α < 2.
From the proof of Theorem 2.2 we can obtain bounds on the sample size sufficient to guarantee
that, with high probability, in a class of [0; 1]-valued random variables each mean is close to its
expectation.
Theorem 3.1 Let F be a class of functions from X into [0,1]. Then for all distributions P over
X and all ε, δ > 0,
Pr{ sup_{f∈F} |P_n(f) − P(f)| > ε } ≤ δ    (5)
for n = O( (1/ε²) (d ln²(1/ε) + ln(1/δ)) ),
where d is the P_{ε/24}-dimension of F.
Theorem 3.1 is proven by applying Lemma 3.3 and Lemma 3.4 along with standard approximations.
We omit the proof of this theorem and mention instead that an improved sample size bound has
been shown by Bartlett and Long [3, Equation (5), Theorem 9]. In particular, they show that if
the P_{(1/4−τ)ε}-dimension d′ of F is finite for some τ > 0, then a sample size of order
O( (1/ε²) (d′ ln(1/ε) + ln(1/δ)) )    (6)
is sufficient for (5) to hold.
4 Applications to Learning
In this section we define the notion of learnability up to accuracy ε, or ε-learnability, of statistical
regression functions. In this model, originally introduced in [12] and also known as "agnostic
learning", the learning task is to approximate the regression function of an unknown distribution.
The probabilistic concept learning of Kearns and Schapire [14] and the real-valued function learning
with noise investigated by Bartlett, Long, and Williamson [4] are special cases of this framework.
We show that a class of functions is ε-learnable whenever its P_{aε}-dimension is finite for some
constant a > 0. Moreover, combining this result with those of Kearns and Schapire, who show
that a similar condition is necessary for the weaker probabilistic concept learning, we can conclude
that the finiteness of the P_γ-dimension for all γ > 0 characterizes learnability in the probabilistic
concept framework. This solves an open problem from [14].
Let us begin by briefly introducing our learning model. The model examines learning problems
involving statistical regression on [0,1]-valued data. Assume X is an arbitrary set (as above),
let Y = [0,1], and let P be an unknown distribution on Z = X × Y. Let X and Y be random
variables respectively distributed according to the marginal of P on X and Y. The regression
function f for distribution P is defined, for all x ∈ X, by f(x) = E(Y | X = x).
The general goal of regression is to approximate f in the mean square sense (i.e. in L₂-norm) when
the distribution P is unknown, but we are given a sample z_n = (z₁, ..., z_n) ∈ Z^n
independently generated from the distribution P.
In general we cannot hope to approximate the regression function f for an arbitrary distribution
P. Therefore we choose a hypothesis space H, which is a family of mappings h : X → [0,1], and
settle for a function in H that is close to the best approximation to f in the hypothesis space
H. To this end, for each hypothesis h ∈ H, let ℓ_h be the function defined by
ℓ_h(x, y) = (h(x) − y)², so that P(ℓ_h) is the mean square loss of h. The
goal of learning in the present context is to find a function ĥ ∈ H such that
P(ℓ_ĥ) ≤ inf_{h∈H} P(ℓ_h) + ε
for some given accuracy ε > 0. It is easily verified that if inf_{h∈H} P(ℓ_h) is achieved by some h ∈ H,
then h is the function in H closest to the true regression function f in the L₂ norm.
A learning procedure is a mapping A from finite sequences in Z to H. A learning procedure
produces a hypothesis ĥ = A(z_n) based on the training sample z_n. For given accuracy parameter ε, we
say that H is ε-learnable if there exists a learning procedure A such that
lim_{n→∞} sup_P Pr{ P(ℓ_{A(z_n)}) > inf_{h∈H} P(ℓ_h) + ε } = 0.    (7)
Here Pr denotes the probability with respect to the random sample z_n ∈ Z^n, each z_i drawn
independently according to P, and the supremum is over all distributions P defined on a suitable
σ-algebra of subsets of Z. Thus H is ε-learnable if, given a large enough training sample, we can
reliably find a hypothesis ĥ ∈ H with mean square error close to that of the best hypothesis in H.
Finally, we say H is learnable if and only if it is ε-learnable for all ε > 0.
When Y = {0, 1}, the above definitions of learnability yield the probabilistic concept learning
model. In this case, if (7) holds for some ε > 0 and some class H, we say that H is ε-learnable in
the p-concept model.
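To make the objective concrete, here is a toy Python sketch (our own illustration, not part of the formal model; the names and the tiny hypothesis class of constant predictors are ours) of a learning procedure that returns an empirical-risk minimizer — the strategy analyzed in Lemma 4.1 below:

```python
def empirical_risk(h, sample):
    # mean square loss of hypothesis h on sample [(x, y), ...]
    return sum((h(x) - y) ** 2 for x, y in sample) / len(sample)

def erm(hypotheses, sample):
    # learning procedure A: pick a hypothesis minimizing empirical risk
    return min(hypotheses, key=lambda h: empirical_risk(h, sample))

# Hypothesis class: constant predictors h_c(x) = c
hypotheses = [lambda x, c=c: c for c in (0.0, 0.25, 0.5, 0.75, 1.0)]
sample = [(0, 0.4), (1, 0.6), (2, 0.5)]
best = erm(hypotheses, sample)
print(best(0))  # prints 0.5, the constant with smallest mean square loss
```

With more data, the chosen hypothesis reliably approaches the best mean square loss achievable within the class, which is exactly what (7) demands.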
We now state and prove the main results of this section. We start by establishing sufficient
conditions for ε-learnability and learnability in terms of the P_γ-dimension.
Theorem 4.1 There exist constants a, b > 0 such that for any γ > 0:
1. If P_γ-dim(H) is finite, then H is (aγ)-learnable.
2. If V_γ-dim(H) is finite, then H is (bγ)-learnable.
3. If P_γ-dim(H) is finite for all γ > 0 or V_γ-dim(H) is finite for all γ > 0, then H is learnable.
We then prove the following, which characterizes p-concept learnability.
Theorem 4.2
1. If P_γ-dim(H) is infinite, then H is not (γ²/8 − τ)-learnable in the p-concept model for any
τ > 0.
2. If V_γ-dim(H) is infinite, then H is not (γ²/2 − τ)-learnable in the p-concept model for any
τ > 0.
3. The following are equivalent:
(a) H is learnable in the p-concept model.
(b) P_γ-dim(H) is finite for all γ > 0.
(c) V_γ-dim(H) is finite for all γ > 0.
(d) H is a uniform Glivenko-Cantelli class.
Proof of Theorem 4.1. It is clear that part 3 follows from part 1 using Theorem 2.2. Also,
by Lemma 2.2, part 1 is equivalent to part 2. Thus, to prove Theorem 4.1 it suffices to establish
part 1. We do so via the next two lemmas.
Define ℓ_H = {ℓ_h : h ∈ H}.
Lemma 4.1 If ℓ_H is an ε-uniform Glivenko-Cantelli class, then H is (3ε)-learnable.
Proof. The proof uses the method of empirical risk minimization, analyzed by Vapnik [22]. Let
P_n(ℓ_h) denote the empirical loss of h on the given sample z_n, that is,
P_n(ℓ_h) = (1/n) Σ_{i=1}^{n} (h(x_i) − y_i)².
A learning procedure A_ε ε-minimizes the empirical risk if A_ε(z_n) is any ĥ ∈ H such that
P_n(ℓ_ĥ) ≤ inf_{h∈H} P_n(ℓ_h) + ε. Let
us show that any such procedure is guaranteed to 3ε-learn H.
Fix any n ∈ N. If |P_n(ℓ_h) − P(ℓ_h)| ≤ ε
for all h ∈ H, then
P(ℓ_{A_ε(z_n)}) ≤ P_n(ℓ_{A_ε(z_n)}) + ε ≤ inf_{h∈H} P_n(ℓ_h) + 2ε ≤ inf_{h∈H} P(ℓ_h) + 3ε.
Hence, since we chose n and ε arbitrarily,
lim_{n→∞} sup_P Pr{ sup_{m≥n} sup_{h∈H} |P_m(ℓ_h) − P(ℓ_h)| > ε } = 0
implies
lim_{n→∞} sup_P Pr{ P(ℓ_{A_ε(z_n)}) > inf_{h∈H} P(ℓ_h) + 3ε } = 0. 2
The following lemma shows that bounds on the covering numbers of a family of functions H can be
applied to the induced family of loss functions ℓ_H. We formulate the lemma in terms of the square
loss but it may be readily generalized to other loss functions. A similar result was independently
proven by Bartlett, Long, and Williamson in [4] for the absolute loss L(x, y) = |x − y| (and with
respect to the l₁ metric rather than the l∞ metric used here).
Lemma 4.2 For all ε > 0, all H, and any z_n ∈ Z^n, N(ε, ℓ_H, z_n) ≤ N(ε/2, H, x_n).
Proof. It suffices to show that, for any f, g ∈ H and any 1 ≤ i ≤ n, if |f(x_i) − g(x_i)| ≤ ε/2
then |ℓ_f(z_i) − ℓ_g(z_i)| ≤ ε. This follows by noting that, for every s, t, w ∈ [0, 1],
|(s − w)² − (t − w)²| ≤ 2|s − t|. 2
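The elementary inequality invoked at this point can be expanded as follows (our derivation):

```latex
\left|(s-w)^2-(t-w)^2\right|
  = |s-t|\,\bigl|(s-w)+(t-w)\bigr|
  \le 2\,|s-t|,
\qquad s,t,w\in[0,1],
```

since each of |s − w| and |t − w| is at most 1.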
We end the proof of Theorem 4.1 by proving part 1. By Lemma 4.1, it suffices to show that ℓ_H
is (aγ)-uniform Glivenko-Cantelli for some a > 0. To do so we use (2) from Lemma 3.3. Then,
to bound the expected covering number, we apply first Lemma 4.2 and then Lemma 3.4. This
establishes
lim_{n→∞} sup_P Pr{ sup_{h∈H} |P_n(ℓ_h) − P(ℓ_h)| > aγ } = 0
for some a > 0 whenever P_γ-dim(H) is finite. An application of the Borel-Cantelli lemma to get
almost sure convergence yields the proof. 2
We conclude this section by proving our characterization of p-concept learnability.
Proof of Theorem 4.2. As ε-learnability implies ε-learnability in the p-concept model, we have
that part 3 follows from part 1, part 2, and from Theorem 4.1 using Theorem 2.2.
The proof of part 2 uses arguments similar to those used to prove part 1.d of Theorem 2.2.
Finally note that part 1 follows from part 2 by Lemma 2.2 (we remark that a more restricted
version of part 1 was proven in Theorem 11 of [14].) 2
5 Conclusions and open problems
In this work we have shown a characterization of uniform Glivenko-Cantelli classes based on a
combinatorial notion generalizing the Vapnik-Chervonenkis dimension. This result has been applied
to show that the same notion of dimension provides the weakest combinatorial condition known to
imply agnostic learnability and, furthermore, characterizes learnability in the model of probabilistic
concepts under the square loss. Our analysis demonstrates how the accuracy parameter in learning
plays a central role in determining the effective dimension of the learner's hypothesis class.
An open problem is what other notions of dimension may characterize uniform Glivenko-Cantelli
classes. In fact, for classes of functions with finite range, the same characterization is achieved by
each member of a family of several notions of dimension (see [5]).
A second open problem is the asymptotic behaviour of the metric entropy: we have already
shown that, for all ε > 0, H_n(ε, F) = O(log² n) whenever F is a uniform Glivenko-Cantelli class, and
that H_n(ε, F) = Θ(n) for some ε > 0 otherwise.
We conjecture that for all ε > 0, H_n(ε, F) = O(log n) whenever F
is a uniform Glivenko-Cantelli class. A positive solution of this conjecture would also affect the
sample complexity bound (6) of Bartlett and Long. In fact, suppose that Lemma 3.4 is improved by
showing that sup_{x_n} M(ε, F, x_n) ≤ (cn/ε)^{cd} for some positive constant c and for d = P_{τε}-dim(F)
(note that this implies our conjecture.) Then, combining this with [3, Lemmas 10-11], we can easily
show a sample complexity bound of
O( (1/ε²) (d ln(1/ε) + ln(1/δ)) )
for any 0 < τ < 1/8 for which d = P_{τε}-dim(F) is finite. It is not clear how to bring the
constant 1/8 down to 1/4 as in (6), which was proven using l₁ packing numbers.
Acknowledgments
We would like to thank Michael Kearns, Yoav Freund, Ron Aharoni and Ron Holzman for fruitful
discussions, and Alon Itai for useful comments concerning the presentation of the results.
Thanks also to an anonymous referee for the many valuable comments, suggestions, and references
References
Embedding of
Minimax nonparametric estimation over classes of sets.
More theorems about scale-sensitive dimensions and learning
Characterizations of learnability for classes of {0, ..., n}-valued functions.
Learnability and the Vapnik-Chervonenkis dimension
A lower bound for {0, 1, *} tournament codes.
A course on empirical processes.
Uniform and universal Glivenko-Cantelli classes
Some limit theorems for empirical processes.
Structural risk minimization for character recognition.
Decision theoretic generalizations of the PAC model for neural net and other learning applications.
A generalization of Sauer's lemma.
Efficient distribution-free learning of probabilistic concepts
Some remarks about embedding of
Empirical Processes
Modeling by shortest data description.
On the density of families of sets.
A combinatorial problem: Stability and order for models and theories in infinitary languages.
Bounds on the number of examples needed for learning functions.
Estimation of Dependences Based on Empirical Data.
Inductive principles of the search for empirical dependencies.
On the uniform convergence of relative frequencies of events to their probabilities.
Necessary and sufficient conditions for uniform convergence of means to mathematical expectations.
Keywords: uniform laws of large numbers; Vapnik-Chervonenkis dimension; PAC learning
Software Reuse by Specialization of Generic Procedures through Views

Abstract: A generic procedure can be specialized, by compilation through views, to operate directly on concrete data. A view is a computational mapping that describes how a concrete type implements an abstract type. Clusters of related views are needed for specialization of generic procedures that involve several types or several views of a single type. A user interface that reasons about relationships between concrete types and abstract types allows view clusters to be created easily. These techniques allow rapid specialization of generic procedures for applications.

1 Introduction
Reuse of software has the potential to reduce cost, increase the speed of software production,
and increase reliability. Facilitating the reuse of software could therefore be of great benefit.
G. S. Novak, Jr. is with the Department of Computer Sciences, University of Texas, Austin.
An Automatic Programming Server demonstration of this software is available on the World Wide Web via
http://www.cs.utexas.edu/users/novak; running the demo requires X windows.
Rigid treatment of type matching presents a barrier to reuse. In most languages, the
types of arguments of a procedure call must match the types of parameters of the procedure.
For this reason, reuse is often found where type compatibility occurs naturally, i.e. where
the types are basic or are made compatible by the language (e.g. arrays of numbers).
A truly generic procedure should be reusable for any reasonable implementation of its
abstract types; a developer of a generic should be able to advertise "my program will work
with your data" without knowing what the user's data representation will be. We seek reuse
without conformity to rigid standards. We envision two classes of users: developers, who
understand the details of abstract types and generic procedures, and users, programmers
who reuse the generics but need less expertise and understanding of details. Developers
produce libraries of abstract types and generics; by specializing the generics, users obtain
program modules for applications.
A view provides a mapping between a concrete type and an abstract type 1 in terms of
which a generic algorithm is written. Fig. 1 illustrates schematically that a view acts as
an interface adapter that makes the concrete type appear as the abstract type 2 . The view
provides a clean separation between the semantics of data (as represented by the abstract
type) and its implementation, so that the implementation is minimally constrained. Once
a view has been made, any generic procedure associated with the abstract type can be
automatically specialized for the concrete type, as shown in Fig. 2. In our implementation,
the specialized procedure is a Lisp function; if desired, it can be mechanically translated into
another language. Tools exist that make it easy to create views; a programmer can obtain a
specialized procedure to insert records into an AVL tree (185 lines of C) in a few minutes.
Figure 1: Interfacing with Strong Typing and with Views
1 We consider an abstract type to be a set of basis variables and a set of generic procedures that are
written in terms of the basis variables.
Goguen [18] and others have used a similar analogy and diagram.
Figure 2: Specialization of Generic Procedure through View
This approach to reuse has several advantages:
1. It provides freedom to select the implementation of data; data need not be designed
ab initio to match the algorithms.
2. Several views of a data structure can correspond to different aspects of the data.
3. Several languages are supported. Lisp, C, C++, Java, or Pascal can be generated from
a single version of generic algorithms.
4. Tools simplify the specification of views and reduce the learning required to reuse
software.
5. Views can be used to automatically:
(a) specialize generic procedures from a library,
(b) instantiate a program framework from components,
(c) translate data from one representation to another,
(d) generate methods for object-oriented programming, and
(e) interface to programming tools for data display and editing.
This paper describes principles of views and specialization of generic algorithms, as well
as an implementation of these techniques using the GLISP language and compiler; GLISP
is a Lisp-based language with abstract data types.
Section 2 describes in conceptual terms how views provide mappings between concrete
types and abstract types. Section 3 describes the GLISP compiler and how views are used in
specializing generic algorithms. Section 4 discusses clusters of related views that are needed
to reuse generic algorithms that involve several types or several views of a type. Section 5
describes the program VIEWAS, which reasons about relations between concrete types and
abstract types and makes it easy to create view clusters. Section 6 describes higher-order
code and a generic algorithm for finding a convex hull that uses other generics and uses
several views of a single data structure. Section 7 describes use of views with object-oriented
programming. Section 8 surveys related work, and Section 9 presents conclusions.
2.1 Computation as Isomorphism
It is useful to think of all computation as simulation or, more formally, as isomorphism.
This idea has a long history; for example, isomorphism is the basis of denotational semantics
[20]. Goguen [18] [19] describes views as isomorphisms, with mappings of unary and binary
operators. Our views allow broader mappings between concrete and abstract types and
include algorithms as well as operators. We use isomorphism to introduce the use of views.
Preparata and Yeh [56] give a definition and diagram for isomorphism of semigroups:
Given two semigroups G₁ = (S, ◦) and G₂ = (T, ∗), an invertible function φ : S → T
is said to be an isomorphism between G₁ and G₂ if, for every a and b in S,
φ(a ◦ b) = φ(a) ∗ φ(b).
[Diagram: a and b map under φ to φ(a) and φ(b); a ◦ b maps under φ to φ(a) ∗ φ(b), and the square commutes.]
Since φ is invertible, there is a computational relationship: a ◦ b = φ⁻¹(φ(a) ∗ φ(b)). If
a ◦ b is difficult to obtain directly, its value can be computed by encoding a and b using φ,
performing the computation φ(a) ∗ φ(b), and decoding or interpreting the result using φ⁻¹.
The diagram is said to commute [2] if the same result is obtained regardless of
which path between two points is followed, as shown in the diagram above.
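A familiar everyday instance of this encode-compute-decode pattern (our illustration, not from the paper) is multiplying positive reals through logarithms: φ = ln maps (R⁺, ×) to (R, +), and φ⁻¹ = exp decodes the result.

```python
import math

def mul_via_logs(a, b):
    # encode with phi = ln, operate in the image domain with +,
    # decode with phi-inverse = exp: a * b = exp(ln a + ln b)
    return math.exp(math.log(a) + math.log(b))

print(mul_via_logs(3.0, 4.0))  # ~ 12.0
```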
2.2 Views as Isomorphisms
Reuse of generic algorithms through views corresponds to computation as isomorphism. The
concrete type corresponds to the left-hand side of an isomorphism diagram; the view maps
the concrete type to the abstract type. The generic algorithm corresponds to an operation on
the abstract type. By mapping from the concrete type to the abstract type, performing the
operation on the abstract type, and mapping back, the result of performing the algorithm on
the concrete type is obtained. However, instead of performing the view mapping explicitly
Figure 3: Specializing a Generic
and materializing the abstract data, the mappings are folded into the generic algorithm to
produce a specialized version that operates directly on the concrete data (Fig. 3).
As an example, let concrete type pizza contain a value d that represents the diameter
of the (circular) pizza. Suppose abstract type circle assumes a radius value r. A view
from pizza to circle will specify that r corresponds to d=2. A simple generic procedure
that calculates the area of a circle can then be specialized by compilation through the view.
Because the view mapping is folded into the generic algorithm and optimized, the specialized
algorithm operates directly on the original data and, in this case, does no extra computation
(Fig. 4). Code to reference d in data structure pizza is included in the specialized code.
Figure 4: Example Specialization
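The effect of this specialization can be sketched in Python (our own illustration; GLISP performs the folding by compilation, not at run time, and the record layout here is hypothetical). The generic is written against the abstract basis variable radius; the view supplies radius = diameter/2; folding the view into the generic yields code that reads the concrete diameter field directly:

```python
import math

def generic_circle_area(radius):
    # generic procedure, written in terms of the abstract basis variable
    return math.pi * radius ** 2

# View: a pizza record implements circle with radius = diameter / 2
def pizza_area_via_view(pizza):
    return generic_circle_area(pizza["diameter"] / 2)

# Hand-"specialized" version: the view mapping folded in and simplified
def pizza_area_specialized(pizza):
    return math.pi * pizza["diameter"] ** 2 / 4

pizza = {"diameter": 10, "topping": "mushroom"}
assert math.isclose(pizza_area_via_view(pizza),
                    pizza_area_specialized(pizza))
```

The specialized version does no extra work at run time: the abstract circle is never materialized.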
2.3 Abstract Data Types and Views
An abstract type is an abstraction of a set of concrete types; it assumes an abstract record
containing a set of basis variables 3 and has a set of named generic procedures that are written
in terms of the basis variables. 4
Any data structure that contains the basis variables, with the same names and types,
implements the abstract type. To maximize reuse, constraints on the implementation must
be minimized: it should be possible to specialize a generic procedure for any legitimate
implementation of its abstract type.
Figure 5: Encapsulation of Concrete Type by View
A view encapsulates a concrete type and presents an external interface that consists of
the basis variables of the abstract type. Fig. 5 illustrates how view type pizza-as-circle
encapsulates pizza and presents an interface consisting of the radius of abstract type
circle. The radius is implemented by dividing field diameter of pizza by 2; other fields
of pizza are hidden. Code for this example is shown in the following section. In the general
case, the interface provides the ability to read and write each basis variable. A read or write
may be implemented as access to a variable of the concrete record or by a procedure that
emulates the read or write, using the concrete record as storage. 5
A view implements an abstract type if it emulates a record containing the basis variables.
Emulation is expressed by two properties:
1. Storage: After a value z is stored into basis variable v, reference to v will yield z.
3 Although it is not required, our abstract types usually specify a concrete record containing the basis
variables; this is useful as an example and for testing the generic procedures.
4 This definition of abstract type is different from the algebraic description of abstract types [11] as a
collection of abstract sorts, procedure signatures, and axioms. In the algebraic approach, an abstract type
is described without regard to its implementation. In our approach, an abstract implementation is assumed
because the abstract type has generic procedures that implement operations.
5 Views can be implemented in an object-oriented system by adapter or wrapper objects [16], where a
wrapper contains the concrete data, presents the interface of the abstract type, and translates messages
between the abstract and concrete types. Our views give the effect of wrappers without creating them.
2. Independence: If a reference to basis variable v yields the value z and a value is then
stored into some other basis variable w, a reference to v will still yield z.
These properties express the behavior expected of a record: stored values can be retrieved,
and storing into one field does not change the values of others.
If a view implements an abstract type exactly, as described by the storage and independence
properties, then any generic procedure will operate in the same way (produce the
same output and have the same side effects) when operating on the concrete data through
the view as it does when operating on a record consisting of the basis variables. That is, an
isomorphism holds between the abstract type and concrete type, and its diagram commutes.
This criterion is satisfied by the following variations of data:
1. Any record structure may be used to contain the variables. 6
2. Names of variables may differ from those of the abstract type: views provide name
translation, and the name spaces of the concrete and abstract types are distinct.
Some generics use only a subset of basis variables; only those that are used must be defined
in a view. An attempt to use an undefined basis variable is detected as an error.
A view in effect defines functions to compute basis variables from the concrete variables;
if a generic procedure is to "store into" basis variables, these functions must be invertible.
Simple functions can be inverted automatically by the compiler. For more complex cases, a
procedure can be defined to effect a store into a basis variable. The procedures required for
mathematical views may be somewhat complex: in the case of a polar vector (r, θ), where
the abstract type is a Cartesian vector (x, y), an assignment to basis variable x must update
both r and θ so that x will have the new value and y will be unchanged. A program MKV
("make view") [54] allows a user to specify mathematical views graphically by connecting
corresponding parts of the concrete type and a diagram associated with the abstract type;
MKV uses symbolic algebra to derive view procedures from the correspondences.
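A Python sketch of such an invertible view, under the assumption that a write to one basis variable must preserve the other (names hypothetical; in GLISP/MKV the view procedures are derived mechanically, not hand-written):

```python
import math

class PolarVector:
    """Concrete type: stores (r, theta)."""
    def __init__(self, r, theta):
        self.r = r
        self.theta = theta

class PolarAsCartesian:
    """View of a polar vector as a Cartesian (x, y) vector."""
    def __init__(self, p):
        self._p = p

    @property
    def x(self):
        return self._p.r * math.cos(self._p.theta)

    @x.setter
    def x(self, new_x):
        y = self.y                       # preserve the other basis variable
        self._p.r = math.hypot(new_x, y)
        self._p.theta = math.atan2(y, new_x)

    @property
    def y(self):
        return self._p.r * math.sin(self._p.theta)

    @y.setter
    def y(self, new_y):
        x = self.x
        self._p.r = math.hypot(x, new_y)
        self._p.theta = math.atan2(new_y, x)
```

As the text notes, the round trip through trigonometric functions is only approximate; the assertions below therefore use a floating-point tolerance.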
For wider reuse, the storage and independence properties must be relaxed slightly. Even
a simple change of representation, such as division of the diameter value by 2 in the pizza
example, changes the point at which numerical overflow occurs; there could also be round-off
error. Significant changes of representation should be allowed, such as representing a vector
in polar coordinates (r, θ) when the basis variables are in Cartesian coordinates (x, y). If
a polar vector is viewed as a Cartesian vector using the transformations x = r · cos(θ) and
y = r · sin(θ), the mapping is not exact due to round-off error, nor is it one-to-one; however,
it is sufficiently accurate for many applications. Ultimately, the user of the system must
ensure that the chosen representation is sufficiently accurate.
In some cases, a user might want to specify a contents type and let the system define a
record using it, e.g. an AVL tree containing strings. This is easily done by substituting the
contents type into a prototype record definition with the view mappings predefined.
The next section describes how views are implemented and compiled in GLISP.
6 This could include arrays, or sub-records reached by a fixed sequence of pointer traversals.
3 GLISP Language and Compiler
GLISP [46, 47, 48, 49] ("Generic Lisp"), a high-level language with abstract data types, is
compiled into Lisp; it has a language for describing data in Lisp and in other languages.
GLISP is described only briefly here; for more detail, see [49] and [46].
3.1 Data-independent Code
A GLISP type is analogous to a class in object-oriented programming (OOP); it specifies
a data structure and a set of methods. For each method, there is a name (selector) and a
definition as an expression or function. As in OOP, there is a hierarchy of types; methods
can be inherited from ancestor types. The methods of abstract types are generic procedures.
In most languages, the syntax of program code depends on the data structures used; this
prevents reuse of code for alternative implementations of data. GLISP uses a single Lisp-like
syntax. In Lisp, a function call is written inside parentheses: (sqrt x). A similar syntax
(feature object) is used in GLISP to access any feature of a data structure [49]:
1. If feature is the name of a field of the type of object, data access is compiled.
2. If feature is a method name (selector) of the type of object, the method call is compiled.
3. If feature is the name of a view of the type of object, the type of object is locally
changed to the view type.
4. If feature is a function name, the code is left unchanged.
5. Otherwise, a warning message is issued that feature is undefined.
This type-dependent compilation allows variations in data representation: the same code
can be used with data that is stored for one type but computed for another type. For
example, type circle can assume radius as a basis variable, while a pizza object can store
diameter and compute radius.
The GLISP compiler performs type inference as it compiles expressions. When the type
of an object is known at compile time, reference to a feature can be compiled as in-line code
or as a call to a specialized generic. Specialized code depends on the types of arguments
to the generic. Compilation by in-line expansion and specialization is recursive at compile
time and propagates types during the recursion; this is an important feature. Recursive
expansion allows a small amount of source code to expand into large output code; it allows
generic procedures to use other generics as subroutines and allows higher-order procedures
to be expanded through several levels of abstraction until operations on data are reached.
Symbolic optimization folds operations on constants [62], performs partial evaluation [7]
[12] and mathematical optimization, removes dead code, and combines operations to improve
efficiency. It provides conditional compilation, since a conditional is eliminated when the
test can be evaluated at compile time. Optimization often eliminates operations associated
with a view, so that use of the view has little or no cost after compilation.
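The effect of this conditional elimination can be imitated at run time by specializing a closure once (a rough Python analogy, not the GLISP mechanism; names hypothetical):

```python
def make_sort_before(key, ascending):
    """Specialize the comparator: the direction test is evaluated here,
    once, so the returned function contains only the surviving branch,
    analogous to GLISP eliminating the `if` at compile time."""
    if ascending:
        return lambda a, b: key(a) < key(b)
    return lambda a, b: key(a) > key(b)
```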
3.2 Views in GLISP
A view [46, 49, 50] is expressed as a GLISP type whose record is the concrete type. The
abstract type is a superclass of the view type, allowing generics to be inherited 7 . The view
type encapsulates the concrete type and defines methods to compute basis variables of the
abstract type. As specialized versions of generics are compiled, the compiler caches them
in the view type. Examples of abstract type circle, concrete type pizza, and view type
pizza-as-circle are shown below; each gives the name of the type followed by its data
structure, followed by method (prop), view, and superclass specifications.
(circle (list (center vector) (radius real)))

(pizza (cons (diameter real) (topping symbol))
  views ((circle pizza-as-circle)) )

(pizza-as-circle (p pizza)
  prop ((radius ((diameter p) / 2)))
  supers (circle))
pizza-as-circle encapsulates pizza and makes it appear as a circle; its record is named
p and has type pizza. It defines basis variable radius as the diameter of p divided by 2 and
specifies circle as a superclass; it hides other data and methods of pizza 8 . The following
example shows how area defined in circle is compiled through the view; GLISP function
t1 is shown followed by compiled code in Lisp.
(gldefun t1 (pz:pizza) (area (circle pz)))
result type: REAL
The code (circle pz) changes the type of pz to the view type pizza-as-circle. The area
method is inherited from circle and expanded in-line; basis variable radius is expanded
using diameter, which becomes a data access (CAR PZ).
If a view defines all basis variables in terms of the concrete type, then any generic
procedure of the abstract type can be used through the view. Because compilation by
GLISP is recursive, generic procedures can be written using other generics as subroutines, as
7 Only methods are inherited; data structures, and therefore state variables, are not.
8 pizza-as-circle fails to define the basis variable center; this is allowable. An attempt to reference
an undefined basis variable is detected as an error.
long as the recursion terminates at compile time. 9 A view type may redefine some methods
that are generics of the abstract type; this may improve efficiency. For example, a Cartesian
vector defines magnitude as a generic, but this value is stored as r in a polar (r, θ) vector.
When a basis variable is assigned a value, the compiler produces code as follows:
1. If the basis variable corresponds to a field of the concrete type, a store is generated.
2. If the basis variable is defined by an expression that can be inverted algebraically, the
compiler does so. For example, assigning a value r to the radius of a pizza-as-circle
causes 2r to be stored into the diameter of record pizza.
3. A procedure can be defined in the view type to accomplish assignment to a basis variable
while maintaining the storage and independence properties. MKV [54] produces
such procedures automatically.
A view can define a procedure to create an instance of the concrete type from a set of
basis variables of the abstract type [54]. This is needed for generics that create new data,
e.g. when two vectors are added to produce a new vector.
Several points about views are worth noting:
1. In general, it is not the case that an object is its view; rather, a view represents some
aspect of an object. The object may have other data that are not involved in the view.
2. A view provides name translation. This removes any necessity that concrete data use
particular names and eliminates name conflicts.
3. A view can specify representation transformation.
4. There can be several ways of viewing a concrete type as a given abstract type. For
example, the same records might be sorted in several ways for different purposes.
4 Clusters of Views
Several languages (e.g. Ada, Modula-2, ML, and C++) provide a form of abstract data type
that is like a macro: an instance is formed by substituting a concrete type into it, e.g. to
make a linked list whose contents is a user type. This technique allows only limited software
reuse. We seek to extend the principle that a generic should be reusable for any reasonable
implementation of its data to generics that involve several abstract types.
Some data structures that might be regarded as a single concept, such as a linked list,
involve several types: a linked list has a record type and a pointer type. Many languages
finesse the need for two types by providing a pointer type that is derived from the record
9 Recursion beyond a certain depth is trapped and treated as a compilation error.
type. In general, however, a pointer can be any data that uniquely denotes a record in a
memory: a memory address, a disk address, an array index, an employee number, etc. To
maximize generality, the record and pointer must be treated as distinct types.
A view maps a single concrete type to a single abstract type. A cluster collects a set of
views that are related because they are used in a generic algorithm. For example, a polygon
can be represented as a sequence of points; the points could be Cartesian, polar or some
type that could be viewed as a point (e.g. a city), and the sequence could be a linked list,
array, etc. There should not be a different generic for each combination of types; a single
generic should be usable for any combination. A cluster collects the views used by a generic
algorithm in a single place, allows inheritance and specialization of generics through the
views, and is used in type inference.
A cluster has a set of roles, each of which has a name and a corresponding view type;
for example, cluster linked-list has roles named record and pointer. A cluster may
have super-clusters; each view type that fills a role specifies as a superclass the type that
fills the same role in the super-cluster, allowing inheritance of methods from it. The view
types also define methods or constants 10 needed by generic procedures; for example, cluster
sorted-linked-list requires specification of the field or property of the record on which
to sort and whether the sort is ascending or descending.
4.1 Example Cluster: Sorted Linked List
This section gives an example record, shows how a cluster is made for it using VIEWAS, and
shows how a generic is specialized. We begin by showing the user interaction with VIEWAS
to emphasize its ease of use; a later section explains how VIEWAS works.
The C structure of example record myrec and its corresponding GLISP type are shown
below. 11
struct myrecord {
    int             color;
    char            *name;
    int             size;
    struct myrecord *next;
};

(myrec (crecord myrec
         (color integer)
         (name string)
         (size integer)
         (next (- myrec)) ))

10 A constant is specified as a method that returns a constant value.
11 The GLISP type could be derived automatically from the C declaration, but this is not implemented.
Suppose the user wishes to view myrec as a sorted-linked-list and obtain specialized
versions of generics for an application. The user invokes VIEWAS to make the view cluster:
(viewas 'sorted-linked-list 'myrec)
VIEWAS determines how the concrete type matches the abstract type; it makes some
choices automatically and asks the user for other choices: 12
Choice for LINK: NEXT
Specify choice for SORT-VALUE
Choices are: (COLOR NAME SIZE)
name
Specify choice for SORT-DIRECTION
Choices are: (ASCENDING DESCENDING)
ascending
VIEWAS chooses field next as the link of the linked-list record since it is the only
possibility; it asks the user for the field on which to sort and the direction of sorting. VIEWAS
requires only a few simple choices by the user; the resulting cluster MYREC-AS-SLL and two
view types are shown in Fig. 6. Cluster MYREC-AS-SLL has roles named pointer and record
that are filled by corresponding view types; MYREC-AS-SLL lists cluster SLL (sorted linked
list) as a super-cluster.
View type MYREC-AS-SLL-POINTER is a pointer to view type MYREC-AS-SLL-RECORD; it
has the corresponding type SLL-POINTER of cluster SLL as a superclass. The generics of SLL
are defined as methods of SLL-POINTER. View type MYREC-AS-SLL-RECORD has data named
Z16 13 of type MYREC; it lists type SLL-RECORD as a superclass and defines the LINK and
other properties, such as SORT-VALUE and SORT-DIRECTION, in terms of the concrete type.
After making a view cluster, the user can obtain specialized versions of generics. We do
not expect that a user would read the code of generics or the specializations derived from
them, but we present a generic and its specialization here to illustrate the process.
Fig. 7 shows generic sll-insert; it uses generics rest defined for linked-list (the
value of field link) and sort-before. The notation (-. ptr) is short for dereference
of pointer ptr. sort-direction is tested in this code; however, since this is constant at
compile time, the compiler eliminates the if and keeps only one sort-before test, which is
expanded depending on the type of sort-value.
12 Lisp converts symbols to upper-case, so upper-case and lower-case representations of the same symbol
are equivalent. In general, user input is shown in lower-case, while Lisp output of symbols is upper-case.
13 Names with numeric suffixes are new, unique names generated by the system. The unique name Z16
encapsulates MYREC and prevents name conflicts in the view type because features of MYREC can be accessed
only via that name.
[Figure 6: Cluster MYREC-AS-SLL, with roles POINTER and RECORD filled by view types
MYREC-AS-SLL-POINTER and MYREC-AS-SLL-RECORD.]

[Figure 7: Generic: Insert into Sorted Linked List.]
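For comparison, a Python analogue of the generic insert (hypothetical names; in GLISP the view supplies link, sort-value, and sort-direction as compile-time constants, so the direction test disappears during specialization rather than running as a parameter):

```python
class Node:
    def __init__(self, name, nxt=None):
        self.name = name
        self.next = nxt

def sll_insert(lst, new, link="next", key=lambda r: r.name, ascending=True):
    """Insert `new` into the sorted linked list headed by `lst`;
    returns the (possibly new) head. `link`, `key`, and `ascending`
    play the roles of the view's LINK, SORT-VALUE, and SORT-DIRECTION."""
    if ascending:
        before = lambda a, b: key(a) < key(b)
    else:
        before = lambda a, b: key(a) > key(b)
    ptr, prev = lst, None
    while ptr is not None and before(ptr, new):
        prev, ptr = ptr, getattr(ptr, link)
    setattr(new, link, ptr)
    if prev is not None:
        setattr(prev, link, new)
        return lst
    return new
```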
4.2 Uses of Clusters
Clusters serve several goals:
1. Clusters allow independent specification of the several views used in a generic.
2. A generic that performs a given function should be written only once; generics should
reuse other generics when possible. Clusters allow generics to be inherited.
3. Clusters are used to derive the correct view types as generics are specialized.
4.2.1 Inheritance through Clusters
It is desirable to inherit and reuse generics when possible. In some cases, a cluster can be
considered to be a specialization of another cluster, e.g. sorted-linked-list is a specialization
of linked-list. Some generics defined for linked-list also work for a sorted-linked-list: the
length of a linked-list is the same whether it is sorted or not. Generics should be defined
at the highest possible level of abstraction to facilitate reuse.
A cluster can specify super-clusters. Fig. 9 shows the inheritance among clusters for the
MYREC-AS-SLL example; each cluster can inherit generics from the clusters above.
The mechanism of inheritance between clusters is simply inheritance between types. Each
type that fills a role in a cluster lists as a superclass the type that fills the corresponding
MYREC *sll_insert_1 (lst, new)
  MYREC *lst, *new;
  {
    MYREC *ptr, *prev;
    ptr = lst;
    prev = (MYREC *) NULL;
    while ( ptr &&
            strcmp(ptr->name, new->name) < 0 )
      {
        prev = ptr;
        ptr = ptr->next;
      }
    new->next = ptr;
    if (prev != (MYREC *) NULL)
      {
        prev->next = new;
        return lst;
      }
    else
      return new;
  }

Figure 8: Specialized Procedure in C
role in the super-cluster. These inheritance paths are specified manually for abstract types;
VIEWAS sets up the inheritance paths when it creates view clusters.
Inheritance provides defaults for generic procedures and constants. For example, generics
of sorted-linked-list use predicate sort-before to compare records; a generic sort-before
is defined as < applied to the sort-value of records. Predicate <, in turn, depends on the
type of the sort-value, e.g., string comparison is used for strings. A minimal specification of
a sorted-linked-list can use the default sorting predicate, but an arbitrary sort-before
predicate can be specified in the record view type if desired.
In some cases, inheritance of generics from super-clusters should be preventable. For
example, nreverse (destructive reversal of the order of elements in a linked list) is defined
for linked-list but should not be available for a sorted-linked-list, since it destroys the sorting
order. Prevention of inheritance can be accomplished by defining a method as error, in
which case an attempt to use the method is treated as a compilation error.
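Sketched in Python class terms (an analogy; GLISP defines the method as error and reports it at compilation time rather than raising at run time):

```python
class LinkedListOps:
    def nreverse(self, head):
        """Destructively reverse a linked list; returns the new head."""
        prev = None
        while head is not None:
            head.next, prev, head = prev, head, head.next
        return prev

class SortedLinkedListOps(LinkedListOps):
    def nreverse(self, head):
        # Inheritance deliberately blocked: reversal destroys sort order.
        raise TypeError("nreverse is not available for sorted linked lists")
```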
[Figure 9: Inheritance between Clusters: record-with-pointer, linked-list,
sll (sorted-linked-list), each cluster inheriting from the one above.]
4.2.2 Type Mappings
A cluster specifies a set of related types. A generic procedure is specified in terms of
abstract types, but when it is specialized, the corresponding view types must be sub-
stituted. For example, at the abstract level, a linked-list-record contains field link
whose type is linked-list-pointer, and dereference of a linked-list-pointer yields
a linked-list-record. When a generic defined for linked-list is specialized, these types
must be replaced by the corresponding view types of the cluster. Care is required in defining
generics and view types to ensure that each operation produces a result of the correct type;
otherwise, the generic will not compile correctly when specialized.
In general, if a generic function f : a_1 -> a_2, with abstract argument and result types
a_1 and a_2, is specialized for concrete types t_1 and t_2 using views v_1 and v_2, the
specialized function must have signature f_s : t_1 -> t_2.
Smith [68] uses the term
theory morphism for a similar notion. Dijkstra [15] uses the term coordinate transformation
for a similar notion, in which some variables are replaced by others; Gries [23] uses the term
coupling invariant for the predicate that describes the relation (between the abstract types
or variables and their concrete counterparts) that is to be maintained by functions. Consider
the following generic:
(gldefun generic-cddr (l:linked-list)
(rest (rest l)) )
generic-cddr follows the link of a linked-list record twice: rest is the link value. 14 Now
suppose that a concrete record has two pointer fields, so that two distinct linked-list clusters
can be made using the two pointer fields. To specialize generic-cddr for both, rest must
produce the view type, which defines the correct link, rather than the concrete type.
Fig. 10 abstractly illustrates type mappings as used in the generic-cddr example. The
figure shows concrete types t_i that are viewed as abstract types a_i by views v_i. Suppose
function f : a_1 -> a_2 is composed with function g : a_2 -> a_3. The corresponding
specialized functions will be f_s : t_1 -> t_2 and g_s : t_2 -> t_3. Because the views
are virtual and operations are actually performed on the concrete data, the compiled code
will perform the composition of g_s with f_s. However, the result of function f_s, as seen
by the compiler, must be the view type v_2(t_2), because function g is defined for abstract
type a_2 and is inherited by v_2(t_2), but is undefined for concrete type t_2.

14 rest (or cdr) and cddr are Lisp functions; we use Lisp names for linked-list generics that are similar.

[Figure 10: Cluster: Views between Type Families]
The roles of a cluster are used within generics to specify types that are related to a known
view type. Each view type has a pointer to its cluster, and the cluster's roles likewise point
to the view types; therefore, it is possible to find the cluster from a view type and then
to find the view type corresponding to a given role of that cluster. The GLISP construct
(clustertype role code) returns at compile time the type that fills role in the cluster to
which the type of code belongs; this construct can be used as a type specifier in a generic,
allowing use of any view type of the cluster. For example, a generic's result type can be
declared to be the view type that fills a specified role of the cluster to which an argument
of the generic belongs. clustertype can also be used to declare local variable types and to
create new data of a concrete type through its view. Thus, type signatures and types used
within generics can be mapped from the abstract level to the view level so that specialization
of generics is performed correctly.
5 View Cluster Construction: VIEWAS
A view cluster may be complex, and detailed knowledge of the generic procedures is needed
to specify one correctly. We expect that abstract types and view clusters will be defined
by experts; however, it should be simple for programmers to reuse the generics. VIEWAS
makes it easy to create view clusters without detailed understanding of the abstract types
and generics. Its inputs are the name of the view cluster and concrete type(s). VIEWAS
determines correspondences between the abstract types of the cluster and the concrete types,
asking questions as needed; from these it creates the view cluster and view types.
(gldefviewspec
 '(sorted-linked-list (sorted-sequence)
    sll t
    ((record anything))
    ((type pointer (- record))
     (prop link (partof record pointer) result pointer)
     (prop copy-contents-names (names-except record (link)) )
     (prop sort-value (choose-prop-except record (link)))
     (prop sort-direction (oneof (ascending descending))) )
    ((pointer pointer)
     (record record
       prop (link copy-contents-names sort-value
             sort-direction)))
    (sll) ))

Figure 11: View Specification used by VIEWAS
11: View Specification used by VIEWAS
Fig. 11 shows the view specification for a sorted linked list. ((record anything)) is a
list of formal parameters that correspond to types given in the call to VIEWAS; argument
record can have a type matching anything. Next is a list of names and specifications
that are to be matched against the concrete type; following that is a pattern for the output
cluster, which is instantiated by substitution of the values determined from the first part.
Finally, there is a list of super-clusters of the cluster to be created, (sll).
The previous example, (viewas 'sorted-linked-list 'myrec), specifies the name of
the target cluster and the concrete type myrec to be matched. VIEWAS first matches the
record with the myrec argument; then it processes the matching specifications in order:
1. (type pointer (- record))
The first thing to be determined is the type of the pointer to the record. The pointer
defaults to a standard pointer to the record, but a different kind of pointer, such as an
array index, will be used if it has been defined.
2. (prop link (partof record pointer) result pointer)
The link must be a field of the record, of type pointer. Type filtering restricts the
possible matches; if there is only one, it is selected automatically.
3. (prop copy-contents-names (names-except record (link)) )
These are the names of all fields of the record other than the link; the names are used
by generics that copy the contents of a record.
4. (prop sort-value (choose-prop-except record (link)) )
The sort-value is compared in sorting; it is chosen from either fields or computed
(method) values defined for the record type, excluding the field that is the link. A
menu of choices is presented to the user.
5. (prop sort-direction (oneof (ascending descending)) )
This must be ascending or descending; the user is asked to choose.
After the items have been matched with the concrete type, the results are substituted into
a pattern to form the view type cluster. Fig. 6 above shows cluster myrec-as-sll and view
types myrec-as-sll-pointer and myrec-as-sll-record produced by VIEWAS. Properties
needed by generics of sorted-linked-list, such as sort-value, are defined in terms of the
concrete type. Generics defined for sorted-linked-list explicitly test sort-direction;
since this value is constant, only the code for the selected direction is kept. This illustrates
that switch values in view types can select optional features of generics. Weide [73] notes
that options in reusable procedures are essential to avoid a combinatoric set of versions.
For example, the Booch component set [9] contains over 500 components; Batory [5] has
identified these as combinations of fewer design decisions.
The linked-list library has 28 procedures; one view cluster allows specialization of any
of them. VIEWAS requires minimal input; it presents sensible choices, so the user does not
need to understand the abstract types in detail. In effect, view specifications use a special-purpose
language that guides type matching; this language is not necessarily complete, but
is sufficient for a variety of view clusters. Some specifications prevent type errors and often
allow a choice to be made automatically, as in the case of the link field. Others, e.g.
copy-contents-names, perform bookkeeping to reduce user input. Specifications such as
that for sort-value heuristically eliminate some choices; additional restrictions might be
helpful, e.g., sort-value could require a type that has an ordering. VIEWAS is not a
powerful type matcher, but it tends to be self-documenting, eliminates opportunities for
errors, and is effective with minimal input. We assume that the user understands the
concrete type and understands the abstract types well enough to make the choices presented.
VIEWAS is intended for views of data structures; a companion program MKV [54] uses a
graphical interface and algebraic manipulation of equations to make mathematical views.
We have also investigated creation of programs from connections of diagrams that represent
physical and mathematical models [51].
6 Higher-order Code
6.1 Compound Structures
Abstract types may be used in larger structures. For example, several kinds of queue can be
made from a linked-list: front-pointer-queue, with a pointer to the front of a linked list,
two-pointer-queue, with pointers to the front and the last record, and end-pointer-queue,
with a pointer to the last record in a circularly linked list. A sequence of queue, in turn,
can be used for a priority-queue. Generics for compound structures are often small and
elegant; for example, insertion in a priority queue is:
(gldefun priority-queue-insert
(q:priority-queue n:integer new)
(insert (index q n) new) )
The code (index q n) indexes the sequence by priority n to yield a queue. insert is
interpreted relative to the type of queue. This small function expands into larger code
because its operations expand into operations on component structures, which are further
expanded. A single definition of a generic covers the combinatoric set of component types.
6.2 Generic Loop Macros
A language with abstract types must provide loops over collections of data. Alphard [66]
and CLU [38] allow iterators for concrete types; Interlisp [30] provided a flexible looping
construct for Lisp lists. SETL [14] provides sets and maps and compiles loops over them,
with implementations chosen by the compiler [63]. Generic procedures need loops that are
independent of data structure (e.g., array, linked list, or tree); this is done by loop macros.
Expansion of generic procedures obeys strict hierarchy and involves independent name
spaces. In expanding a loop, however, code specified in the loop statement must be interspersed
with the code of the iterator, and the iterator must introduce new variables at the
same lexical level; macros are used for these reasons. Names used by the macro are changed
when necessary to avoid name conflicts.
GLISP provides a generic looping statement of the form:
(for item in sequence [when p(item)] verb f(item) )
When this statement is compiled, iterator macros defined for the verb and for the type of
sequence are expanded in-line. For example, consider:
(gldefun t3 (r:myrec)
(for x in (sorted-linked-list r)
sum (size x)))
This loop iterates through a sequence r using its sorted-linked-list view and inheriting
the linked-list iterator; it sums the size of each element x of the linked list. Macros are
provided for looping, summation, max, min, averaging, and statistics. Collection macros
collect data in a specified form, making it possible to convert from one kind of collection to
another (e.g., from an array to a linked list).
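The idea that the loop body is independent of the data structure can be sketched in Python (hypothetical names; GLISP expands the iterator in-line at compile time rather than using a run-time generator):

```python
class Sym:
    def __init__(self, size, nxt=None):
        self.size = size
        self.next = nxt

def iterate_linked_list(head, link="next"):
    """Stands in for the linked-list view's iterator macro."""
    while head is not None:
        yield head
        head = getattr(head, link)

def sum_sizes(items):
    """The loop body: independent of the underlying structure."""
    return sum(x.size for x in items)
```

The same `sum_sizes` works over the linked-list iterator or over a plain array (Python list), mirroring how one generic loop serves multiple collection kinds.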
Data structures may have several levels of structure. For example, a symbol table might
be constructed using an array of buckets, where the array is indexed by the first character of
a symbol and each array element is a pointer to a bucket, i.e., a sorted linked list of symbols.
A double-iterator macro is defined that composes two iterators, allowing a loop over a
compound structure:
(gldefun t4 (s:symbol-table)
(for sym in s sum (name sym)))
t4 concatenates the names of all the symbols. The loop expands into nested loops (first
through the array of buckets, then through each linked list) and returns a string (since a
name is a string and summing strings concatenates them). The compiled code is 23 lines of Lisp.
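A Python sketch of the composed iteration (hypothetical field names; GLISP expands the nested loops in-line instead of composing generators at run time):

```python
from itertools import chain

def iterate_bucket(head):
    """Iterate one bucket: a sorted linked list of symbols,
    here dicts with hypothetical fields `name` and `next`."""
    while head is not None:
        yield head
        head = head["next"]

def symbol_names(table):
    """Compose the array iterator with the per-bucket iterator,
    as the double-iterator macro does, and concatenate the names."""
    return "".join(s["name"] for s in
                   chain.from_iterable(iterate_bucket(b) for b in table))
```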
6.3 Copying and Representation Change
The GLISP compiler can recursively expand code through levels of abstraction until operations
on data are reached; it interprets code relative to the types to which it is applied. In
Lisp, funcall is used to call a function that is determined at runtime. In GLISP, a funcall
whose function argument is constant at compile time is treated like other function calls, i.e.,
it is interpreted relative to its argument types. This makes it possible to write higher-order
code that implements compositions of views.
The contents of a linked-list record may consist of several items of different types. Generic
copy-list makes a new record and copies the contents fields into it, requiring several
assignments. This is accomplished by a loop over the copy-contents-names defined in
the view type. For each name in copy-contents-names, a funcall of that name on the
destination record is assigned the value of a funcall of that name on the source record.
(for name in (copy-contents-names (-. l)) do
  ((funcall name (implementation (-. l)))
   := (funcall name (implementation (-. m))) ))
Since the list of names is constant, the loop is unrolled. Each funcall then has a constant
name that is interpreted as a field or method reference; the result is a sequence of assignments.
Since the "function call" on each side of the assignment statement is interpreted relative
to the type to which it is applied, this higher-order code can transfer data to a different record
type and can change the representation during the transfer, e.g., by converting the radius of
a circle in one representation to the area in another representation, or by converting the data
to reflect different representations or units of measurement [53]. For example, consider two
different types cira and cirb, each of which has a color and lists circle as a superclass:
(cira (cons (color symbol)
        (cons (nxt (- cira))
              (radius integer)))
  supers (circle))

(cirb (list (diameter roman)
        (color symbol)
        (next (- cirb)))
  prop ((radius (diameter / 2)))
  supers (circle))
These types have different records, and cira contains an integer radius while cirb contains
diameter represented in roman numerals. After viewing each type as a linked-list, it is
possible to copy a list from either representation to the other. This illustrates how higher-order
code is expanded. First, the loop is unrolled into two assignment statements that
transfer color and diameter from source record to destination record; then diameter is
inherited from circle for the source record and encoded into Roman numerals for the
destination record:
(gldefun t5 (u:cira &optional v:cirb)
(copy-list-to (linked-list u) (linked-list v)))
(t5 '(RED (GREEN (BLUE NIL . 12) . 47) . 9))
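The mechanism can be illustrated with a small Python analogue (the class names `CirA`/`CirB`, the `set_`-prefix convention, and `to_roman` are invented for this sketch; GLISP itself does this by compile-time unrolling of the funcall loop):

```python
# Roman-numeral encoder used when storing into the cirb-style record.
def to_roman(n):
    vals = [(1000, 'M'), (900, 'CM'), (500, 'D'), (400, 'CD'), (100, 'C'),
            (90, 'XC'), (50, 'L'), (40, 'XL'), (10, 'X'), (9, 'IX'),
            (5, 'V'), (4, 'IV'), (1, 'I')]
    out = []
    for v, s in vals:
        while n >= v:
            out.append(s)
            n -= v
    return ''.join(out)

class CirA:
    """Stores a color and an integer radius; diameter is derived."""
    def __init__(self, color, radius):
        self.color, self.radius = color, radius
    @property
    def diameter(self):            # "inherited" from the circle abstraction
        return 2 * self.radius

class CirB:
    """Stores a color and the diameter encoded as a Roman numeral."""
    def __init__(self):
        self.color = self.diameter = None
    def set_diameter(self, d):     # encode while storing
        self.diameter = to_roman(d)

def copy_contents(src, dst, names):
    # analogue of the funcall loop: each name is read through the source
    # view and stored through the destination view
    for name in names:
        value = getattr(src, name)
        setter = getattr(dst, 'set_' + name, None)
        if setter:
            setter(value)
        else:
            setattr(dst, name, value)

a = CirA('RED', 6)
b = CirB()
copy_contents(a, b, ['color', 'diameter'])
```

After the copy, `b` holds the same circle, but with its diameter derived from the radius and re-encoded in the destination's representation.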
6.4 Several Views
Viewing concrete data as a conceptual entity may involve several views; e.g., a polygon can
be represented as a sequence of points. Viewing a concrete type as a polygon requires a view
of the concrete type t1 as a sequence of some type t2, and a view of t2 as a vector (Fig. 12).
The view from the element of the concrete sequence to a vector is specified declaratively
by giving the name of the view. This view name is used in a funcall inside the polygon
procedures; it acts as a type change function that changes the type of the sequence element
to its view as a point. This effectively implements composition of views. A single generic can
be specialized for a variety of polygon representations. For example, a string of characters
can be viewed as a polygon by mapping a character to its position on a keyboard: does the
string "car" contain the character "d" on the keyboard?
Figure 12: Polygon: Sequence of Vector
6.5 Application Languages
The Lisp output of our system can be mechanically transformed into other languages,
including C, C++, Java, and Pascal. GLISP allows a target language record as a type;
accesses to such records are compiled into Lisp code that can be transformed into target
language syntax. The code can also run within Lisp to create and access simulated records;
this allows the interactive Lisp environment and our programming and data-display tools to
be used for rapid prototyping.
Conversion of Lisp to the target language is done in stages. Patterns are used to transform
idioms into corresponding target idioms and to transform certain Lisp constructs
(e.g., returning a value from an if statement) into constructs that will be legal in the
target language. These Lisp-to-Lisp transformations are applied repeatedly until no further
transformations apply. A second set of patterns transforms the code into nicely-formatted
target language syntax. The result may be substantially restructured.
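The fixpoint strategy of "apply transformations repeatedly until no further transformations apply" can be sketched as follows (the two rewrite rules are invented toy examples, not the system's actual pattern set):

```python
import re

# Toy rules: rewrite a couple of Lisp idioms toward C-like syntax.
RULES = [
    (re.compile(r'\(1\+ (\w+)\)'), r'(\1 + 1)'),
    (re.compile(r'\(setq (\w+) (.+)\)'), r'\1 = \2;'),
]

def rewrite(code, rules):
    # apply rules repeatedly until a fixpoint: no rule changes the text
    changed = True
    while changed:
        changed = False
        for pat, repl in rules:
            new = pat.sub(repl, code)
            if new != code:
                code, changed = new, True
    return code
```

For example, `rewrite('(setq x (1+ y))', RULES)` needs two rule firings before reaching the fixpoint `x = (y + 1);`.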
A C procedure sll-insert-1 was shown in Fig. 8. This code is readable and has no
dependence on Lisp. Versions of generic procedures containing a few hundred lines of code
have been created in C, C++, Java, and Pascal. The C version of the convex hull program,
described below, runs 20 times faster than the Lisp version.
6.6 A Larger Example: Convex Hull
The convex hull of a set of points is the smallest convex polygon that encloses them. Kant
studied highly qualified human subjects who wrote algorithms for this task. All subjects
took considerable time; some failed or produced an inefficient solution. Although convex
hull algorithms are described in textbooks [64] and in the literature, getting an algorithm
from such sources is difficult: it is necessary to understand the algorithm, and a published
description may omit details of the algorithm or even contain errors [22]. A hand-coded
version of a published algorithm requires testing or verification.
Fig. 13 illustrates execution of a generic convex hull algorithm. We describe the algorithm
Figure 13: Convex Hull of Points
and illustrate its use on cities viewed as points. The algorithm uses several views of the same
data and reuses other generics; it is similar to the QUICKHULL algorithms [57].
A convex hull is represented as a circularly linked list of vertex points in clockwise order
(Fig. 14). An edge is formed by a vertex and its successor. Associated with a vertex is a
list of points that may be outside the edge; an edge is split (Fig. 15) by finding the point
that is farthest to the left of the edge. If there is such a point, it must be on the convex
hull. The edge is split into two edges, from the original vertex to the new point and from the
new point to the end vertex, by splicing the new point into the circularly linked list. The
subsets of the points that are to the left of each new edge are collected and stored with the
corresponding vertex points, and the edges are split again.
The algorithm is initialized by making the points with minimum and maximum x values
into a two-point polygon with all input points associated with each edge; then each edge is
split. Finally, the vertices are collected as a (non-circular) linked list. Fig. 13 shows the
successive splittings. The algorithm rapidly eliminates most points from consideration.
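The algorithm's core can be condensed into a Python sketch (this uses plain tuples and a result list rather than the paper's circularly linked records, and lexicographic `min`/`max` to pick the extreme-x starting points):

```python
def leftof_distance(p1, p2, p):
    # signed distance proxy: positive iff p is left of the directed line p1->p2
    return (p2[0]-p1[0])*(p[1]-p1[1]) - (p2[1]-p1[1])*(p[0]-p1[0])

def _split(p1, p2, points, hull):
    # keep only points strictly left of the edge; the farthest one is a vertex
    candidates = [p for p in points if leftof_distance(p1, p2, p) > 0]
    if not candidates:
        hull.append(p1)          # edge p1->p2 is final; emit its start vertex
        return
    far = max(candidates, key=lambda p: leftof_distance(p1, p2, p))
    _split(p1, far, candidates, hull)
    _split(far, p2, candidates, hull)

def convex_hull(points):
    pmin, pmax = min(points), max(points)   # extreme-x points start the hull
    hull = []
    _split(pmin, pmax, points, hull)
    _split(pmax, pmin, points, hull)
    return hull
```

As in the paper's description, each split discards the points not left of either new edge, so most points are eliminated quickly.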
Fig. 16 shows the type cluster used for convex hull. The line formed by a point and
its successor is declared as a virtual line-segment [46]; this is another way of specifying a
view. This allows the polygon to be treated simultaneously as a sequence of vertices and as
a sequence of edges. Only vertices are represented, but the algorithm deals with edges as
well. The internal-record specifies circular-linked-list under property viewspecs;
this causes VIEWAS to be called automatically to make the circular-linked-list view
Figure 14: Convex Hull as Circular Linked List of Points
Figure 15: Splitting an Edge
so that procedure splice-in and the iterator of that view can be used.
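A minimal Python sketch of the circular-linked-list view's `splice-in` and iterator (the `Node` class and function names are invented for illustration):

```python
class Node:
    def __init__(self, val):
        self.val = val
        self.next = self          # a single node is a circular list of one

def splice_in(node, new_node):
    # insert new_node into the circular list immediately after node
    new_node.next = node.next
    node.next = new_node

def circular_items(start):
    # iterate exactly once around the circular list
    n = start
    while True:
        yield n.val
        n = n.next
        if n is start:
            break
```

This is how a new hull vertex is linked between an existing vertex and its successor without disturbing the rest of the list.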
Fig. 17 shows generic procedure convex-hull. This procedure initializes the algorithm
by finding the two starting points; use of iterators min and max simplifies the code. Next,
an initial circularly linked list is made by linking together the starting points, and function
split is called for each. Finally, the vertex points are collected as a non-circular list.
Fig. 18 shows generic cvh-split, which uses iterator max and signed leftof-distance
from a line-segment to a point; this generic is inherited since the line associated with a
vertex is a virtual line-segment. leftof-distance expands as in-line code that operates
directly on the linked list of vertex records; we can think of a vertex and its successor as
a line-segment without materializing one. Operator +- specifies a push on a list to collect
points. cvh-split also uses procedure splice-in of the circular-linked-list view of
the points. The split algorithm views the same data in three different ways: as a vertex
point, as an edge line-segment, and as a circularly linked list. We believe that use of several
views is common and must be supported by reuse technologies.
A programmer should not have to understand the algorithm to reuse it. The concrete
(gldefclusterc
  'convex-hull-cluster
  '((source-point (convex-hull-source-point vector))
    (source-collection (convex-hull-source-collection
                         (listof convex-hull-source-point)
                         prop ((hull convex-hull specialize t))))
    (internal-record (convex-hull-internal-record
                       (list (pt convex-hull-source-point)
                             (next (- convex-hull-internal-record))
                             (points (listof convex-hull-source-point)))
                       prop ((line ((virtual line-segment with ...))))
                       msg ((split cvh-split specialize t))
                       viewspecs (circular-linked-list)))))
Figure 16: Type Cluster for Convex Hull
data might not be points per se and might have other attributes. To find the convex hull
using a traditional algorithm would require making a new data set in the required form,
finding the convex hull, and then making a version of the hull in terms of the original data.
Specialization of generics is more efficient. For example, consider finding the convex hull
of a set of cities, each of which has a latitude and longitude. Fig. 19 shows the city data
structure and a hand-written view as a point using a Mercator projection. VIEWAS was
used to make a convex hull view of the city-as-point data. Using this view, a specialized
version of the convex hull algorithm (229 lines of Lisp) was produced (in about 5 seconds)
that operates on the original data.
This example illustrates the benefits of our approach to reuse. The generic procedures
themselves are relatively small and easy to understand because they reuse other generics.
Reuse of the generic procedure for an application has a high payoff: the generated code is
much larger and more complex than the few lines that are entered to create the views.
6.7 Testing and Verification
Users must have confidence that reused programs will behave as intended. Programmer's
Apprentice [61] produced Ada code; the user would read this code and modify it as necessary.
We do not believe a programmer should read the output of any reuse system. With our
system, in-line code expansion and symbolic optimization can make the output code difficult
to read and to relate to the original code sources. Reading someone else's code is difficult,
and no less so if the "someone else" is a machine.
(gldefun convex-hull (orig-points:(listof vector))
(let (xmin-pt xmax-pt hullp1 hullp2)
(if ((length-up-to orig-points
then (xmin-pt := (for p in orig-points min (x p)))
with
with
((next hullp1) := hullp2) ; link circularly
((next hullp2) := hullp1)
(split hullp1)
(split hullp2)
(for p in (circular-linked-list hullp1)
collect (pt p)) )
Figure 17: Generic Convex Hull Procedure
We believe that a reuse system such as ours will reduce errors. Errors in reusing software
components might arise from several sources:
1. The component itself might be in error.
2. The component might be used improperly.
3. The specialization of a component might not be correct.
Algorithms that are reused justify careful development and are tested in many applications,
so unnoticed errors are unlikely. Humans introduce errors in coding algorithms; Sedgewick
notes "Quicksort ... is fragile: a simple mistake in the implementation can go unnoticed
and can cause it to perform badly for some files." Reuse of carefully developed generics is
likely to produce better programs than hand coding.
VIEWAS and MKV guide the user by presenting only correct choices. When views are
written by hand, type checking usually catches errors. Although GLISP is not strongly
typed (because of its Lisp ancestry), there are many error checks that catch nearly all type
errors. Our experience with our system has been good, and we have reused generics for new
applications; e.g., the generic for distance from a line to a point was reused to test whether
a mouse position is close to a line.
Ultimately, it must be verified not only that software meets its specification but also
that it is what the user really wants. With rapid prototyping based on reuse, developers
(gldefun cvh-split (cp:cvhpoint)
(let (maxpt pts newcp)
(pts := (points cp))
((points cp) := nil)
(if pts is not null
then (maxpt := (for p in pts when ((p !? (p1 (line cp)))
                                   and (p !? (p2 (line cp))))
                 max (leftof-distance (line cp) p)))
(if maxpt and (leftof (line cp) maxpt)
then (newcp := (a (typeof cp) with
(splice-in (circular-linked-list cp)
(circular-linked-list newcp))
(for p in pts do
(if (leftof (line cp) p)
then ((points cp) +- p)
else (if (leftof (line newcp) p)
then ((points newcp) +- p))) )
(split cp)
Figure 18: Generic cvh-split Procedure
can address a program's performance in practice and make modifications easily. Our system
allows significant representation changes to be accomplished easily by recompilation.
Formal verification might be applied to specialized generics. Gries and Prins [21] suggest
a stratified proof of a program obtained by transformation: if a generic is correct and an
implementation of its abstract type is correct, the transformed algorithm will be correct.
Morgan [42] extends these techniques for proofs of data refinements. Morris [43] provides
calculational laws for refinement of programs written in terms of abstract types such as bags
and sets. Related methods might be used for proofs of refinements with a system such as
ours; a library of proven lemmas about generic components would greatly simplify the task
of proving a software system correct.
7 Views and OOP
Views can be used to generate methods that allow concrete data to be used with OOP
software; this is useful for reuse of OOP software that uses runtime messages to interface
to diverse kinds of objects. The GLISP compiler can automatically compile and cache
specialized versions of methods based on the definitions given in a type; for example, a
method to compute the area of a pizza-as-circle can be generated automatically.
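The idea can be sketched in Python (the `Pizza`/`CircleView` names are invented; GLISP additionally compiles and caches the specialized method, which this dynamic sketch does not show):

```python
import math

class Pizza:
    """Concrete data: knows only its diameter."""
    def __init__(self, diameter):
        self.diameter = diameter

class CircleView:
    """Views any object with a `diameter` field as a circle, so the
    generic circle area computation applies to it."""
    def __init__(self, obj):
        self.obj = obj
    @property
    def radius(self):
        return self.obj.diameter / 2
    def area(self):
        return math.pi * self.radius ** 2
```

`CircleView(Pizza(16)).area()` computes the area without the pizza record ever storing a radius.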
(city
(list (name symbol)
(latitude (units real degrees))
(population integer)
(longitude (units real degrees)))
views ((point city-as-point)) )
(city-as-point (z17 city)
  prop ((x ((let (rad:(units real radians))
              (rad := (longitude z17))
              rad)))
        (y ((signum (latitude z17)) *
            (log (tan ((pi / 4) +
                       (abs (latitude z17)) / 2))))))
  supers (vector))
Figure 19: City and Mercator Projection
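The Mercator view amounts to the following computation (a Python sketch of the standard projection formula, written in the signum/abs style of the city-as-point view; the function name is invented):

```python
import math

def mercator(lat_deg, lon_deg):
    # x: longitude in radians; y: Mercator latitude stretch,
    # with the sign of the latitude restored via copysign
    x = math.radians(lon_deg)
    phi = math.radians(abs(lat_deg))
    y = math.copysign(math.log(math.tan(math.pi / 4 + phi / 2)), lat_deg)
    return (x, y)
```

The y value is antisymmetric in latitude and zero at the equator, so northern and southern cities project consistently.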
We have implemented direct-manipulation graphical editors for linked-list and array.
A display method and editor for a record can be made interactively using program DISPM,
which allows selection of properties to be displayed, display methods to be used, and
positions. Given a display method for a record, a generic for displaying and editing structured
data containing such records can be used on the concrete data. Figure 20 shows data
displayed by the generic linked-list editor. The user can move forward or backward in
the displayed list or excise a selected list element; the user can also zoom in on an element
to display it in more detail or to edit the contents. This technique allows a single generic
editor to handle all kinds of linked-list. The display omits detail such as contents of link
fields and shows the data in an easily understood form.
Figure 20: Linked List Display
8 Related Work
We review closely related work. It is always possible to say "an equivalent program could
be written in language x"; however, a system for software reuse must satisfy several criteria
simultaneously to be effective [34]. We claim that the system described here satisfies all of
these criteria:
1. It has wide applicability: many kinds of software can be expressed as reusable generics.
2. It is easy to use. The amount of user input and the learning required are small.
3. It produces efficient code in several languages.
4. It minimally constrains the representation of data. Generics can be specialized for use
with existing data and programs.
Brooks [10] contends that there are "no silver bullets" in software development. The
system described here is not a silver bullet, but it suggests that significant improvement in
software development is possible.
8.1 Software Reuse
Krueger [34] is an excellent survey of software reuse, with criteria for practical effectiveness.
Biggerstaff and Perlis [8] contains papers on theory and applications of reuse; artificial
intelligence approaches are described in [1], [39], and [60]. Mili [41] extensively surveys
software reuse, emphasizing technical challenges.
8.2 Software Components
The Programmer's Apprentice [61] was based on reuse of clich'es, somewhat analogous to
our generics. This project produced some good ideas but had limited success. KBEmacs,
a knowledge-based editor integrated with Emacs, helped the user transform clich'e code;
unfortunately, KBEmacs was rather slow, and the user had to read and understand the low-level
output code. We assume that the user will treat the outputs of our system as "black
boxes" and will not need to read or modify the code. Rich [59] describes a plan calculus
for representing program and data abstractions; overlays relate program and data plans,
analogous to our views.
Weide [73] proposed a software components industry based on formally specified and
unchangeable components. Because the components would be verified and unchangeable,
errors would be prevented; however, the rigidity of the components might make them harder
to reuse. Our approach adapts components to fit the application.
Zaremski and Wing [77] describe retrieval of reusable ML components based on signature
matching of functions and modules; related techniques could be used with our generics.
Batory [4] [5] [6] describes a data structure precompiler and construction of software
systems from layers of plug-compatible components with standardized interfaces. The use of
layers whose interfaces are carefully specified allows the developer to ensure that the layers
will interface correctly. We have focused on adapting interfaces so that generics can be reused
for independently designed data.
8.3 Languages with Generic Procedures
Ada, Modula-2 [28], and C++ [69] allow modules for parameterized abstract types such as
STACK[type]. Books of generic procedures [37] [44] contain some of the same procedures
that are provided with our system. In Ada and Modula-2, such collections have limited value
because such code is easy to write and is only a small part of most applications. The class,
template, and virtual function features of C++ allow reuse of generics; however, Stroustrup's
examples [69] show that the declarations required are complex and subtle. Our declarations
are also complex, but VIEWAS hides this complexity and guides the user in creating correct
views. The ideas in VIEWAS might be adapted for other languages.
8.4 Functional and Set Languages
ML [74] [55] is like a strongly typed Lisp; it includes polymorphic functions (e.g., functions
that operate on lists of an arbitrary type) and functors (functions that map structures
of types and functions to structures). ML also includes references (pointers) that allow
imperative programming. ML functors can instantiate generic modules such as container
types. Our system allows storing into a data structure through a view and composition of
views [52].
Miranda [71] is a strongly-typed functional language with higher-order functions. While
this allows generics, it is often hard to write functional programs with good performance.
SETL provides sets and set operations. [63] describes an attempt to automatically
choose data structures in SETL to improve efficiency. Kruchten et al. [35] say "slow is
beautiful" to emphasize ease of constructing programs, but inefficient implementations can
make even small problems intractable.
8.5 Transformation Systems
Transformation systems repeatedly replace parts of an abstract algorithm specification with
code that is closer to an implementation, until executable code is reached. Our views specify
transformations from features of abstract types to their implementations.
Kant et al. [33] describe Sinapse, which generates programs to simulate spatial differential
equations, e.g. for seismic analysis. Sinapse transforms a small specification into a much
larger program in Fortran or C; it is written using Mathematica [75] and appears to work
well within its domain.
Setliff's Elf system [65] automatically generates data structures and algorithms for wire
routing on integrated circuits and printed circuit boards. Rules are used to select refinement
transformations based on characteristics of the routing task.
KIDS [68] transforms problem statements in first-order logic into programs that are highly
efficient for certain combinatorial problems. The user must select transformations to be used
and must supply a formal domain theory for the application. This system is interesting and
powerful, but its user must be mathematically sophisticated.
Gries and Prins [21] proposed use of syntactic transformations to specify implementation
of abstract algorithms. Volpano [72] and Gries [23] describe systems in which a user specifies
transformations for variables, expression patterns, and statement patterns; by performing
substitutions on an algorithm, a different version of the algorithm is obtained. This method
allows the user to specify detailed transformations for a particular specialization of an
algorithm, whereas we rely on type-based transformations and on general optimization
patterns. The ability to specify individual transformations in Gries' system gives more
flexibility, possibly at the cost of writing more patterns.
The Intentional Programming project at Microsoft [67] is based on intentions, which
are similar to algorithm fragments expressed as abstract syntax trees. Intentions can be
transformed by enzymes at the abstract syntax tree level and can be parsed and unparsed
into various surface syntaxes by methods stored with or inherited by the intentions. This
work is in progress; its results to date are impressive.
Berlin and Weise [7] used partial evaluation to improve efficiency of scientific programs.
Given that certain features of a problem are constant, their compiler performs as many
constant calculations as possible at compile time, yielding a specialized program that runs
faster. Our system includes partial evaluation by in-lining and symbolic optimization. Consel
and Danvy [12] survey work on partial evaluation.
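A tiny Python illustration of the constant-folding idea behind partial evaluation (the polynomial example and `specialize_poly` name are invented): given constant coefficients, a specialized function is generated once rather than interpreting the coefficients on every call.

```python
def specialize_poly(coeffs):
    # partial evaluation: fold the constant coefficients into generated
    # source code (Horner form), then compile it once with exec
    expr = repr(coeffs[0])
    for c in coeffs[1:]:
        expr = f"({expr}) * x + {c!r}"
    ns = {}
    exec(f"def poly(x):\n    return {expr}\n", ns)
    return ns["poly"]
```

For coefficients [2, 0, 1] this produces a function computing ((2)*x + 0)*x + 1, with no coefficient lookups at runtime.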
8.6 Views
Goguen [18] proposes a library interconnection language, LIL. This proposal has much in
common with our approach, and Goguen uses the term view similarly; LIL has a stronger
focus on mathematical descriptions and axioms. The OBJ language family of Goguen et al.
[19] has views that are formal type mappings; operators are mapped by strict isomorphisms.
Tracz [70] describes LILEANNA, which implements LIL for construction of Ada packages;
views in LILEANNA map types, operations, and exceptions between theories. In our
system, views are computational transformations between types; general procedures as well
as operators can be reused.
Garlan [17] and Kaiser [31] use views to allow tools in a program development environment
to access a common set of data. Their MELD system can combine features (collections
of classes and methods) to allow "additive" construction of a system from selected features.
Meyers [40] discusses problems of consistency among program development tools and
surveys approaches including use of files, databases, and views as developed by Garlan.
Hailpern and Ossher [25] describe views as subsets of methods of a class, to restrict certain
methods to particular clients. Harrison and Ossher [26] argue that OOP is too restrictive for
applications that need their own views of objects; they propose subjects that are analogous
to class hierarchies.
8.7 Data Translation
IDL (Interface Description Language) [36] translates representations, possibly with structure
sharing, for exchange of data between parts of a compiler, based on precise data
specifications. Herlihy and Liskov [27] describe transmission of data over a network, with
representation translation and shared structure; the user writes procedures to encode and
decode data for transmission. The Common Object Request Broker Architecture (CORBA)
[13] includes an Interface Definition Language and can automatically generate stubs to allow
interoperability of objects across distributed systems and across languages and machine
architectures. The ARPA Knowledge-Sharing Project [45] addresses the problem of sharing
knowledge bases that were developed using different ontologies. Purtilo and Atlee [58]
describe a system that translates calling sequences by producing small interface modules that
reorder and translate parameters as necessary for the called procedure. Re-representation of
data allows reuse of an existing procedure; it requires space and execution time, although
[36] found this was a small cost in a compiler. [49] and this paper describe methods for data
translation, but these do not handle shared structure.
Guttag and Horning [24] describe a formal language for specifying procedure interface signatures
and properties. Yellin and Strom [76] describe semi-automatic synthesis of protocol
converters to allow interfacing of clients and servers.
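The parameter-reordering-and-translation idea can be sketched as a small Python adapter factory (the `make_adapter` and `charge` names and the dollars-to-cents conversion are invented for illustration):

```python
def make_adapter(f, order, converters=None):
    """Wrap f so a caller using a different argument order (and possibly
    different representations) can invoke it unchanged."""
    converters = converters or {}
    def adapted(*args):
        picked = [args[i] for i in order]            # reorder parameters
        fixed = [converters.get(j, lambda v: v)(v)   # translate each one
                 for j, v in enumerate(picked)]
        return f(*fixed)
    return adapted

def charge(amount_cents, account):
    return f"{account}:{amount_cents}"

# caller passes (account, amount_in_dollars); the adapter reorders and converts
adapted = make_adapter(charge, order=[1, 0],
                       converters={0: lambda dollars: int(dollars * 100)})
```

The adapter plays the role of the small generated interface module: the existing procedure is reused without changing either side.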
8.8 Object-oriented Programming
We have described how views can be used to generate methods for OOP. In OOP, messages
are interpreted (converted to procedure calls) depending on the type of the receiving object;
methods can be inherited from superclasses. The close connection between a class and its
methods requires the user to understand a great deal about a class and its methods.
In many OOP systems, a class must include all data of its superclasses, so reuse with
OOP restricts implementation of data; names of data and messages must be consistent and
must not conflict. Holland [29] uses contracts to specify data and behavioral obligations of
composed objects. Contracts are somewhat like our clusters, but require that specializations
include certain instance data and implement data in the same way as the generics that are
to be specialized. A separate "contract lens" construct is used to disambiguate names where
there are conflicts. Our views provide encapsulation that prevents name conflicts; views
allow the reuse benefits of OOP with flexibility in implementing data.
Some OOP systems are inefficient: since most methods are small, message interpretation
overhead can be large, especially in layered systems. C++ [69] has restricted OOP
with efficient method dispatching. Opacity of objects prevents optimization across message
boundaries unless messages are compiled in-line; C++ allows in-line compilation. Reuse in
OOP may require creating a new object to reuse a method of its class; views allow an object
to be thought of as another type without having to materialize that type.
9 Conclusions
Our approach is based on reuse of programming knowledge: generic procedures, abstract
types, and view descriptions. We envision a library of abstract types and generics, developed
by experts, that could be adapted quickly for applications. Programmers of ordinary skill
should be able to reuse the generics. VIEWAS facilitates making views; easily used interfaces,
as opposed to verbose textual specifications with precise syntax, are essential for successful
reuse. Systems like VIEWAS might reduce the complexity of the specifications required in
other languages. Views also support data translation and runtime message interpretation: a
single direct-manipulation editor can handle all implementations of an abstract type.
These techniques provide high payoff in generated code relative to the size and complexity
of input specifications. They require only modest understanding of the details of library
procedures for successful reuse.
Our techniques allow restructuring of data to meet new requirements or to improve
efficiency. Traditional languages reflect the data implementation in the code [3], making
changes costly. Our system derives code from the data definitions; design decisions are
stated in a single place and distributed by compilation rather than by hand coding.
The ability to produce code in different languages decouples the choice of programming
tools from the choice of application language. It allows new tools to extend old systems or
to write parts of a system without committing to use of the tool for everything. Just as
computation has become a commodity, so that the user no longer cares what kind of CPU
chip is inside the box, we may look forward to a time when today's high-level languages
become implementation details.
Acknowledgements
Computer equipment used in this research was furnished by Hewlett Packard and IBM.
We thank David Gries, Hamilton Richards, Ben Kuipers, and anonymous reviewers for
their suggestions for improving this paper.
References
IEEE Trans.
Functors: The Categorical Imperative
"A 15 Year Perspective on Automatic Programming,"
"The Design and Implementation of Hierarchical Software Systems with Reusable Components,"
"Scalable Software Libraries,"
"Reengineering a Complex Application Using a Scalable Data Structure Compiler,"
"Compiling Scientific Code Using Partial Eval- uation,"
Software Reusability (2 vols.
Software Components with Ada
"No Silver Bullet: Essence and Accidents of Software Engineering,"
An Introduction to Data Types
"Tutorial Notes on Partial Evaluation,"
"The Common Object Request Broker: Architecture and Specification,"
The Programming Language
A Discipline of Programming
Design Patterns: Elements of Reusable Object-Oriented Software
"Views for Tools in Integrated Environments,"
"Reusing and Interconnecting Software Components,"
"Principles of Parameterized Programming,"
The Denotational Description of Programming Languages
"A New Notion of Encapsulation,"
"The Transform - a New Language Construct,"
"Introduction to LCL, a LARCH/C Interface Language,"
"Extending Objects to Support Multiple Interfaces and Access Control,"
"Subject-Oriented Programming Critique of Pure Objects),"
"A Value Transmission Method for Abstract Data Types,"
Data Abstraction and Program Development using Modula-2
"Specifying Reusable Components Using Contracts,"
Xerox Palo Alto Research Center
"Synthesizing Programming Environments from Reusable Features,"
"Understanding and Automating Algorithm Design,"
"Scientific Programming by Automated Synthesis,"
"Software Reuse,"
"Software Prototyping using the SETL Language,"
"IDL: Sharing Intermediate Representations,"
The Modula-2 Software Component Library
Automating Software Design
"Difficulties in Integrating Multiview Development Systems,"
"Reusing Software: Issues and Research Directions,"
"Data Refinement by Miracles,"
"Laws of Data Refinement,"
The Ada Generic Library
"Enabling Technology for Knowledge Sharing,"
"GLISP: A LISP-Based Programming System With Data Abstraction,"
"Data Abstraction in GLISP,"
"Negotiated Interfaces for Software Reuse,"
"Software Reuse through View Type Clusters,"
"Generating Programs from Connections of Physical Models,"
"Composing Reusable Software Components through Views,"
"Conversion of Units of Measurement,"
"Creation of Views for Reuse of Software with Different Data Representations,"
ML for the Working Programmer
Introduction to Discrete Structures
Computational Geometry
"Module Reuse by Interface Adaptation,"
"A Formal Representation for Plans in the Programmer's Apprentice,"
Readings in Artificial Intelligence and Software Engineering
The Programmer's Apprentice
A Mathematical Theory of Global Program Optimization
"An Automatic Technique for Selection of Data Representations in SETL Programs,"
"On the Automatic Selection of Data Structures and Algorithms,"
"Abstraction and Verification in Alphard: Defining and Specifying Iterators and Generators,"
"Intentional Programming - Innovation in the Legacy Age,"
"KIDS: A Semiautomatic Program Development System,"
"LILEANNA: A parameterized programming language,"
"An Overview of Miranda,"
"The Templates Approach to Software Reuse,"
"Reusable Software Components,"
Mathematica: a System for Doing Mathematics by Computer
"Interfaces, Protocols, and the Semi-Automatic Construction of Software Adaptors,"
"Signature Matching: A Key to Reuse,"
264211 | Trading conflict and capacity aliasing in conditional branch predictors. | As modern microprocessors employ deeper pipelines and issue multiple instructions per cycle, they are becoming increasingly dependent on accurate branch prediction. Because hardware resources for branch-predictor tables are invariably limited, it is not possible to hold all relevant branch history for all active branches at the same time, especially for large workloads consisting of multiple processes and operating-system code. The problem that results, commonly referred to as aliasing in the branch-predictor tables, is in many ways similar to the misses that occur in finite-sized hardware caches.In this paper we propose a new classification for branch aliasing based on the three-Cs model for caches, and show that conflict aliasing is a significant source of mispredictions. Unfortunately, the obvious method for removing conflicts --- adding tags and associativity to the predictor tables --- is not a cost-effective solution.To address this problem, we propose the skewed branch predictor, a multi-bank, tag-less branch predictor, designed specifically to reduce the impact of conflict aliasing. Through both analytical and simulation models, we show that the skewed branch predictor removes a substantial portion of conflict aliasing by introducing redundancy to the branch-predictor tables. Although this redundancy increases capacity aliasing compared to a standard one-bank structure of comparable size, our simulations show that the reduction in conflict aliasing overcomes this effect to yield a gain in prediction accuracy. Alternatively, we show that a skewed organization can achieve the same prediction accuracy as a standard one-bank organization but with half the storage requirements. | to the branch-predictor tables. Although this redundancy
increases capacity aliasing compared to a standard one-bank structure
of comparable size, our simulations show that the reduction in
conflict aliasing overcomes this effect to yield a gain in prediction
accuracy. Alternatively, we show that a skewed organization can
achieve the same prediction accuracy as a standard one-bank organization
but with half the storage requirements.
Keywords: branch prediction, aliasing, 3 C's classification, skewed branch predictor.
Now with Intel Microcomputer Research Lab, Oregon
© 1997 by the Association for Computing Machinery, Inc.
1 Introduction and Related Work
In processors that speculatively fetch and issue multiple instructions
per cycle to deep pipelines, dozens of instructions might be in
flight before a branch is resolved. Under these conditions, a mispredicted
branch can result in substantial amounts of wasted work
and become a bottleneck to exploiting instruction-level parallelism.
Accurate branch prediction has come to play an important role in
removing this bottleneck.
Many dynamic branch prediction schemes have been investigated
in the past few years, with each offering certain distinctive
features. Most of them, however, share a common characteristic:
they rely on a collection of 1- or 2-bit counters held in a predictor
table. Each entry in the table records the recent outcomes of a given
branch substream [21], and is used to predict the direction of future
branches in that substream. A branch substream might be defined
by some bits of the branch address, by a bit pattern representing
previous branch directions (known as a branch history), by some
combination of branch address and branch history, or by bits from
target addresses of previous branches [14, 7, 18, 10, 8, 9].
Ideally, we would like to have a predictor table with infinite capacity
so that every unique branch substream defined by an (ad-
dress, history) pair will have a dedicated predictor. Chen et al.
have shown that two-level predictors are close to being optimal,
provided unlimited resources for implementing the predictors [3].
Real-world constraints, of course, do not permit this. Chip die-area
budgets and access-time constraints limit predictor-table size, and
most tables proposed in the literature are further constrained in that
they are direct-mapped and without tags.
Fixed-sized predictor tables lead to a phenomenon known as
aliasing or interference [21, 16], in which multiple (address, his-
tory) pairs share the same entry in the predictor table, causing
the predictions for two or more branch substreams to intermingle.
Aliasing has been classified as either destructive (i.e., a misprediction
occurs due to sharing of a predictor-table entry), harmless (i.e.,
it has no effect on the prediction) or constructive (i.e., aliasing occasionally
provides a good prediction, which would have been wrong
otherwise) [21]. Young et al. have shown that constructive aliasing
is much less likely than destructive aliasing [21].
Recent studies have shown that large or multi-process workloads
with a strong OS component exhibit very high degrees of aliasing
[11, 5], and require much larger predictor tables than previously
thought necessary to achieve a level of accuracy close to an ideal,
unaliased predictor table [11]. We therefore expect that new techniques
for removing conflict aliasing could provide important gains
towards increased branch-prediction accuracy.
Branch aliasing in fixed-size, direct-mapped predictor tables is
in many ways analogous to instruction-cache or data-cache misses.
This suggests an alternative classification for branch aliasing based
on the three-Cs model of cache performance first proposed by Hill
[6]. As with cache misses, aliasing can be classified as compul-
sory, capacity or conflict aliasing. Similarly, as with caches, larger
predictor tables reduce capacity aliasing, while associativity in a
predictor table could remove conflict aliasing.
Unfortunately, a simple-minded adaptation of cache associativity
would require the addition of costly tags, substantially increasing
the cost of a predictor table. In this paper we examine an alternative
approach, called skewed branch prediction, which borrows
ideas from skewed-associative caches [12]. A skewed branch
predictor is constructed from an odd number (typically 3 or 5) of
predictor-table banks, each of which functions like a standard tagless
predictor table. When performing a prediction, each bank is
accessed in parallel but with a different indexing function, and a
majority vote between the resulting lookups is used to predict the
direction of the branch.
In the next section we explain in greater detail our aliasing clas-
sification. In section 3, we quantify aliasing and assess the effect of
conflict aliasing on overall branch-prediction accuracy. In section 4,
we introduce the skewed branch predictor, a hardware structure designed
specifically to reduce conflict aliasing. In section 5, we show
how and why the skewed branch predictor removes conflict aliasing
effects at the cost of some redundancy. Our analysis includes both
simulation and analytical models of performance, and considers a
range of possible skewed predictor configurations driven by traces
from the instruction-benchmark suite (IBS) [17], which includes
complete user and operating-system activity. Section 6 proposes
the enhanced skewed branch predictor, a slight modification to the
skewed branch predictor, which enables more attractive tradeoffs
between capacity and conflict aliasing. Section 7 concludes this
study and proposes some future research directions.
2 An Aliasing Classification
Throughout this paper, we will focus on global-history prediction
schemes for the sake of conciseness. Global-history schemes
use both the branch address and a pattern of global history bits,
as described in [18, 19, 20, 10, 8]. Previously-proposed global-
history predictors are all direct-mapped and tag-less. Given a history
length, the distinguishing feature of these predictors is the
hashing function that is used to map the set of all (address, history)
pairs onto the predictor table.
The gshare and gselect schemes [8] have been the most studied
global schemes (gselect corresponds to GAs in Yeh and Patt's terminology
[18, 19, 20]). In gshare, the low-order address bits and
global history bits are XORed together to form an index value 1 ,
whereas in gselect, low-order address bits and global history bits
are concatenated.
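The two indexing functions can be sketched as follows (a minimal sketch; the function and parameter names are illustrative, not from the paper, and gshare assumes the history is no wider than the index, as in [8]):

```python
def gshare_index(addr, hist, n_index_bits, n_hist_bits):
    # XOR the history bits into the high-order end of the
    # low-order address bits (assumes n_hist_bits <= n_index_bits)
    a = addr & ((1 << n_index_bits) - 1)
    h = hist & ((1 << n_hist_bits) - 1)
    return a ^ (h << (n_index_bits - n_hist_bits))

def gselect_index(addr, hist, n_addr_bits, n_hist_bits):
    # concatenate low-order address bits with the global history bits
    a = addr & ((1 << n_addr_bits) - 1)
    h = hist & ((1 << n_hist_bits) - 1)
    return (a << n_hist_bits) | h
```

When the history is as wide as the index, gshare degenerates to a plain XOR of address and history bits.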
Aliasing occurs in direct-mapped tag-less predictors when two
or more (address, history) pairs map to the same entry. To measure
aliasing for a particular global scheme and table, we simulate
a structure having the same number of entries and using the same
indexing function as the predictor table considered. However, instead
of storing 1-bit or 2-bit predictors in the structure, we store
the identity of the last (address, history) pair that accessed the en-
try. Aliasing occurs when the indexing (address, history) pair is
different from the stored pair. The aliasing ratio is the ratio between
the number of aliasing occurrences and the number of dynamic
conditional branches. When measured in this way, we can
see the relationship between branch aliasing and cache misses. Our
simulated tagged table is like a cache with a line size of one datum,
and an aliasing occurrence corresponds to a cache miss.
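The measurement just described can be sketched as follows (a hypothetical helper, assuming the dynamic branch stream is given as (address, history) pairs and the indexing function is passed in):

```python
def aliasing_ratio(stream, n_entries, index_fn):
    """Simulate a tagged structure that stores, per entry, the identity of
    the last (address, history) pair that accessed it, and count how often
    the indexing pair differs from the stored pair."""
    table = {}          # entry index -> last (address, history) pair
    aliased = 0
    total = 0
    for pair in stream:
        idx = index_fn(*pair) % n_entries
        if idx in table and table[idx] != pair:
            aliased += 1    # aliasing occurrence (like a cache miss)
        table[idx] = pair
        total += 1
    return aliased / total
```

For example, two pairs that hash to the same entry in a 4-entry table produce an aliasing occurrence on every alternation between them.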
1 When the number of history bits is less than the number of index bits, the history bits are XORed with the higher-order end of the section of low-order address bits, as explained in [8].
benchmark    dynamic branches    static branches
groff        11568181            5634
gs           14288742            10935
mpeg play    8109029             4752
real gcc     13940672            16716
verilog      5692823             3918

Table 1: Conditional branch counts
A widely-accepted classification of cache misses is the three-Cs
model, first introduced by Hill [6] and later refined by Sugumar and
Abraham [15]. The three-Cs model divides cache misses into three
groups, depending on their causes.
- Compulsory misses occur when an address is referenced for the first time. These unavoidable misses are required to fill an empty or "cold" cache.
- Capacity misses occur when the cache is not large enough to retain all the addresses that will be re-referenced in the future. Capacity misses can be reduced by increasing the total size of the cache.
- Conflict misses occur when two memory locations contend for the same cache line in a given window of time. Conflict misses can be reduced by increasing the associativity of a cache, or improving the replacement algorithm.
Aliasing in branch-predictor tables can be classified in a similar
fashion:
- Compulsory aliasing occurs when a branch substream is encountered for the first time.
- Capacity aliasing, like capacity cache misses, is due to a program's working set being too large to fit in a predictor table, and can be reduced by increasing the size of the predictor table.
- Conflict aliasing occurs when two concurrently-active branch substreams map to the same predictor-table entry. Methods for reducing this component of aliasing have not yet, to our knowledge, appeared in the published literature.
3 Quantifying Aliasing
3.1 Experimental Setup
We conducted all of our trace-driven simulations using the IBS-
Ultrix benchmarks [17]. These benchmarks were traced using a
hardware monitor connected to a MIPS-based DECstation running
Ultrix 3.1. The resulting traces include activity from all user-level
processes as well as the operating-system kernel, and have been determined
by other researchers to be a good test of branch-prediction
performance [5, 11]. Conditional branch counts 2 derived from
these traces are given in Table 1.
Although we simulated the sdet and video play benchmarks,
they exhibited no special behavior compared with the other bench-
marks. We therefore omit sdet and video play results from this paper
in the interest of saving space.
2 beq r0,r0 is used as an unconditional relative jump by the MIPS compiler, therefore we did not consider it as conditional. This explains the discrepancy with the branch counts reported in [5, 11].
4-bit history

benchmark    substream ratio    compulsory aliasing    misprediction (1-bit)    misprediction (2-bit)
gs           1.91               0.15 %                 7.03 %                   5.28 %
mpeg play    1.83               0.11 %                 9.08 %                   7.24 %
real gcc     2.36               0.28 %                 9.38 %                   7.16 %
verilog      1.96               0.13 %                 6.48 %                   4.57 %

12-bit history

benchmark    substream ratio    compulsory aliasing    misprediction (1-bit)    misprediction (2-bit)
groff        7.14               0.35 %                 3.63 %                   2.56 %
gs           7.95               0.61 %                 3.71 %                   2.77 %
mpeg play    6.27               0.37 %                 5.85 %                   4.52 %
real gcc     12.90              1.55 %                 4.90 %                   3.93 %
verilog      9.24               0.64 %                 3.74 %                   2.66 %

Table 2: Unaliased predictor
We first simulated an ideal unaliased scheme (i.e., a predictor
table of infinite size). The misprediction ratios that we obtained
are shown in Table 2 for history lengths of 4 and 12 bits, and for
both 1-bit and 2-bit predictors (we include unconditional branches
as part of the global-history bits). When an (address, history) pair is
encountered for the first time, we do not count it as a misprediction,
so compulsory miss contribution to mispredictions is not reported
in the last two columns of Table 2.
The 2-bit saturating counter gives better prediction accuracy in
an unaliased predictor table than the 1-bit predictor. Our intuition
is that this difference is due mainly to loop branches. We also measured
the substream ratio, which we define as the average number
of different history values encountered for a given conditional
branch address (see first column of Table 2).
The compulsory-aliasing percentage was computed from the
number of different (address, history) pairs referenced through-out
the trace divided by the total number of dynamic conditional
branches. From Table 2, we observe that compulsory aliasing, with
a 12-bit history length, generally constitutes less than 1% of the
total of all dynamic conditional branches, except in the case of
real gcc, which exhibits a compulsory-aliasing rate of 1.55%.
3.2 Quantifying Conflict and Capacity Aliasing
To quantify conflict and capacity aliasing, we simulated tagged
predictor tables holding (address, history) pairs. Figures 1 and 2
show the miss ratio in direct-mapped (DM) and fully-associative
tables using 4 bits and 12 bits of global history, respec-
tively. The two direct-mapped tables are indexed with a gshare-
and a gselect-like function. The fully-associative table uses a least-
recently-used (LRU) replacement policy.
The miss ratio for the fully-associative table gives the sum of
compulsory and capacity aliasing. The difference between gshare
or gselect and the fully-associative table gives the amount of conflict
aliasing in the corresponding gshare and gselect predictors. It
should be noted that LRU is not an optimal replacement policy [15].
However, because it bases its decisions solely on past information,
the LRU policy gives a reasonable base value of the amount of conflict
aliasing that can be removed by a hardware-only scheme.
It appears that for our benchmarks, gselect has a higher aliasing
rate than gshare. This explains why, for a given table size and history
length, gshare has a lower misprediction rate than gselect, as
claimed in [8]. This difference is very pronounced with 12 bits of
global history, because in this case, gselect uses only a very small
number of address bits (e.g., only 4 address bits for a 64K-entry
table).
Figure 1 shows that when the number of entries is larger than or equal to 4K, capacity aliasing nearly vanishes, leaving conflicts as the overwhelming cause of aliasing. The same condition holds in Figure 2 for table sizes greater than about 16K. This leads us to conclude that some amount of associativity in branch prediction tables is needed to limit the impact of aliasing.
3.3 Problems with Associative Predictor Tables
Associativity in caches introduces a degree of freedom for
avoiding conflicts. In a direct-mapped cache, tag bits are used to determine
whether a reference hits or misses. In an associative cache,
the tag bits also determine the precise location of the requested data
in the cache.
Because of its speculative nature, a direct-mapped branch prediction
table can be tag-less. To implement associativity, however,
we must introduce tags identifying (address, history) pairs. Un-
fortunately, the tag width is disproportionately large compared to
the width of the individual predictors, which are usually 1 or 2 bits
wide.
Another method for achieving the benefits of associativity, without
having to pay the cost of tags is needed. The skewed branch
predictor, described in the next section, is one such method.
4 The Skewed Branch Predictor
We have previously noted that the behaviors of gselect and
gshare are different even though these two schemes are based on the
same (address, history) information. This is illustrated on Figure 3
where we represent a gshare and a gselect table with 16 entries. In
this example, there is a conflict both with gshare and gselect, but
the (address, history) pairs that conflict are not the same. We can
conclude that the precise occurrence of conflicts is strongly related
to the mapping function. The skewed branch predictor is based on
this observation.
The basic principle of the skewed branch predictor is to use several
branch-predictor banks (3 banks in the example illustrated in Figure 4), but to index them by different and independent hashing
functions computed from the same vector V of information (e.g.,
branch address and global history). A prediction is read from each
of the banks and a majority vote is used to select a final branch
direction.
The rationale for using different hashing functions for each bank
is that two vectors, V and W, that are aliased with each other in one
bank are unlikely to be aliased in the other banks. A destructive
aliasing of V by W may occur in one bank, but the overall prediction
on V is likely to be correct if V does not suffer from destructive
aliasing in the other banks.
4.1 Execution Model
We consider two policies for updating the predictors across multiple
banks:
- A total update policy: each of the three banks is updated as if it were a sole bank in a traditional prediction scheme.
- A partial update policy: when a bank gives a bad prediction, it is not updated when the overall prediction is good. This
Figure 1: Miss percentages in tables tagged with (address, history) pairs (4-bit history) [per-benchmark plots (groff, gs, mpeg play, nroff, real gcc, verilog) of miss ratio vs. number of entries for DM gselect, DM gshare and FA LRU]
Figure 2: Miss percentages in tables tagged with (address, history) pairs (12-bit history) [same layout as Figure 1]
Figure 3: Conflicts depend on the mapping function [gshare and gselect tables built from the same address and history bits]
Figure 4: A Skewed Branch Predictor [three predictor banks indexed from address and history by different hashing functions, combined by majority vote]
wrong predictor is considered to be attached to another (address, history) pair. When the overall prediction is wrong, all
banks are updated as dictated by the outcome of the branch.
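The structure and both behaviors described above can be sketched as follows (a minimal sketch of a 3-bank skewed predictor with 2-bit counters and the partial update policy; the hash functions are caller-supplied placeholders, not the specific family used in the paper):

```python
class SkewedPredictor:
    """3-bank tag-less skewed branch predictor: one 2-bit saturating
    counter per entry per bank, majority vote, partial update policy."""

    def __init__(self, bank_entries, hash_fns):
        self.n = bank_entries
        self.fns = list(hash_fns)                      # one indexing function per bank
        # 2-bit counters, initialized weakly not-taken
        self.banks = [[1] * bank_entries for _ in self.fns]

    def predict(self, v):
        # read one counter per bank and take a majority vote on the taken bit
        votes = [self.banks[i][f(v) % self.n] >= 2 for i, f in enumerate(self.fns)]
        return sum(votes) >= 2

    def update(self, v, taken):
        overall = self.predict(v)
        for i, f in enumerate(self.fns):
            idx = f(v) % self.n
            bank_says_taken = self.banks[i][idx] >= 2
            # partial update: a bank that dissented is left untouched
            # when the overall (majority) prediction was correct
            if overall == taken and bank_says_taken != taken:
                continue
            if taken:
                self.banks[i][idx] = min(3, self.banks[i][idx] + 1)
            else:
                self.banks[i][idx] = max(0, self.banks[i][idx] - 1)
```

A total update policy would be obtained by deleting the `continue` branch, so that every bank is trained on every outcome.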
4.2 Design Space
Choosing the information (branch address, history, etc.) that is
used to divide branches into substreams is an open problem. The
purpose of this section is not to discuss the relevance of using some
combination of information or the other, but to show that most conflict
aliasing effects can be removed by using a skewed predictor
organization. For the remainder of this paper, the vector of information
that will be used for recording branch-prediction information
is the concatenation of the branch address and the k bits of global history. Let S be the set of all such vectors V.
The functions f0, f1 and f2 used for indexing the three 2^n-entry banks in the experiments are the same as those proposed for the skewed-associative cache in [13]. Consider the decomposition of the binary representation of vector V in bit substrings (V3, V2, V1), such that V1 and V2 are two n-bit strings. Now consider a function H on n-bit strings that is invertible and linear over GF(2), with inverse H^-1 (one suitable H, based on a one-bit shift with an XOR feedback, is described in [13]), where ⊕ is the XOR (exclusive or) operation. We can now define three different mapping functions as follows:

f0(V) = H(V1) ⊕ H^-1(V2)
f1(V) = H(V1) ⊕ H^-1(V2) ⊕ V1
f2(V) = H(V1) ⊕ H^-1(V2) ⊕ V2

Further information about these functions can be found in [13]. The most interesting property of these functions is that if two distinct vectors (V3, V2, V1) and (W3, W2, W1) map to the same entry in a bank, they will not conflict in the other banks if (V2, V1) != (W2, W1). Any other function family exhibiting the same property might be used.
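The dispersion property can be illustrated with a toy sketch on 4-bit substrings. The particular H below (a one-bit shift with XOR feedback) and the exact shape of f0, f1, f2 are illustrative assumptions rather than the paper's definitions (which are in [13]); the brute-force check confirms that, for this family, two distinct (V2, V1) pairs collide in at most one bank:

```python
BITS = 4
MASK = (1 << BITS) - 1

def H(y):
    # (y3,y2,y1,y0) -> (y2,y1,y0, y3 XOR y2): invertible, linear over GF(2)
    return ((y << 1) & MASK) | (((y >> 3) ^ (y >> 2)) & 1)

def H_inv(z):
    # inverse of H: recover (y3,y2,y1,y0) from (y2,y1,y0, y3 XOR y2)
    y3 = ((z >> 3) ^ z) & 1
    return (y3 << 3) | (((z >> 3) & 1) << 2) | (((z >> 2) & 1) << 1) | ((z >> 1) & 1)

def banks(v1, v2):
    # one possible shape for the three skewing functions
    f0 = H(v1) ^ H_inv(v2)
    return (f0, f0 ^ v1, f0 ^ v2)

# sanity check: H_inv really inverts H
assert all(H_inv(H(y)) == y for y in range(1 << BITS))

# any two distinct (V2, V1) pairs share an index in at most one bank
pairs = [(v1, v2) for v1 in range(1 << BITS) for v2 in range(1 << BITS)]
for i, a in enumerate(pairs):
    for b in pairs[i + 1:]:
        assert sum(x == y for x, y in zip(banks(*a), banks(*b))) <= 1
```

The check passes because H here satisfies H(d) ⊕ H^-1(d) ⊕ d != 0 for every nonzero d, which is what prevents a simultaneous collision in two of the three banks.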
Having defined an implementation of the skewed branch predic-
tor, we are now in a position to evaluate it and check its behavior
against conventional global-history schemes.
For the purposes of comparison, we will use the gshare global
scheme for referencing the standard single-bank organization. The
skewed branch predictor described earlier will also be referred to as
gskewed for the remainder of this paper.
5 Analysis
5.1 Simulation Results
The aim of this section is to evaluate the cost-effectiveness
of the skewed branch predictor via simulation. In the skewed
branch predictor, a prediction associated with a (branch, history)
pair is recorded up to three times. It is intuitive that the impact
of conflict aliasing is lower in a skewed branch predictor than in
a direct-mapped gshare table. However, if the same total number
of predictor storage bits is allocated to each scheme, it is not clear
that gskewed will yield better results - the redundancy that makes
gskewed work also has the effect of increasing the degree of capacity
aliasing among a fixed set of predictor entries. Said differently,
it may be better to simply build a one-bank predictor table 3 times
as large, rather than a 3-bank skewed table.
Figure 5: Misprediction percentage with 4-bit history [per-benchmark plots of misprediction rate vs. number of entries for gshare and gskewed]
Figure 6: Misprediction percentage with 12-bit history [same layout as Figure 5]
Figure 7: Misprediction percentage of 3x4k-gskewed vs. 16k-gshare [misprediction rate vs. history length for each benchmark]
For the direct comparison between gshare and gskewed, we used
2-bit saturating counters and a partial update policy for gskewed.
Varying prediction table size The results for a history size of 4
bits and 12 bits are plotted in Figures 5 and 6, respectively, for a
large spectrum of table sizes.
The interesting region of these graphs is where capacity aliasing for gshare has vanished. In this region, a skewed branch predictor with a partial update policy achieves the same prediction accuracy as a 1-bank predictor, but requires approximately half the storage resources.
For all benchmarks and for a wide spectrum of predictor sizes,
the skewed branch predictor consistently gives better prediction accuracy
than the 1-bank predictor. It should be noted that when using
the skewed branch predictor and a history length of 4 (12), there is
very little benefit in using more than 3x4k (3x16k) entries, while
increasing the number of entries to 64k (256k) on gshare still improves
the prediction accuracy.
Notice that the skewed branch predictor is more able to remove
pathological cases. This appears clearly on Figure 6 for nroff.
Varying history length For any given prediction table size, some
history length is better than others. Figure 7 illustrates the miss rates
of a 3x4k-entry gskewed vs. a 16k-entry gshare when varying the
history length. The plots show that despite using 25 % less storage
resources, gskewed outperforms gshare on all benchmarks except
real gcc.
Varying number of predictor banks We also considered skewed
configurations with five predictor banks. Our simulation results
(not reported here) showed that there is very little benefit to increasing
the number of banks to five; it appears that a 3-bank skewed
branch predictor removes the most significant part of conflict alias-
ing, and a more cost-effective use of resources would be to increase
the size of the banks rather than to increase their number.
Update policy To verify that gskewed is effective in removing
conflict aliasing, we compare a 3xN-entry gskewed branch predictor
with a fully-associative N-entry LRU table. Figure 8 illustrates
this experiment for a global history length of 4 bits and 2-bit saturating
counters. For (address, history) pairs missing in the fully-associative
table, a static prediction always taken was assumed. For
gskewed, both partial-update and total-update policies are shown.
It appears that a 3xN-entry gskewed table with partial update
delivers slightly better behavior than the N-entry fully-associative
table, but when it uses total-update policy, it exhibits slightly worse
behavior. We conclude that a 3xN-entry gskewed predictor with
partial update delivers approximately the same performance as an
N-entry fully-associative LRU predictor.
The reason why partial update is better than total update is in-
tuitive. For partial update, when 2 banks give a good prediction and
the third bank gives a bad prediction, we do not update the third
bank. By not updating the third bank, we enable it to contribute
to the correct prediction of a different substream and effectively increase
the capacity of the predictor table as a whole.
5.2 Analytical Model
Although our simulation results show that a skewed predictor
table offers an attractive alternative to the standard one-bank predictor
structure, they do not provide much explanation as to why
a skewed organization works. In this section, we present an analytical
model that helps to better understand why the technique is
effective.
To make our analytical modeling tractable, we make some simplifying
assumptions: we assume 1-bit automatons and the total
update policy. We begin by defining the table aliasing probabil-
ity. Consider a hashing function F which maps (address, history)
Figure 8: Misprediction percentage of 3N-entry gskewed vs. N-entry fully-associative LRU [misprediction rate vs. number of entries; curves: FA LRU, gskewed TU (total update), gskewed PU (partial update)]
pairs onto a N-entry table. The aliasing probability for a dynamic reference V = (address, history) is defined as follows. Let D be the last-use distance of V, i.e. the number of distinct (address, history) pairs that have been encountered since the last occurrence of V. Assuming F distributes these D vectors equally in the table (i.e., assuming F is a good hashing function), the aliasing probability for dynamic reference V is

p = 1 - (1 - 1/N)^D (1)

When N is much greater than 1, we get a good approximation with

p = 1 - e^(-D/N) (2)

The aliasing probability is a function of the ratio between the last-use distance and the number of entries.
Let p be the aliasing probability, and b be the probability that an (address, history) pair is biased taken. With 1-bit predictors, when an entry is aliased, the probability that the prediction given by that entry differs from the unaliased prediction is

q = 2b(1 - b)

It should be noted that the aliasing is less likely to be destructive if b is close to 0 or 1 than if b is close to 1/2.

Assuming a total update policy, and because we use different hashing functions for indexing the three banks, the events in a bank are not correlated with the events in another bank. Now consider a particular dynamic reference V. Four cases can occur:

1. With probability (1 - p)^3, V is not aliased in any of the three banks: the prediction will be the same as the unaliased prediction.
2. With probability 3p(1 - p)^2, V is aliased in one bank, but not in the other two banks: the resulting majority vote will be in the same direction as the unaliased prediction.
3. With probability 3p^2(1 - p), V is aliased in two banks, but not in the remaining one. With probability q^2, the predictions for both aliased banks are different from the unaliased prediction: the overall prediction is different from the unaliased prediction.
4. With probability p^3, V is aliased in all three banks. With probability 3q^2(1 - q) + q^3, the predictions are different from the unaliased prediction in at least two prediction banks: the skewed prediction is different from the unaliased prediction.

In summary, the probability that a prediction in our 3-bank skewed predictor differs from the unaliased prediction is:

Psk = 3p^2(1 - p)q^2 + p^3(3q^2(1 - q) + q^3) (3)

In contrast, the formula for a direct-mapped 1-bank predictor table is:

Pdm = pq

Pdm and Psk are plotted in Figure 9 for the worst case b = 1/2 (i.e., q = 1/2). We have

Psk = (3p^2 - p^3)/4
Pdm = p/2
The main characteristic of the skewed branch predictor is that
its mispredict probability is a polynomial function of the aliasing
probability. The most relevant region of the curve is where the per-bank aliasing probability, p, is low (Figure 10 magnifies the curve for small aliasing probabilities).
At comparable storage resources, a 3-bank scheme has a greater
per-bank aliasing probability than a 1-bank scheme, because each
bank has a smaller number of entries. By taking into account formula
(1), we find that for a 3x(N/3)-entry gskewed, Psk is lower
than Pdm in a N-entry direct-mapped table when the last-use distance
D is less than approximately N, while for D > N, Psk
exceeds Pdm.
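The model can be checked numerically with a short sketch (p_alias follows formula (1), P_sk follows formula (3), and P_dm is the 1-bank counterpart; the disagreement probability q = 2b(1 - b) is the quantity assumed in the derivation above):

```python
def p_alias(D, N):
    # formula (1): aliasing probability for last-use distance D, N-entry table
    return 1.0 - (1.0 - 1.0 / N) ** D

def P_dm(p, b=0.5):
    # 1-bank direct-mapped predictor table
    q = 2 * b * (1 - b)
    return p * q

def P_sk(p, b=0.5):
    # formula (3): 3-bank skewed predictor under total update
    q = 2 * b * (1 - b)
    return 3 * p**2 * (1 - p) * q**2 + p**3 * (3 * q**2 * (1 - q) + q**3)

# worst case b = 1/2: Psk reduces to (3p^2 - p^3)/4 and Pdm to p/2,
# so at equal per-bank aliasing probability the skewed scheme is never worse
for p in (0.1, 0.5, 0.9):
    assert abs(P_sk(p) - (3 * p**2 - p**3) / 4) < 1e-12
    assert P_sk(p) < P_dm(p)
```

The tradeoff discussed in the text comes from comparing at equal storage: a 3x(N/3)-entry skewed predictor sees a larger per-bank probability, p_alias(D, N/3), than the p_alias(D, N) seen by the N-entry one-bank table.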
Figure 9: Destructive aliasing [mispredict overhead vs. per-bank aliasing probability, 1 bank and 3 banks]
Figure 10: Destructive aliasing [magnified view for small aliasing probabilities]
This highlights the tradeoff that takes place in the skewed branch
predictor: a gain on short last-use distance references is traded for
a loss on long last-use distance references. Now consider a N-entry
fully-associative LRU table. When the last-use distance D is less
than N, there is a hit, otherwise there is a miss. Hence, in a predictor
table, aliasing for short last-use distance references is conflict
aliasing, and aliasing for long last-use distance references is capacity
aliasing.
In other words, the skewed branch predictor trades conflict
aliasing for capacity aliasing.
To verify if our mathematical model is meaningful, we extrapolated
the misprediction rate for gskewed by measuring D for each
dynamic (address, history) pair and applied formulas (1) and (3).
When an (address, history) pair was encountered for the first time,
we applied formula (3) with p = 1. The bias probability b was
evaluated for the entire trace by measuring the density of static (ad-
dress, history) pairs with bias taken, and the value found was then
fed back to the simulator when applying formula (3) on the same
trace. Finally, we added the unaliased misprediction rate of Table 2
(the contribution of compulsory aliasing to mispredictions appears
only in the mispredict overhead).
The results are shown in Figure 11 for a history length of 4. It
should be noted that our model always slightly overestimates the
misprediction rate. This can be explained by the constructive aliasing
phenomenon that is reported in [21].
As noted above, we made some simplifying assumptions when
we devised our analytical model. The difficulty with extending the
model to a partial-update policy is that occurrences of aliasing in
a bank depend on what happens in the other banks. Modeling the
effect of using 2-bit automatons is also difficult because a 2-bit automaton
by itself removes some part of the aliasing effects on prediction accuracy.
Despite the limitations of the model, it effectively explains why
skewed branch prediction works: in a standard one-bank table, the
mispredict overhead increases linearly with the aliasing probability,
but in an M-bank skewed organization, it increases as an M-th degree
polynomial. Because we deal with per-bank aliasing probabili-
ties, which range from 0 to 1, a polynomial growth rate is preferable
to a linear one.
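The contrast between linear and polynomial growth can be checked numerically. The sketch below (an illustration, not the paper's exact overhead formulas) computes the probability that a majority of M independent banks are simultaneously aliased, which for M = 3 reduces to the third-degree polynomial 3p^2 - 2p^3:

```python
from math import comb

def majority_aliased(p, m=3):
    """Probability that a majority of m independent banks are aliased,
    each with per-bank aliasing probability p."""
    need = m // 2 + 1
    return sum(comb(m, k) * p**k * (1 - p)**(m - k) for k in range(need, m + 1))

# For m = 3 this reduces to 3p^2 - 2p^3, a 3rd-degree polynomial in p,
# whereas a single direct-mapped bank is aliased with probability p itself.
for p in (0.01, 0.05, 0.1, 0.5):
    print(p, majority_aliased(p, 3))
```

For small p the majority-vote probability is far below p, which is why the skewed organization wins in the low-aliasing region.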
6 An Enhanced Skewed Branch Predictor
Using a short history vector limits the number of (address, his-
tory) pairs (see the substream ratio column of Table 2) and therefore
the amount of capacity aliasing. On the other hand, using a
long history length leads to better intrinsic prediction accuracy on
unaliased predictors, but results in a large number of (address, his-
tory) pairs. Ideally, given a fixed transistor budget, one would like
to benefit from the better intrinsic prediction accuracy associated
with a long history, and from the lower aliasing rate associated with
a short history. Selecting a good history length is essentially a trade-off
between the accuracy of the unaliased predictor and the aliasing
probability.
While the effect of conflict aliasing on the skewed branch predictor
has been shown to be negligible, capacity aliasing remains
a major issue. In this section we propose an enhancement to the
skewed branch predictor that removes a portion of the capacity-
aliasing effects without suffering from increased conflict aliasing.
In the enhanced skewed branch predictor, the complete information
vector (i.e., branch history and address) is used with the
hashing functions f1 and f2 for indexing bank 1 and bank 2, as
in the previous gskewed scheme. But for function f0 , which indexes
bank 0, we use the usual bit truncation of the branch address (address mod 2^n).
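As an illustration of this indexing scheme, the sketch below contrasts the address-only bank-0 index of enhanced gskewed with placeholder skewing hashes for banks 1 and 2 (the hash functions, bank size and counter encoding are illustrative assumptions, not the paper's actual skewing functions):

```python
N_BITS = 12                  # each bank holds 2**12 = 4K two-bit counters (illustrative)
MASK = (1 << N_BITS) - 1

def f1(addr, hist):
    # placeholder skewing hash over the full (address, history) vector
    return ((addr ^ (hist << 3)) ^ (addr >> N_BITS)) & MASK

def f2(addr, hist):
    # a second, independent placeholder hash
    return ((addr ^ (hist << 7)) ^ (addr >> (N_BITS // 2))) & MASK

def f0_enhanced(addr, hist):
    # enhanced gskewed: bank 0 is indexed by the branch address alone
    return addr & MASK       # address mod 2**n

def predict(banks, addr, hist):
    """Majority vote over three 2-bit counters (a counter >= 2 votes taken)."""
    votes = [banks[0][f0_enhanced(addr, hist)] >= 2,
             banks[1][f1(addr, hist)] >= 2,
             banks[2][f2(addr, hist)] >= 2]
    return sum(votes) >= 2
```

Note that whenever banks 1 and 2 disagree, the majority outcome coincides with the bank-0 vote, which is the property the enhancement exploits.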
The rationale for this modification is as follows:
Consider an enhanced gskewed and gskewed using the same history
length L, and an (address, history) pair (A, H). The pair (A, H) has the same last-use distance DL on the three banks of gskewed and on banks 1 and 2 of enhanced gskewed. But for
enhanced gskewed, only the address is used for indexing bank 0, so
the last-use distance DS of the address A on bank 0 is shorter than
DL .
Two situations can occur:
1. When DL is small compared with the bank size, the aliasing
probability on a bank in either gskewed or enhanced gskewed
is small, and both gskewed and enhanced gskewed are likely
to deliver the same prediction as the unaliased predictor for
history length L, because these predictions will be present in
at least two banks.
2. When DL becomes large compared with the bank size, the
aliasing probability pL on any bank of gskewed or banks 1
and 2 of enhanced gskewed becomes close to 1 (formula (2)
in the previous section).
For both designs, when predictions on banks 1 and 2 differ,
the overall prediction is equal to the prediction on bank 0.
Now, since DS < DL, the aliasing probability pS on bank 0
of enhanced gskewed is lower than the aliasing probability
pL on bank 0 of gskewed. When DL is too high, the better
intrinsic prediction accuracy associated with the long history
on bank 0 in gskewed cannot compensate for the increased
aliasing probability in bank 0.
Our intuition is that when the history length is short, the first
situation will dominate and both predictors will deliver equivalent
Figure 11: Extrapolated vs. measured misprediction percentage (extrapolated and measured gskewed misprediction rates as a function of entries/table, from 512 to 32K entries, one panel per benchmark, including nroff, real, gcc and verilog)
prediction, but for a longer history length, the second situation will
occur more often and enhanced gskewed will deliver better overall
prediction than gskewed.
Simulation results: Figure 12 plots the results of simulations
that vary the history length for a 3x4K-entry enhanced gskewed,
a 3x4K-entry gskewed and a 32K-entry gshare. A partial-update
policy was used in these experiments.
The curves for gskewed and enhanced gskewed are nearly indistinguishable
up to a certain history length. After this point, which
is different for each benchmark, the curves begin to diverge, with
enhanced gskewed exhibiting lower misprediction rates at longer
history lengths.
Based on our simulation results, 8 to 10 seems to be a reasonable
choice for history length for a 3x4K-entry gskewed table, while for
enhanced gskewed, 11 or 12 would be a better choice.
Notice that the 3x4K-entry enhanced gskewed performs as well
as the 32K-entry gshare on all our benchmarks and for all history
lengths, but with less than half of the storage requirements.
7 Conclusions and Future Work
Aliasing effects in branch-predictor tables have been recently
identified as a significant contributor to branch-misprediction rates.
To better understand and minimize this source of prediction error,
we have proposed a new branch-aliasing classification, inspired by
the three-Cs model of cache performance.
Although previous branch-prediction research has shown how
to reduce compulsory and capacity aliasing, little has been done to
reduce conflict aliasing. To that end, we have proposed skewed
branch prediction, a technique that distributes branch predictors
across multiple banks using distinct and independent hashing func-
tions; multiple predictors are read in parallel and a majority vote is
used to arrive at an overall prediction.
Our analytical model explains why skewed branch prediction
works: in a standard one-bank table, the misprediction overhead
increases linearly with the aliasing probability, but in an M-bank
skewed organization, it increases as an M-th degree polynomial.
Because we deal with per-bank aliasing probabilities, which range
from 0 to 1, a polynomial growth rate is preferable to a linear one.
The redundancy in a skewed organization increases the amount
of capacity aliasing, but our simulation results show that this negative
effect is more than compensated for by the reduction in conflict
aliasing when using a partial-update policy.
For tables of 2-bit predictors and equal lengths of global his-
tory, a 3-bank skewed organization consistently outperforms a standard
1-bank organization for all configurations with comparable total
storage requirements. We found the update policy to be an important
factor, with partial update consistently outperforming total
update. Although 5-bank (or greater) configurations are possible,
our simulations showed that the improvement over a 3-bank configuration
is negligible. We also found skewed branch prediction to be
less sensitive to pathological cases (e.g., nroff in Figure 6).
To reduce capacity aliasing further, we proposed the enhanced
skewed branch predictor, which was shown to consistently reach the performance level of a conventional gshare predictor of more than twice its size.
In addition to these performance advantages, skewed organizations
offer a chip designer an additional degree of flexibility when
allocating die area. Die-area constraints, for example, may not permit
increasing a 1-bank predictor table from 16K to 32K, but a
skewed organization offers a middle point: 3 banks of 8K entries
apiece for a total of 24K entries.
In this paper, we have only addressed aliasing on prediction
schemes using a global history vector. The same technique could
be applied to remove aliasing in other prediction methods, including
per-address history schemes [18, 19, 20], or hybrid schemes
[8, 2, 1, 4].
Skewed branch prediction raises some new questions:
- Update Policies: Are there policies other than partial-update
and total-update that offer better performance in skewed or
enhanced skewed branch predictors?
Figure 12: Misprediction percentage of enhanced gskewed (misprediction rate as a function of history length for a 3x4K-entry enhanced gskewed, a 3x4K-entry gskewed and a 32K-entry gshare, one panel per benchmark, including nroff, real, gcc and verilog)
- Distributed Predictor Encodings: In our simulations we adopted the standard 2-bit predictor encodings and simply replicated them across 3 banks. Do there exist alternative "distributed" predictor encodings that are more space efficient, and more robust against aliasing?
- Minimizing Capacity Aliasing: Skewed branch predictors
are very effective in reducing conflict-aliasing effects, but they
do so at the expense of increased capacity aliasing. Do there
exist other techniques, like those used in the enhanced skewed
predictor, that could minimize these effects?
--R
Alternative implementations of hybrid branch predictors.
Branch classification: a new mechanism for improving branch predictor performance.
Analysis of branch prediction via data compression.
Using hybrid branch predictors to improve branch prediction accuracy in the presence of context switches.
An analysis of dynamic branch prediction schemes on system workloads.
Aspects of Cache Memory and Instruction Buffer Performance.
Branch prediction strategies and branch target buffer design.
Combining branch predictors.
Dynamic path-based branch correlation
Improving the accuracy of dynamic branch prediction using branch correlation.
Correlation and aliasing in dynamic branch predictors.
A case for two-way skewed-associative caches
Skewed associative caches.
A study of branch prediction strategies.
Efficient simulation of caches under optimal replacement with applications to miss characterization.
The influence of branch prediction table interference on branch prediction scheme performance.
Coping with code bloat.
Alternative implementations of two-level adaptive branch prediction
A comparison of dynamic branch predictors that use two levels of branch history.
A comparative analysis of schemes for correlated branch prediction.
264266

Scheduling and data layout policies for a near-line multimedia storage architecture

Recent advances in computer technologies have made it feasible to provide multimedia services, such as news distribution and entertainment, via high-bandwidth networks. The storage and retrieval of large multimedia objects (e.g., video) becomes a major design issue of the multimedia information system. While most other works on multimedia storage servers assume an on-line disk storage system, we consider a two-tier storage architecture with a robotic tape library as the vast near-line storage and an on-line disk system as the front-line storage. Magnetic tapes are cheaper, more robust, and have a larger capacity; hence, they are more cost effective for large scale storage systems (e.g., video-on-demand (VOD) systems may store tens of thousands of videos). We study in detail the design issues of the tape sub-system and propose some novel tape-scheduling algorithms which give faster response and require less disk buffer space. We also study the disk-striping policy and the data layout on the tape cartridge in order to fully utilize the throughput of the robotic tape system and to minimize the on-line disk storage space.

1 Introduction
In the past few years, we have witnessed tremendous advances in computer technologies,
such as storage architectures (e.g. fault tolerant disk arrays and parallel I/O architec-
tures), high speed networking systems (e.g., ATM switching technology), compression and
coding algorithms. These advances have made it feasible to provide multimedia services,
such as multimedia mail, news distribution, advertisement, and entertainment, via high
bandwidth networks. Consequently, research in multimedia storage system has received
a lot of attention in recent years. Most of the recent research works have emphasized
upon the investigation of the design of multimedia storage server systems with magnetic
disks as the primary storage. In [2, 7], issues such as real-time playback of multiple audio
channels have been studied. In [17], the author presented a technique for storing video
and audio streams individually on magnetic disk. The same author proposed in [16]
techniques for merging storage patterns of multiple video or audio streams to optimize
the disk space utilization and to maximize the number of simultaneous streams. In [9],
a performance study was carried out on a robotic storage system. In [4, 5], a novel storage
structure known as the staggered striping technique was proposed as an efficient way
for the delivery of multiple video or audio objects with different bandwidth demands to
multiple display stations. In [8], a hierarchical storage server was proposed to support a
continuous display of audio and video objects for a personal computer. In [11], the authors
proposed a cost model for data placement on storage devices. Finally, a prototype
of a continuous media disk storage server was described in [12].
It is a challenging task to implement a cost-effective continuous multimedia storage
system that can store many large multimedia objects (e.g., video), and at the same time,
can allow the retrieval of these objects at their playback bandwidths. For example, a
100 minutes HDTV video requires at least 2 Mbytes/second display bandwidth and 12
Gbytes of storage [3]. A moderate size video library with 1000 videos would then require 12 TBytes of storage. It would not be cost effective to implement and manage such a huge
amount of data all on the magnetic disk subsystem. A cost-effective alternative is to
store these multimedia objects permanently in a robotic tape library and use a pool of
magnetic disks, such as disk arrays [15], for buffering and distribution. In other words,
the multimedia objects reside permanently on tapes, and are loaded onto the disks for
delivery when requested by the disk server. To reduce the tape access delays, the most
actively accessed videos would also be stored in the disks on a long term basis. The disk
array functions as a cache for the objects residing in the tape library, as well as a buffer
for handling the bandwidth mismatch of the tape drive and the multimedia objects.
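The sizing arithmetic behind this two-tier design can be restated in a few lines, using the HDTV figures from the example above:

```python
display_mb_per_s = 2        # HDTV display bandwidth, Mbytes/second (from the text)
minutes = 100               # video length in the example

# object size = display duration x display bandwidth
video_gbytes = display_mb_per_s * minutes * 60 / 1000      # 12 Gbytes per video
library_tbytes = 1000 * video_gbytes / 1000                # 1000 videos -> 12 TBytes
print(video_gbytes, library_tbytes)
```

At this scale, keeping the whole library on magnetic disk is impractical, which motivates the tape library as the permanent store.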
Given the above architecture, this paper aims at the design of a high performance
storage server with the following requirements:
- Minimal disk buffer space between the robotic tape library and the parallel disk
array. Disk space is required for handling the bandwidth mismatch of large multi-media
objects, such as video or HDTV, and the tape subsystem.
- Minimal response time for the request to the multimedia storage system. The
response time of a request to a large multimedia object can be greatly reduced by
organizing the display unit, the network device, the parallel disk and the robotic
tape library as a pipeline such that data flows at the continuous rate of the display
bandwidth of the multimedia object along the pipeline. Since multimedia objects
reside in the tape subsystem, to minimize the system response time, we have to
minimize the tape subsystem response time. Throughout this paper, the tape
subsystem response time is defined as the arrival time of the first byte of data of a
request to the disk array minus the arrival time of the request to the multimedia
storage system.
- Maximal bandwidth utilization of the tape drives. The current tape library architectures
usually have few tape drives. Hence, the bandwidth utilization of tape
drives is a major factor of the average response time and throughput of the storage
server. A better utilization of the bandwidth of tape drives means a higher
throughput of the tape subsystem.
The contribution of this paper is twofold. First, we propose a novel scheduling approach
for the tape subsystem, and we show that the approach can reduce the system
response time, increase the system throughput and lower the disk buffer requirement.
Secondly, we study the disk block organization of the disk subsystem and show how it
can be incorporated with the tape subsystem to support concurrent upload and playback
of large multimedia objects.
The organization of the paper is as follows. We describe the architecture of our
multimedia storage system and present the tape subsystem scheduling algorithms in
Sections 2 and 3 respectively. Then, we discuss the disk buffer requirement for supporting
various tape subsystem scheduling algorithms in Section 4. In Section 5, we describe
the disk block organization and the data layout on the tape cartridge for supporting
concurrent upload and playback of large multimedia objects. In Section 6, we discuss
the performance study, and lastly the conclusion is given in Section 7.
2 System Architecture

Our multimedia storage system consists of a robotic tape library and a parallel disk array.
The robotic tape library has a robotic arm, multiple tape drives, tape cartridges on which multimedia objects reside, and tape cartridge storage cells for placing tape cartridges.
Figure
1 illustrates the architectural view of the multimedia storage server. The robotic
arm, under computer control, can load and unload tape cartridges. To load a tape
cartridge into a tape drive, the system performs the following steps:
1. Wait for a tape drive to become available.
2. If a tape drive is available but occupied by another tape (ex: this is the tape that
was uploaded for a previous request), eject the tape in the drive and unload this
tape to its storage cell in the library. We call these operations as the drive eject
operation and the robot unload operation respectively.
3. Fetch the newly requested tape from its storage cell and load it into the ready
tape drive. We call these operations as the robot load operation and the drive load
operation respectively.
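Using the notation of Table 1, the latency implied by these steps can be sketched as follows; the Ampex-like T_u value below is an assumption (the excerpt does not list it), while T_e and T_l follow Table 2:

```python
def tape_switch_time(T_e, T_u, T_l, drive_occupied=True):
    """Latency from 'drive available' to 'new cartridge loaded'.

    If the drive still holds a previous cartridge, it must first be
    ejected (T_e) and returned to its cell by the robot (T_u); the
    robot then fetches the new cartridge (T_u) and the drive loads
    it (T_l).
    """
    t = T_e + T_u if drive_occupied else 0.0
    return t + T_u + T_l

# Illustrative Ampex DST800-like values (T_u assumed to be 12.5 seconds)
print(tape_switch_time(T_e=4, T_u=12.5, T_l=5))
```

This worst-case switch time is the per-switch overhead that the time-slice algorithms of Section 3 must pay more often than the conventional algorithm.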
When a multimedia object is requested, the multimedia object is first read from
the tape and stored in the disk drives via the memory buffer and the CPU. Then the
multimedia object is played back by retrieving the data blocks of the multimedia object
from the disk drives, at a continuous rate of the object bandwidth, into the main memory
while the storage server sends the data blocks in the main memory to the playback unit
via the network interface. Frequently accessed multimedia objects can be cached in the
disk drives to reduce tape access and improve system response time as well as throughput.
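The disk array's caching role described above can be sketched with a simple LRU replacement policy (illustrative only; the text does not prescribe a particular replacement policy):

```python
from collections import OrderedDict

class DiskCache:
    """LRU cache of multimedia objects held on the disk array (illustrative)."""

    def __init__(self, capacity_gb):
        self.capacity = capacity_gb
        self.used = 0.0
        self.objects = OrderedDict()          # object name -> size in Gbytes

    def access(self, name, size_gb):
        """Return True on a disk hit; on a miss, fetch from tape and cache."""
        if name in self.objects:
            self.objects.move_to_end(name)    # refresh LRU position
            return True
        # evict least recently used objects until the new one fits
        while self.used + size_gb > self.capacity and self.objects:
            _, evicted_size = self.objects.popitem(last=False)
            self.used -= evicted_size
        self.objects[name] = size_gb
        self.used += size_gb
        return False                          # miss: a tape access was needed
```

Every miss translates into a tape-subsystem request, so a higher hit rate directly reduces the load on the few tape drives.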
We define the notations for the robotic tape library in Table 1. These notations are
useful for the performance study in later sections.
Figure 1: Cost-effective multimedia storage server (display stations connected to a parallel disk array with its disk controller and memory buffer, backed by a robotic tape library with tape controller, tape drives, tape cartridges, cartridge cells and a robot arm)
It is important to point out that the parameter values of a robotic tape library can
vary greatly from system to system. For instance, Table 2 shows the typical numbers for
two commercial storage libraries.
3 Tape Subsystem and Scheduling Algorithms
In this section, we describe several tape drive scheduling algorithms for our multimedia
storage system. A typical robotic tape library has one robot arm and a small number of
tape drives. A request to the tape library demands reading (uploading) a multimedia object
from a tape cartridge. The straight-forward algorithm or the conventional algorithm
to schedule a tape drive is to serve requests one by one, i.e. the tape drive reads the whole
multimedia object of the current request to the disk array before reading the multimedia
object of the next request in the queue. Since the number of tape drives is small and the
r       number of robotic arms.
        number of tape drives.
T_l     tape drive load time.
T_e     drive eject time.
T_r     drive rewind time.
T_s     drive search time.
T_u     robot load or unload time.
D_T     drive transfer rate.
B(O)    display bandwidth of object O.
S(O)    size of object O.

Table 1: Notations used for the robotic tape library.
reading time of a multimedia object is quite long 1 , a new request will often have to wait
for an available tape drive. The conventional algorithm performs reasonably well when
the tape drive has a bandwidth lower than the display bandwidth of the multimedia objects
being requested. However, the conventional algorithm would not result in good request response times when the tape drive bandwidth is at least the total display bandwidth of
two or more objects. To illustrate this, suppose the tape library is an Ampex DST800 2
with one tape drive. Consider the situation in which there are two requests for different 100-minute objects, each with a display bandwidth of 2 Mbytes/second. These two requests arrive at the same time when the tape drive is idle. The video object size is
equal to the display duration times the display bandwidth, which is equal to 100 × 60 × 2 = 12000
Mbytes. With the conventional algorithm, the transfer of the first request
starts after a robot load operation and a drive load operation. The response time
1 It takes 1200 seconds to upload a 1-hour HDTV video object with a tape drive of 6 Mbytes/second bandwidth.
2 The parameter values are in Table 2.
Parameter          Exabyte 120     Ampex DST800
average T_l        35.4 seconds    5 seconds
average T_e        16.5 seconds    4 seconds
average T_r                        12-13 seconds
average T_s        45 seconds      15 seconds
Number of tapes    116             256
Tape capacity      5 Gbytes        25 Gbytes

Table 2: Typical parameter values of two commercial storage libraries.
of the first request is:

R1 = T_u + T_l

However, the second request will have to wait for the complete transfer of the first multimedia object, the rewinding of that tape (T_r), the ejection of that tape from the drive (T_e), and the unloading of that tape to its storage cell by the robot (T_u). Then the robot can load the newly requested tape (T_u) and load it into the tape drive (T_l). The response time of the second request is:

R2 = T_u + T_l + S(O1)/D_T + T_r + T_e + T_u + T_u + T_l

where S(O1) is the size of the first object and D_T the drive transfer rate. Hence, the average response time of the two requests is 450 seconds. This scenario is
illustrated in Figure 2
Figure 2: the conventional tape scheduling algorithm (timeline of the transfers of requests 1 and 2, with drive load and drive eject operations)
The major problem about the conventional algorithm is that multiple requests can
arrive within a short period of time and the average request response time is significantly
increased due to the large service time of individual requests. Since the tape drive of the
Ampex system is several times the display bandwidth of the multimedia objects, the tape
drive can serve the two requests in a time-slice manner such that each request receives
about half the bandwidth of the tape drive.
Suppose the tape drive serves the two requests in a time-slice manner with a transfer
period of 300 seconds as illustrated in Figure 3. The two objects are being uploaded into
the disk array at an average rate of 6.5 Mbytes/second 3 . From Figure 3, the response time
of the first and second requests are T_u + T_l seconds and T_u + T_l + 300 + T_e + T_u + T_u + T_l seconds respectively. Hence, the average response time is about 180 seconds, an improvement of 60% over the conventional algorithm. We argue that the time-slice scheduling
Figure 3: the time-slice tape scheduling algorithm (timeline of requests 1 and 2 served in 300-second slices, with drive load and drive eject operations)
algorithm can be implemented with small overheads. In some tape systems, for instance,
the D2 tapes used in the Ampex robot system, have the concept of zones[1]. Zones are
3 The overhead of tape switch is approximately10% of the transfer time. Hence the effective bandwidth
of the tape drive is 13.05 Mbytes/second or 6.5 Mbytes/second for each object.
the places on the tape where the tape can drift to when we stop reading from the tape.
The function of the zone is that the tape drive can start reading from the zone rather
than rewinding to the beginning of the tape when the tape drive reads the tape again.
The time-slice algorithm has the following advantages:
- The average response time is greatly improved in light load conditions.
- In the case that the request of a multimedia object can be canceled after uploading
some or all parts of the object into the disks (ex: customers may want to cancel
the movie due to emergency or the poor entertainment value of the movie), the
waste of tape drive bandwidth for uploading unused parts of multimedia objects is
reduced.
- The time-slice algorithm requires less disk buffer space than the conventional algorithm. The discussion of disk buffer space requirement is given in Section 4.
However, the time-slice algorithm requires more tape switches and therefore incurs
higher tape switch overhead and a higher chance of robot arm contention. Our goal
is to study several versions of the time-slice scheduling algorithm that minimize
the average response time of requests to the multimedia storage system, and to find
the point at which to switch from the time-slice algorithm to the conventional tape
scheduling algorithm. In the rest of this section, we describe each scheduling algorithm
in detail.
3.1 Conventional Algorithms
The conventional algorithm is any non-preemptive scheduling algorithm. As each
request arrives, it joins the request queue. A request in the request queue is said to be
ready if the tape cartridge of the request is not being used to serve another request. The
simplest conventional algorithm is the First-Come-First-Served (FCFS) algorithm, which
selects the oldest ready request in the queue for reading when a tape drive is available.
A disadvantage of the FCFS algorithm is that
the response time of a short request can be greatly increased by any preceding long
requests [18].
Another possible conventional algorithm is the Shortest-Job-First (SJF) algorithm.
The SJF algorithm improves the average response time by serving the ready request
with the shortest service time, where the service time of a request is the time required
to complete the tape switch, the data transfer, and the tape rewind operation of the
request. However, a risk of using the SJF algorithm is the possibility of starvation for
longer requests as long as there is a steady supply of shorter requests.
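The two policies differ only in which ready request is picked next. A minimal sketch in Python (the request fields and the service-time formula are illustrative assumptions, not from the paper):

```python
# Sketch of the two conventional selection policies. A "ready" request is one
# whose tape cartridge is not in use; the service time is tape switch plus
# data transfer plus rewind, as defined in the text.

def service_time(req):
    # Hypothetical fields; all times in seconds, sizes in Mbytes.
    return req["switch_time"] + req["size_mb"] / req["tape_mb_per_s"] + req["rewind_time"]

def pick_fcfs(ready):
    # FCFS: the oldest ready request (smallest arrival time).
    return min(ready, key=lambda r: r["arrival"])

def pick_sjf(ready):
    # SJF: the ready request with the shortest service time; long requests may starve.
    return min(ready, key=service_time)

ready = [
    {"arrival": 0.0, "switch_time": 30, "size_mb": 10800, "tape_mb_per_s": 15, "rewind_time": 12},
    {"arrival": 5.0, "switch_time": 30, "size_mb": 1800,  "tape_mb_per_s": 15, "rewind_time": 12},
]
print(pick_fcfs(ready)["arrival"])  # FCFS serves the older request first
print(pick_sjf(ready)["arrival"])   # SJF serves the shorter request first
```

Here the long request (762 seconds of service) is chosen by FCFS, while SJF jumps to the short one (162 seconds).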
The implementations of the FCFS and SJF algorithms are similar. We separate the
implementation into two cases: (1) there is only a single tape drive in the tape
subsystem, and (2) there are multiple tape drives in the tape subsystem.
Single Tape Drive The implementation of the conventional algorithms is straightforward
and is shown as follows:
procedure conventional();
begin
  while true do
  begin
    if (there is no ready request) then
      wait for a ready request;
    get a ready request from the request queue;
    serve the request;
  end
end
Multiple Tape Drives The implementation of the conventional algorithms consists of
several procedures. Procedure robot is instantiated once and procedure tape is
instantiated N_t times, where each instance of procedure tape corresponds to a physical
tape drive and has a unique ID.
procedure conventional()
begin
  run robot() as a process;
  for i := 0 to NUM_TAPE - 1 do
    run tape(i) as a process;
end
procedure robot();
begin
  while true do
  begin
    /* accept a new request */
    if (a request is ready and a tape drive is available) then
    begin
      get a request from the request queue;
      send the request to an idle tape drive;
    end
    else if (an available drive is occupied) then
      perform the drive unload operation and the robot unload operation;
    else
      wait for a ready request or an occupied available drive;
  end
end
procedure tape(integer id);
begin
  while true do
  begin
    wait for a request from the robot arm;
    serve the request;
  end
end
3.2 Time-slice Algorithms
The time-slice algorithms classify requests into two types: (1) non-active requests and (2)
active requests. Newly arrived requests are first classified as non-active requests and put
into the request queue. A non-active request is said to be ready when the tape cartridge
is not being used for serving another request. Active requests are those requests being
served by the tape drive in a time-slice manner.
Since the time-slice algorithms are viable only if the tape switch overhead is small,
we require that the tape rewind operation be performed only after a request has been
completely served and that the tape search operation be performed only at the beginning
of the service of a request. This implies that two requests on the same tape cannot be
served concurrently. Note that the chance of having two requests on the same tape in the
system is very small because (1) the access distribution of objects is highly skewed:
video rental statistics suggest highly skewed access distributions, such as the 80/20
rule, in which 80 percent of accesses go to the most popular 20 percent of the data [6],
and (2) frequently accessed objects are kept on the disk drives.
The tape switch time is equal to the total time to complete a tape drive eject operation,
a robot unload operation, a robot load operation, a tape drive load operation, and
a tape search operation. In the remainder of the paper, we let H be the maximum
tape switch time. The time-slice algorithms break a request into many tasks, each with
a unique task number. The tasks of a request are served separately in order
of increasing task number. Each request is assigned a time-slice period, s, which is the
maximum service time of a task of the request. The service time of a task includes the
time required for the tape switch and the data transfer of the task. For the last task of
a request, the service time also includes the time required for a tape rewind operation.
There are many possible ways to serve several requests in a time-slice manner. We concentrate
on two representative time-slice algorithms: the Round-robin (RR) algorithm
and the Least Slack (LS) algorithm.
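The slicing of a request into tasks can be sketched as follows. The per-slice transfer bound (s − H) × B_t and the parameter values are assumptions for illustration, and the rewind time of the last task is ignored:

```python
import math

def split_into_tasks(size_mb, s, H, tape_mb_per_s):
    """Break a request into tasks, each fitting one time slice of s seconds.

    Assumption (illustrative, not from the paper): every slice loses H seconds
    to the tape switch, so at most (s - H) * B_t megabytes are transferred per
    slice; the rewind time of the last task is ignored here.
    """
    per_task = (s - H) * tape_mb_per_s          # Mbytes moved in one slice
    n = math.ceil(size_mb / per_task)           # number of tasks needed
    return [min(per_task, size_mb - i * per_task) for i in range(n)]

tasks = split_into_tasks(10800, s=300, H=30, tape_mb_per_s=14.5)
print(len(tasks))   # number of tasks for this request
print(sum(tasks))   # total Mbytes, equal to the object size
```

With these hypothetical numbers, a 10800-Mbyte object becomes three tasks, the last one partial.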
3.2.1 Round-robin Algorithm
In this section, we formally describe the Round-robin algorithm. Let R_1, ..., R_n
be the active requests and R_{n+1}, ..., R_m be the ready non-active requests,
where m >= n. Let O_i be the video object requested by R_i for 1 <= i <= m. Let
s_1, ..., s_n be the time-slice periods assigned to R_1, ..., R_n.
With the Round-robin algorithm, the active requests are served in a round-robin
manner. In each round of service, one task of each active request is served, and the
active requests are served in the same order in every round. In order to satisfy the
bandwidth requirement of active request R_i, the average transfer bandwidth allocated
to R_i must be greater than or equal to the bandwidth of R_i. Formally speaking, the
bandwidth requirement of R_i is satisfied if
(s_i − H) B_t >= B_d(O_i) × (s_1 + ... + s_n).
The Round-robin algorithm maintains this condition for every active request, which
guarantees that the bandwidth requirements of the active requests are satisfied. The
efficiency of the algorithm, the fraction of time spent in data transfer, is defined as:
efficiency = (s_1 + ... + s_n − nH) / (s_1 + ... + s_n).    (1)
When the system is lightly loaded, i.e., the tape drive can serve at least one more
request in addition to the currently active requests, a smaller time-slice period reduces
the average response time because a new arrival is less likely to have to wait for a
long period. However, a smaller time-slice period also means that a smaller number of
active requests can be served simultaneously, thereby increasing the chance that a newly
arrived request has to wait for the completion of an active request. Therefore, different
time-slice periods, or equivalently different efficiencies, of the time-slice algorithm
are required to optimize the average response time at different load conditions. To
simplify our discussion, we assume each request has the same time-slice period in the
rest of the paper unless we state otherwise.
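This trade-off can be made concrete with a small sketch. It assumes equal slices, efficiency (s − H)/s, and the bandwidth condition (s − H) B_t >= n s B_d for n concurrent requests; the condition and all parameter values are illustrative assumptions:

```python
# How the time-slice period s trades efficiency against concurrency.
# Assumptions (illustrative): equal slices, efficiency = (s - H)/s, and the
# bandwidth condition (s - H) * tape_bw >= n * s * display_bw.

def efficiency(s, H):
    return (s - H) / s

def max_active(s, H, tape_bw, display_bw):
    # Largest n with (s - H) * tape_bw >= n * s * display_bw.
    return int((s - H) * tape_bw / (s * display_bw))

H, tape_bw, display_bw = 30.0, 14.5, 2.0   # seconds, Mbytes/s, Mbytes/s
for s in (100.0, 300.0, 600.0):
    print(s, round(efficiency(s, H), 2), max_active(s, H, tape_bw, display_bw))
```

A short slice (s = 100) wastes 30% of the drive on switches and supports fewer concurrent requests than a long slice (s = 300), matching the discussion above.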
The specification of the Round-robin algorithm is:
Simple Round-robin Algorithm. The algorithm assigns each active request
a time-slice period of s > H seconds which has to satisfy the following
conditions:
Condition 1. The tape drive serves requests R_1, ..., R_n in a round-robin
manner with a time-slice period of s seconds.
Condition 2. In each time-slice period, the available time for data transfer
is s − H if the task being served is not the last task of an
active request; otherwise, the available time for data transfer
is s − H minus the rewind time of the tape.
Condition 3. Request R_{n+1}, the oldest ready non-active request, becomes
active if the bandwidth requirements of all n + 1 requests can
still be satisfied, i.e., (s − H) B_t >= (n + 1) s B_d(O_i) for 1 <= i <= n + 1.
The straightforward implementation of the simple Round-robin algorithm is to consider
whether more active requests can be served concurrently at the end of a service
round, i.e., the algorithm evaluates Condition 3 at the end of each service round. We
call this implementation the RR-1 algorithm. Again, we separate the implementation
into two cases: (1) there is only a single tape drive in the tape subsystem, and (2)
there are multiple tape drives in the tape subsystem.
Single Tape Drive
procedure RR-1();
begin
  while true do
  begin
    if (there is no active request and no ready non-active request) then
      wait for a ready non-active request;
    if (the last active request has been served and Condition 3
        of the Simple Round-robin Algorithm is satisfied) then
      accept a ready non-active request;
    get a task from the active task queue;
    serve the task;
  end
end
With the RR-1 algorithm, a newly arrived request has to wait, on average, for one half
of the duration of a service round even when the tape subsystem can serve at least one
more request in addition to the currently active requests. Since the duration of a
service round grows linearly with the number of active requests, the average waiting
time of a request is high when there are several active requests. To improve this
situation, we can check whether one more request can be served by the tape subsystem
after every completion of an active task. We call this improved implementation of the
Round-robin algorithm the RR-2 algorithm.
Multiple Tape Drives For the case of multiple tape drives, we have to consider robot
arm contention, because the tape drives may need to wait for the robot arm to load or
unload. In the worst case, each tape switch requires a robot load operation and a robot
unload operation, so the worst-case robot waiting time is 2 × T_u × (N_t − 1).
Hence, Condition 3 is revised by reducing the available transfer time in each slice
from s − H to s − H − 2 × T_u × (N_t − 1).
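As a rough sketch of this revised admission test: the usable transfer time per slice shrinks by the worst-case robot waiting time. The bandwidth inequality and all parameter values below are illustrative reconstructions, not the paper's exact condition:

```python
# Admission test for one more request with N_t drives, charging each slice the
# worst-case robot-arm wait of 2 * T_u * (N_t - 1) seconds (as argued in the
# text). The bandwidth inequality itself is an illustrative assumption.

def can_admit(n_active, s, H, T_u, N_t, tape_bw, display_bw):
    avail = s - H - 2 * T_u * (N_t - 1)    # usable transfer seconds per slice
    return avail * tape_bw >= (n_active + 1) * s * display_bw

# Hypothetical numbers: 300 s slices, H = 30 s, robot load/unload T_u = 10 s,
# 4 drives, 14.5 Mbytes/s tape, 2 Mbytes/s objects.
print(can_admit(4, s=300, H=30, T_u=10, N_t=4, tape_bw=14.5, display_bw=2.0))
print(can_admit(5, s=300, H=30, T_u=10, N_t=4, tape_bw=14.5, display_bw=2.0))
```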
3.2.2 The Least-slack (LS) Algorithm
Let us study another version of the time-slice algorithm, which can improve the response
time of multimedia requests. In order to maintain the playback continuity of an object,
task i of the request for the object must start to transfer data before the playback
of the data of the previous task i − 1 finishes. We define the latest start time of
transfer (LSTT) of a task of an active request as the latest time at which the task has
to start to transfer data in order to maintain the playback continuity of the requested
object. Formally, the LSTT of task J_i is defined as: the request arrival time plus
the request response time, if J_i is the first task of the request; otherwise, the
finish time of the playback of the data of task J_{i−1}.
The slack time of a task is defined as max(LSTT of the task − current time, 0). Let
C_i be the time required to complete the data transfer of J_i and the tape
rewind operation of J_i (if J_i is the last task of a request). The deadline of a task J_i is
defined as:
deadline(J_i) = LSTT(J_i) + C_i.    (2)
A ready non-active request R can become active when the tasks of R can be served
immediately such that each task of an active request can be served at or before its
LSTT.
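A small sketch of these definitions. The first-task LSTT is taken as arrival time plus the maximum tolerable response time, and the deadline as LSTT + C, which are interpretations of the definitions above; the numeric values are illustrative:

```python
# LSTT, slack, and deadline of a task, per the definitions in the text.
# Interpretation assumptions: first-task LSTT = arrival + response time;
# deadline = LSTT + C. All numbers are illustrative.

def lstt(task_index, arrival, response_time, playback_finish_of_prev):
    # First task: arrival time plus maximum tolerable response time.
    # Later tasks: finish time of the playback of the previous task's data.
    if task_index == 0:
        return arrival + response_time
    return playback_finish_of_prev

def slack(lstt_value, now):
    return max(lstt_value - now, 0.0)

def deadline(lstt_value, C):
    # C = transfer time of the task (plus rewind, for the last task).
    return lstt_value + C

t0 = lstt(0, arrival=0.0, response_time=120.0, playback_finish_of_prev=None)
print(t0, slack(t0, now=40.0), deadline(t0, C=270.0))
```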
LS Algorithm The algorithm serves requests under the following conditions:
Condition 1. Each active task can be served in one time-slice period of s
seconds.
Condition 2. Active tasks are served in ascending order of slack time.
Condition 3. In each time-slice period, the available time for data transfer
is s − H if the task being served is not the last task of an
active request; otherwise, the available time for data transfer
is s − H minus the rewind time of the tape.
Condition 4. The data transfer of each active task can start at or before
the LSTT of the active task.
Condition 5. A ready non-active request can become active if Condition 4
is not violated after the request has become active.
We choose the LS algorithm for tape scheduling because the LS algorithm is optimal
for a single tape system [14] 4 in the sense that if scheduling can be achieved by any
algorithm, it can be achieved by the optimal algorithm.
For the case that the tape subsystem has only one robot arm and one tape drive,
Condition 4 of the LS Algorithm can be rewritten as follows.
Lemma 1 Given a robotic tape library with a single tape drive, let J_1, ..., J_n be the
active tasks listed in ascending order of slack time. If no active task is in service, then
Condition 4 of the LS algorithm is equivalent to the condition that each active task can
be completed at or before its deadline. In other words, Condition 4 of the LS algorithm
is equivalent to the following condition:
current time + (H_1 + C_1) + ... + (H_k + C_k) <= deadline of J_k, for 1 <= k <= n,
where H_i is the tape switch time of J_i.
4 The paper discussed scheduling in single and multiple processors. The case of a single tape drive
robot library is equivalent to the case of a single processor described in the paper.
Proof: Assume there is no active task in service. By Equation (2), an active task can
start its data transfer at or before its LSTT if and only if it can be completed at or
before its deadline. A task J_k can be completed at or before its deadline if and only if
the time between the current time and the deadline of J_k is enough to complete J_k and
all its preceding tasks. Therefore, Condition 4 of the LS algorithm is equivalent to the
stated condition.
Again, we separate the implementation into two cases: (1) there is only a single
tape drive in the tape subsystem, and (2) there are multiple tape drives in the
tape subsystem. The implementation of the LS algorithm for the single-tape case is as
follows:
Single Tape Drive
procedure LS();
begin
  while true do
  begin
    if (there is no request) then
      wait for a new request;
    if (there is a ready request and acceptnew()) then
    begin
      get the oldest ready request;
      put the tasks of the request into the active task queue;
    end
    get the active task with the least slack time;
    serve the task;
  end
end
function acceptnew() : boolean;
begin
  float work;
  pointer x;
  work := 0.0;
  save the active task queue;
  put the tasks of the oldest ready request into the active task queue;
  while (the active task queue is not empty) do
  begin
    x := remove the active task with the least slack time;
    work := work + (tape switch time of x) + (service time of x);
    if (work + current time > deadline of x) then
    begin
      restore the active task queue;
      return(false);
    end
  end
  restore the active task queue;
  return(true);
end
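The acceptnew() test can be sketched in Python: scan the merged tasks in ascending LSTT (slack) order, accumulate the switch-plus-transfer work, and check every deadline. The deadline is taken as LSTT + C, and the task tuples and numbers are illustrative assumptions:

```python
# Feasibility test for admitting a new request, following the single-drive
# lemma: in ascending LSTT (slack) order, the accumulated work of switches and
# transfers must meet every task's deadline, taken here as LSTT + C.
# Task tuples are (lstt, switch_time, service_time); values are illustrative.

def accept_new(active_tasks, new_tasks, now):
    tasks = sorted(active_tasks + new_tasks)   # ascending LSTT = slack order
    work = 0.0
    for lstt, h, c in tasks:
        work += h + c                          # tape switch plus transfer (+ rewind)
        if now + work > lstt + c:              # deadline = LSTT + C
            return False                       # some task would start after its LSTT
    return True

active = [(130.0, 30.0, 270.0), (500.0, 30.0, 270.0)]
print(accept_new(active, [(900.0, 30.0, 270.0)], now=0.0))  # fits
print(accept_new(active, [(550.0, 30.0, 270.0)], now=0.0))  # would miss a deadline
```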
Multiple Tape Drives This implementation consists of two procedures: robot and
tape. Procedure robot performs the following steps repeatedly: accept a ready
request if the request can become active immediately; if there are active
tasks and an idle tape drive, then send the active task with the least slack time to an
idle tape drive; otherwise wait for an idle tape drive or an active task. Procedure tape
repeatedly waits for an active task and performs the sequence of a drive eject operation,
a drive load operation, a data transfer, and a tape rewind operation (for the last task
of a request). Procedure robot is instantiated once and procedure tape is instantiated
N_t times. Each instance of procedure tape has a unique ID.
4 Disk Buffer Space Requirement
In this section, we study the disk buffer requirement of the various scheduling
algorithms that we have described. First, we show that the conventional algorithm (the
FCFS or the SJF algorithm) requires a huge amount of buffer space to achieve its maximum
throughput. The following theorem states the buffer space requirement for the
conventional algorithm.
Theorem 1 If every requested object is of the same size S and the same display bandwidth
B_d(O), then the conventional algorithm requires O((B'_t / B_d(O)) S) buffer space in
order to achieve its maximum throughput, where the sustained tape throughput is B'_t and
it is equal to S B_t / (S + B_t H).
Proof: The tape subsystem achieves its maximum throughput when (1) there is an infinite
number of ready requests and (2) no request has a search time, i.e., the
requested object resides at the beginning of the tape cartridge and the tape drive can
start to read the object right after the drive load operation is done. The sustained
bandwidth of the tape subsystem is then B'_t = S B_t / (S + B_t H), since each request
transfers S bytes in S/B_t + H seconds.
Suppose at time 0 the tape subsystem is idle and starts to serve requests one by one.
In the time interval (0, S/B'_t], data are consumed at the rate of B_d(O) and uploaded
at the rate of B'_t. Hence, at time S/B'_t, (B'_t − B_d(O)) S/B'_t buffer space is
required to hold the accumulated data. In the time interval [S/B'_t, 2S/B'_t], data are
consumed at the rate of 2B_d(O) and uploaded at the rate of B'_t. Therefore, at time
2S/B'_t, (B'_t − B_d(O)) S/B'_t + (B'_t − 2B_d(O)) S/B'_t buffer space is required to
hold the accumulated data. This argument continues until the total object display
throughput matches the tape sustained throughput. To obtain the upper bound on the
buffer requirement, consider a tape system whose sustained throughput B^u_t is an exact
multiple of B_d(O), say B^u_t = n B_d(O); summing the accumulated terms gives a buffer
requirement of (n − 1)S/2. To obtain the lower bound, consider a tape system whose
sustained throughput B^l_t satisfies n B_d(O) < B^l_t <= (n + 1) B_d(O); the same
summation gives a buffer requirement of at least (n − 1)S/2. Therefore, the buffer space
requirement is O((B'_t / B_d(O)) S). For example, the disk buffer size can be
38.22 Gbytes 5 .
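The accumulation argument can be checked numerically. The summation below and all parameter values are illustrative assumptions in the spirit of the proof, not the paper's exact figures:

```python
# Buffer growth under the conventional algorithm (Theorem 1): while requests
# are served one by one at sustained rate Bt_sus, the k-th interval of length
# S / Bt_sus accumulates (Bt_sus - k * Bd) * S / Bt_sus extra Mbytes, until the
# aggregate display rate catches up with the tape. Values are illustrative.

def peak_buffer(S, Bt_sus, Bd):
    total, k = 0.0, 1
    while k * Bd < Bt_sus:                      # display has not caught up yet
        total += (Bt_sus - k * Bd) * S / Bt_sus # surplus of the k-th interval
        k += 1
    return total                                # Mbytes

# e.g. S = 10800 Mbytes, sustained tape rate 13.05 Mbytes/s, display 1.5 Mbytes/s
print(peak_buffer(10800, 13.05, 1.5) / 1024, "Gbytes")
```

With these hypothetical numbers the peak buffer is on the order of 40 Gbytes, the same order of magnitude as the 38.22 Gbytes figure in the text.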
Corollary 1 If there are N_t tape drives in the tape library system, then the disk
buffer requirement is O(N_t (B'_t / B_d(O)) S).
In the following theorem, we state the disk buffer requirement for the Round-robin
time-slice algorithm.
Theorem 2 If R_1, ..., R_n are the active requests and each R_i satisfies the bandwidth
condition (s_i − H) B_t >= B_d(O_i)(s_1 + ... + s_n), then the Round-robin algorithm
achieves the bandwidth requirements of the requested objects O_1, ..., O_n iff the disk
buffer size is at least the sum over i of 2(s_i − H) B_t.
Proof: For each R_i (1 <= i <= n), at least two disk buffers of size (s_i − H) B_t are
required for concurrent uploading and display of object O_i; hence the necessary
condition is proved. Conversely, suppose that for each object O_i there are two disk
buffers b_i1 and b_i2, each of size (s_i − H) B_t. While one buffer is used for uploading
the multimedia object from the tape library, the other buffer is used for displaying
object O_i. At steady state, the maximum period between the time a buffer becomes
available and the time it is refilled from tape is s_1 + ... + s_n. When b_i1 has just
become available, the system starts to output data from the other buffer b_i2 for
display. By the condition of the theorem, b_i2 will not be emptied before the tape drive
finishes uploading data to b_i1. Hence, the bandwidth requirement of O_i is satisfied.
5 equivalent to 1.5 hours of display time
With the same arguments, we have the following corollary for the disk buffer requirement
of the LS algorithm.
Corollary 2 If R_1, ..., R_n are the active requests and each R_i satisfies the
bandwidth condition of Theorem 2, then the LS algorithm achieves the bandwidth
requirements of the requested objects O_1, ..., O_n iff the disk buffer size is at least
the sum over i of 2(s_i − H) B_t.
By Theorem 2 and Corollary 2, the LS and Round-robin algorithms require less buffer
space than the conventional algorithm for the same throughput, because the time-slice
period s can be chosen to be much smaller than the total upload period of the object,
S/B_t.
5 The Disk Subsystem
Since the tape drive bandwidth or the object bandwidth can be higher than the bandwidth
of a single disk drive, we have to use striping techniques to achieve the required
bandwidth of the tape drive or the object. In [4], a novel architecture known as the
Staggered Striping technique was proposed for high-bandwidth objects, such as HDTV
video objects. It has been shown that Staggered Striping has a better throughput than
the simple striping and virtual data replication techniques for various system loads [4]. In
this section, we show how to organize the disk blocks in Staggered Striping together with
the robotic tape subsystem so that (1) the bandwidths of the disks and the tape drives
are matched, and (2) concurrent upload and display of multimedia objects is supported.
5.1 Staggered Striping
We first give a brief review of the staggered striping architecture. With this technique,
an object O is divided into subobjects U_i, which are further divided into M_O fragments
each. A fragment is the unit of data transferred to and from a single disk drive. The disk
drives are clustered into logical groups; the disk drives in the same logical group are
accessed concurrently to retrieve a subobject U_i at a rate equivalent to B_d(O). The
stride, k, is the distance 6 between the first fragment of U_i and the first fragment of
U_{i+1}. The relationships among the above parameters are:
- M_O = ceil(B_d(O) / B_disk), where B_disk is the bandwidth of a single disk drive.
- The size of a subobject = M_O × the size of a fragment.
- A unit of time = the time required for reading a fragment from a single disk drive.
Note that a subobject can be loaded from the disk drives into main memory in one
time unit. To reduce the seek and rotational overheads, the fragment size is chosen to be
a multiple of the size of a cylinder. A typical 1.2 Gbytes disk drive consists of 1635
cylinders of size 756000 bytes each and has a peak transfer rate of 24 Mbit/second, a
minimum disk seek time of 4 milliseconds, a maximum disk seek time of 35 milliseconds,
and a maximum latency of 16.83 milliseconds. For a fragment size of 2 cylinders, the
maximum seek and latency delay times of the first cylinder and the second cylinder
are 51.83 milliseconds and 20.83 milliseconds respectively. The transfer time of two
cylinders is 481 milliseconds. The total service time (including disk seek, latency
delay, and disk transfer time) of a fragment is 553.66 milliseconds. Hence, the seek and
rotational overhead is about 13% of the disk bandwidth 7 . To simplify our discussion, we
assume the fragment size is two cylinders and one unit of time is 0.55 seconds.
6 which is measured in number of disks
7 A further increase in the number of cylinders does not result in much reduction of the overhead. Hence,
a fragment of 2 cylinders is a reasonable assumption.
To illustrate the idea of Staggered Striping, we consider the following example:
Example 1 Figure 4 shows the retrieval pattern of a 5.0 Mbytes/second object in five
2.5 Mbytes/second disk drives. The stride is 1 and MO is 2. When the object is read for
display, subobject U 0 is read from disk drives 0 and 1 and so on.
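The placement rule used later by procedure upload, fragment j of subobject i on disk (i × k + j) mod D, reproduces the layout of Example 1; the small driver below is illustrative:

```python
# Fragment placement under staggered striping: fragment j of subobject i is
# placed on disk (i * k + j) mod D, matching procedure upload in the text.
# Parameters reproduce Example 1: D = 5 disks, stride k = 1, M_O = 2.

def fragment_disk(i, j, k, D):
    return (i * k + j) % D

D, k, M_O = 5, 1, 2
placement = {(i, j): fragment_disk(i, j, k, D) for i in range(10) for j in range(M_O)}
print(placement[(0, 0)], placement[(0, 1)])   # U0.0 -> disk 0, U0.1 -> disk 1
print(placement[(4, 0)], placement[(4, 1)])   # U4.0 -> disk 4, U4.1 -> disk 0
```

Note how subobject U_4 wraps around: its second fragment lands back on disk 0, which is what lets every disk carry an equal share of the object.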
Figure 4: Retrieval pattern of an object.
5.2 Layout of Storage on the Tape
In the following discussion, we assume that (1) staggered striping is used for the storage
and retrieval of objects in the disk drives and, (2) the memory buffer between the tape
drives and the disk drives is much smaller in size than a fragment.
Let the effective bandwidth for the time-slice algorithm be B'_t, which is equal to
((s − H)/s) B_t. We show that the storage layout of an object on the tape must match the
storage layout on the disk drives in order to achieve the maximum throughput of the tape
drive. When the object is displayed, each fragment requires a bandwidth of B_d(O)/M_O.
Therefore, the tape drive produces N_O fragments in a unit of time, where
N_O = floor(B'_t M_O / B_d(O)). The blocks of the N_O fragments are stored on the tape
in a round-robin manner, so that the N_O fragments are produced as N_O continuous
streams of data at the same time. Consider the case described in
Example 1. Suppose the effective tape bandwidth B'_t is such that N_O = 3. If the
subobjects are stored on the tape in the order {U 0:0, U 0:1, U 1:0} 8 , ..., then in the
first time unit, U 0:0, U 0:1, and U 1:0 are read from the tape drive. At the same time,
U 0:0 and U 0:1 are stored on disk drive 0 and disk drive 1. Fragment U 1:0 has to be
discarded and re-read in the next time unit, because disk drive 1 can store only one of
U 0:1 and U 1:0 in one time unit. Since the output rate of the tape drive must match the
input rate of the disk drives, the effective bandwidth of the tape drive is only 5
Mbytes/second and the tape drive bandwidth cannot be fully utilized. On the other hand,
if the storage layout of the object is {U 0:0, U 0:1, U 1:1}, ..., then in each time unit
the output fragments from the tape drive can be stored on 3 consecutive disk drives.
Hence, the bandwidth of the tape
tape drive can be stored in 3 consecutive disk drives. Hence, the bandwidth of the tape
drive is fully utilized. Figure 5 shows the timing diagram for the upload of the object
from the tape drive. From time 2, subobject U 0 can be read from disk drives 0 and 1.
Hence, the object can be displayed at time 2 while the remaining subobjects are being
uploaded into the disk drives from the tape drive. Both the bandwidth of the disk drives
and the tape drive are fully utilized.
We now derive the conditions under which the way that fragments are retrieved from
the disks matches the way that fragments are uploaded from the tape. Let D and k be the
number of disk drives of the disk array and the stride respectively. In the
8 {X, Y, Z} is a representation which shows that the blocks of X, Y, and Z are stored in a round-robin
manner.
Figure 5: Upload pattern of an object.
rest of the section, we assume that the bandwidth of the tape drive is at least
(M_O + 1) × B_d(O) / M_O.
Definition 1 Given an object O which has been uploaded from a tape drive into the disk
array, the retrieval pattern R_O of O is an L × D matrix, where L is the number of time
units required for the retrieval of O from the disk drives and R_O(i, j) is equal to
"U a:b" if fragment U a:b of O is read at time i from disk drive j. R_O(i, j) contains a
blank entry if no fragment is read from disk drive j at time i.
Definition 2 Given an object O, the upload pattern P_O of O is an L × D matrix, where
L is the number of time units required for uploading O from a tape drive into the disk
array and P_O(i, j) is equal to "U a:b" if fragment U a:b is read at time i and stored in
disk drive j. P_O(i, j) contains a blank entry if no fragment is stored in disk drive j
at time i.
Definition 3 The storage pattern L_P of a retrieval or upload pattern P is an L × D
matrix, where L_P(i, j) is the i-th non-blank entry of column j of P; i.e., L_P is
obtained by replacing the blank entries of P by lower non-blank entries of the same
column while preserving the row order of the entries within each column.
Examples of retrieval and upload patterns are shown in Figures 4 and 5 respectively.
The retrieval and upload patterns of Figure 4 and Figure 5 have the same storage pattern,
which is shown in Figure 6.
Figure 6: An example of storage pattern.
Lemma 2 With staggered striping, when an object O is uploaded from a tape drive into
the disk array, the tape drive bandwidth can be fully utilized if
- the tape drive reads N_O fragments of O into N_O different disk drives in each unit
of time; and
- the storage patterns of the retrieval pattern and the upload pattern of O are the same.
Proof: Assume that the retrieval pattern and the upload pattern of O have the same
storage pattern and the tape drive reads NO fragments into NO different disk drives.
Since the retrieval pattern and the upload pattern have the same storage pattern, each
uploaded fragment (from the tape drive) can be retrieved from its storage disk for display.
Since the tape drive reads NO fragments in each unit of time and all uploaded fragments
(from the tape drive) can be retrieved from the storage disks, the bandwidth of the tape
drive is fully utilized.
Definition 4 An object is said to be uniformly distributed over a set of disk drives if
each disk drive contains the same number of fragments of the object.
Theorem 3 With staggered striping, when an object O is uploaded from a tape drive to
the disk array, the tape drive bandwidth can be fully utilized if
1. k and D do not have a common factor greater than 1, i.e., the greatest common
divisor (GCD) of k and D is 1, and
2. the data transfer period, s − H, is a multiple of LCM(D, M_O, N_O)/N_O time units,
where LCM(x, y, z) is the least common multiple of integers x, y, z.
Proof: Suppose the GCD of k and D is 1 and s − H is a multiple of LCM(D, M_O, N_O)/N_O
time units.
Consider the case that the object starts to be uploaded at time 0. At each time unit i,
N_O fragments are stored into N_O disk drives whose indices are given by the mapping
f(x) = (x × k) mod D. Since the GCD of k and D is 1, f is a one-to-one mapping on
{0, ..., D − 1}. If we extend the domain of f to the set of natural numbers, then f is
periodic with period D. This implies that the uploaded fragments can be uniformly
distributed over the disk drives. Hence, at time LCM(D, M_O, N_O)/N_O, the fragments
that have been stored are uniformly distributed over the disk drives of the disk buffer.
Consider the case that the object is played back starting at time 0. At time i, M_O
fragments are retrieved from disk drives (i × k) mod D, ..., (i × k + M_O − 1) mod D.
At time LCM(D, M_O, N_O)/N_O, a whole number of subobjects has been retrieved, and the
same number of fragments has been retrieved from each disk drive. Let O' be the object
consisting of these fragments. The following procedure finds the upload pattern P_O' of
O' which has the same storage pattern as the retrieval pattern R_O':
procedure upload(var upattern : upload pattern; rpattern : retrieval pattern);
var
  count : array [0..D-1] of integer;
  spattern : storage pattern;
  c : integer;
begin
  initialize all the entries in count to 0;
  initialize all the entries in upattern to blank;
  spattern := storage pattern of rpattern;
  for i := 0 to L-1 do
    for j := 0 to N_O - 1 do
    begin
      c := (i*k + j) mod D;
      upattern(i, c) := spattern(count[c], c);
      count[c] := count[c] + 1;
    end
end
With upload pattern P_O', the tape drive reads N_O different fragments into N_O
different disk drives in each time unit, and the storage patterns of the retrieval
pattern and the upload pattern of O' are the same. By Lemma 2, O' can be uploaded at the
maximum throughput of the tape drive. Hence, an object whose size is a multiple of the
size of O' can be uploaded at the maximum throughput of the tape drive. Thus, if the
data transfer period is a multiple of LCM(D, M_O, N_O)/N_O time units, the tape drive
bandwidth can be fully utilized.
For the case of Example 1, the data transfer period must be a multiple of
LCM(5, 2, 3)/3 = 10 time units, or 5.5 seconds. For the case that H = 30 seconds, a
reasonable time-slice period is from 200 to 300 seconds 9 . A video-on-demand system
with a capacity of 1000 100-minute HDTV videos of 2 Mbytes/second bandwidth requires a
storage space of 1000 × 12 Gbytes = 12 TBytes. If 10% of the videos reside on disks,
1.2 TBytes of disk space is required. The number of 1.2 Gbytes disk drives in the disk
array is then 1200, and the data transfer period must be a multiple of
LCM(1200, 2, 3)/3 = 400 time units = 400 × 0.55 seconds = 220 seconds, i.e., the
time-slice period is 250 seconds. Hence, the disk array of 1200 disk drives can be used
as a disk buffer as well as a disk cache.
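Condition 2 of Theorem 3 can be checked directly for both examples in the text:

```python
# Minimum data transfer period (in time units) under Condition 2 of Theorem 3:
# s - H must be a multiple of lcm(D, M_O, N_O) / N_O.
from math import gcd

def lcm(*xs):
    out = 1
    for x in xs:
        out = out * x // gcd(out, x)
    return out

def min_transfer_units(D, M_O, N_O):
    return lcm(D, M_O, N_O) // N_O

print(min_transfer_units(5, 2, 3))      # Example 1: 10 time units = 5.5 seconds
print(min_transfer_units(1200, 2, 3))   # 1200-disk array: 400 time units = 220 seconds
```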
To maximize the tape drive throughput, the maximum output rate of the disk buffer
must be at least the maximum utilized bandwidth of the tape drive. The maximum utilized
bandwidth of the tape drive is given by N_O B_d(O)/M_O. To have an output rate of at
least the maximum utilized bandwidth of the tape drive, the disk buffer must support
concurrent retrieval of at least ceil(N_O/M_O) subobjects. For each tape drive, the
minimum number of disk drives required for buffering is therefore ceil(N_O/M_O) × M_O.
9 For this range of time-slice periods, the tape switch overhead is about 10-15% of the tape drive
bandwidth.
Video data uploaded from the tape drive is first stored in the disk array. The playback
of the video object can start when the cluster of disk drives being uploaded to does not
overlap with the cluster of disk drives holding the first subobject. Hence, the minimum
delay, d, of the disk buffer is the smallest integer n such that, at time n, none of the
disk drives being written holds a fragment of the first subobject. The stride k should
be carefully chosen to minimize the disk buffer delay and improve the overall response
time of the storage server.
6 Performance Evaluation
We evaluate the performance of the scheduling algorithms by computer simulation for
two values of the tape drive bandwidth, the larger being 15 Mbytes/second. We assume
that (1) each tape contains only one object, and hence the search time of each request
is 0 seconds, and (2) a request never waits for a tape. Since frequently accessed
objects are kept on the disk drives, the probability that a request has to wait for a
tape which is being used to serve another request is very low 10 . Hence, the second
assumption causes negligible errors in the simulation results. We assume that the disk
contention between disk reads (generated by playback of objects) and disk writes
(generated by upload of objects) is resolved by delaying disk writes [13] as follows. A
fragment uploaded from the tape is first stored in the memory buffer and written into
its storage disk in an idle period of the disk. This technique smooths out the bursty
data traffic to the disk subsystem and hence improves request response time. In
practice, the additional memory buffer space required by this technique is small because
the aggregate transfer rate of the tape subsystem is much lower than that of the disk
subsystem [13]. The storage size of each object is uniformly distributed between 7200
and 14400 Mbytes. Table 3 shows the major simulation parameters. The results are
presented with 95% confidence intervals, where the length of each confidence interval
is bounded by 1%.
10 The probability is on the order of 0.001 for the parameters of the simulation.
Table 3: Major simulation parameters (columns: Parameter, Case 1, Case 2).
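The workload just described can be generated as in the following sketch; the arrival rate and seed in the example are illustrative choices, and `generate_workload` is a hypothetical helper, not code from the paper.

```python
import random

def generate_workload(arrival_rate_per_hour, n_requests, seed=1):
    """Generate (arrival_time_sec, object_size_mbytes) pairs.

    Inter-arrival times are exponential (Poisson arrival process) and
    object sizes are uniform on (7200, 14400) Mbytes, matching the
    simulation setup described in the text.
    """
    rng = random.Random(seed)
    mean_interarrival = 3600.0 / arrival_rate_per_hour  # seconds
    t = 0.0
    requests = []
    for _ in range(n_requests):
        t += rng.expovariate(1.0 / mean_interarrival)
        size = rng.uniform(7200.0, 14400.0)
        requests.append((t, size))
    return requests

reqs = generate_workload(arrival_rate_per_hour=2.0, n_requests=5)
```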
6.1 Single Tape Drive
We first study the performance of the algorithms in a system with one robot arm and
one tape drive. Here, the request arrival process is Poisson.
Case 1. Tape drive
The maximum throughput of the tape subsystem is 1.95 requests/hour. Table 4
presents the average response time of the FCFS, SJF, RR, and LS algorithms. Blank
entries in the table show that the tape subsystem has reached the maximum utilization
and the system cannot sustain the input requests. The efficiency of RR and LS algorithms
is defined as the percentage of time spent in data transfer. An efficiency of 90% means
that 10% of the time is spent in tape switches. We define the relative response time as
the ratio of a scheduling algorithm's response time to the FCFS algorithm's response
time. The relative response times of the SJF, RR, and LS algorithms are shown in
Figure 7.
Req. Arr. Rate   FCFS     SJF     RR-1    RR-1     RR-1     LS      LS       LS
(req./hr.)       (sec)    (sec)   (sec)   (sec)    (sec)    (sec)   (sec)    (sec)
1.00             1022.84  936.15  867.36  1323.80  2426.81  652.23  1059.95  2069.68

Table 4: Response time vs. request arrival rate.
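As a concrete reading of Table 4 at 1.00 request/hour, the relative response time defined above can be computed as follows; taking the best-performing column for each of RR-1 and LS is this sketch's assumption, since the table does not label its repeated columns.

```python
# Average response times (seconds) at 1.00 req./hr., from Table 4.
fcfs = 1022.84
others = {"SJF": 936.15, "RR-1": 867.36, "LS": 652.23}

# Relative response time: each algorithm's response time divided by FCFS's.
relative = {name: t / fcfs for name, t in others.items()}

# LS has the lowest response time at this arrival rate.
for name, r in sorted(relative.items(), key=lambda kv: kv[1]):
    print(f"{name}: {r:.3f}")
```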
Case 2. Tape drive bandwidth = 15 Mbytes/second.
The maximum throughput of the tape subsystem is 4.72 requests/hour. Here, we
consider a tape subsystem with a higher performance tape drive. The average response
time of the FCFS, SJF, RR, and LS algorithms are tabulated in Table 5. Again, those
Figure 7: The relative response time of the SJF, RR, and LS scheduling algorithms
(relative average response time vs. request arrival rate, req/hour).
blank entries in the table represent cases in which the tape subsystem has reached its
maximum utilization and the system cannot sustain the input requests. The relative
response times of the RR and LS algorithms are shown in Figure 8.
Req. Arr. Rate  FCFS    SJF     RR-1    RR-1    RR-1    RR-2    RR-2    RR-2    LS      LS      LS
(req./hr.)      (sec)   (sec)   (sec)   (sec)   (sec)   (sec)   (sec)   (sec)   (sec)   (sec)   (sec)
2.0             298.70  280.84  145.82  110.13  163.25  104.66  75.62   132.80  106.30  73.92   102.24
2.5             447.42  410.77  250.22  226.21  500.59  162.93  171.68  466.91  169.01  170.92  380.94

Table 5: Response time vs. request arrival rate.
In both cases, the LS algorithm has the best performance in a wide range of request
arrival rates. The simulation result shows that the time-slice algorithms (especially the
LS algorithm) perform better than the FCFS algorithm and the SJF algorithm under a
wide range of request arrival rates.

Figure 8: Relative response time of the SJF, RR-1, RR-2, and LS algorithms (relative
average response time vs. request arrival rate, req/hour).

The SJF algorithm performs better than the FCFS
algorithm for all request arrival rates.
6.2 Multiple Tape Drives
Previous experiments have shown that LS and RR algorithms outperform the FCFS and
SJF algorithms in a wide range of load conditions. In this experiment, we study the
effect of robot arm contention on the LS algorithm.
The system contains 4 tape drives which have a bandwidth of 15.0 Mbytes/second.
The maximum throughput of the tape subsystem is 18.90 requests/hour. The results are
shown in Table 6. A plot of the relative response time vs arrival rate is shown in Figure
9.
In this simulation experiment, we found that over a large range of request arrival
rates, the utilization of the robot arm is very small. For example, the robot arm utilization
is only 0.215 when the request arrival rate is 14.0 requests/hour. Hence, the effect of
robot arm contention is not a major factor in determining the average response time.

Request Arrival Rate (req./hr.)   FCFS    SJF     LS (E=0.9)
2.0                               19.19   19.18   15.13
4.0                               25.11   25.02   16.22
6.0                               36.05   35.60   20.60

Table 6: Multiple tape drives case: response time vs. request arrival rate.
6.3 Throughput Under Finite Disk Buffer
In this section, we study the maximum throughput of the FCFS, SJF, and LS algorithms
with finite disk buffer space. The maximum throughput of each scheduling algorithm is
found using a closed queueing network in which there are 200 clients and each client
initiates a new request immediately after its previous request has been served. Hence, there are
always 200 requests in the system. The maximum throughput of the LS, FCFS, and SJF
algorithms are evaluated for Cases 1 and 2. In each case, the size of each disk buffer is
chosen to be large enough to store the data uploaded from a tape drive in one time-slice
period. The efficiency of the LS algorithm is chosen to be 0.9, and therefore, the time-slice
period is 300 seconds. The disk buffer sizes of Cases 1 and 2 are 1.582 Gbytes and
3.955 Gbytes, respectively.

Figure 9: Relative response time of the SJF and LS algorithms (relative average
response time vs. request arrival rate, req/hour).

The results for Case 1 and Case 2 are shown in Figure 10 and
Figure 11, respectively. From the figures, we observe that the LS algorithm has much
higher throughput (in some cases, a 50% improvement) than the FCFS and SJF
algorithms over a wide range of numbers of disk buffers. The throughput of each algorithm
grows with the number of disk buffers, but the LS algorithm reaches its maximum possible
throughput with about half of the buffer requirement that the FCFS algorithm needs to
achieve its maximum possible throughput. The SJF algorithm performs slightly better
than the FCFS algorithm. The FCFS (or SJF) algorithm performs better than the LS
algorithm by about 10% when the disk buffer space is large enough.
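The closed-network measurement described above can be sketched in miniature. The exponential service-time model and the 1800-second mean below are illustrative assumptions, not the paper's detailed disk/tape model; the point is only that with a fixed client population reissuing requests immediately, a single server is never idle, so throughput is simply completions per unit time.

```python
import random

def closed_network_throughput(mean_service_sec, sim_hours=200.0, seed=1):
    """Estimate the maximum throughput (req./hr.) of a saturated server.

    In a closed network, each client issues a new request as soon as its
    previous one is served, so the server back-to-back processes requests
    and throughput is limited only by the service time.
    """
    rng = random.Random(seed)
    t, served = 0.0, 0
    horizon = sim_hours * 3600.0
    while t < horizon:
        t += rng.expovariate(1.0 / mean_service_sec)  # one service completion
        served += 1
    return served / sim_hours

rate = closed_network_throughput(mean_service_sec=1800.0)  # roughly 2 req./hr.
```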
6.4 Discussion of Results
The results show that the LS and Round-robin algorithms outperform the conventional
algorithms (FCFS and SJF) in a wide range of request arrival rates. In all the cases, the
LS algorithm with 90% efficiency outperforms the FCFS algorithm and the SJF algorithm
when the request arrival rate is below 60% of the maximum throughput of the tape
subsystem.

Figure 10: Maximum throughput of the FCFS, SJF, and LS algorithms (maximum
throughput, req./hr., vs. number of disk buffers).

The conventional algorithms have a better response time when the request
arrival rate is quite high (above 70% of the maximum throughput of the conventional
algorithms). The LS or Round-robin algorithm performs better with
a lower efficiency factor at low request arrival rates and better with a higher efficiency
factor at high request arrival rates. The results also show that the relative response time
of the LS and Round-robin algorithms reaches a minimum at a certain request arrival rate.
This is because the response time is the sum of the waiting time W and the tape switch
time H. At low request arrival rates, H is the major component of the response time.
As the request arrival rate increases from zero, the waiting time of the conventional
algorithms grows faster than that of the LS and Round-robin algorithms, because the LS
and Round-robin algorithms can serve several requests at the same time and hence reduce
the chance of waiting for an available tape drive. Therefore, the relative response time of
the LS and Round-robin algorithms decreases with increasing request arrival rate
when the request arrival rate is low. When the request arrival rate is high enough, the
waiting time of the LS and Round-robin algorithms becomes higher than that of the
conventional algorithms, because the conventional algorithms utilize the tape drive
bandwidth better, which pays off under high load conditions.
Figure 11: Maximum throughput of the FCFS, SJF, and LS algorithms (maximum
throughput, req./hr., vs. number of disk buffers).
7 Concluding Remarks
In this paper, we have proposed a cost-effective near-line storage system for a large
scale multimedia storage server using a robotic tape library. We have studied a class of
novel time-slice scheduling algorithms for the tape subsystem and have shown that under
light to moderate workload, this class of tape scheduling algorithms has better response
time and requires less disk buffer space than the conventional algorithms. Also, we have
shown how our work complements the proposed Staggered Striping architecture [4]:
using our proposed scheduling algorithms, we can organize the data layout on
disks and tape cartridges for concurrent upload and display of large multimedia objects.
From the performance results, the selection of the time-slice value is often more important
than the choice of the time-slice algorithm used. If the request arrival process is known in
advance (i.e. the average request arrival rate and the inter-arrival time distribution are
known), the time-slice value can be adjusted by using pre-computed results (obtained by
either analytical methods or simulations). In practical environments, the request arrival
process is usually not known in advance. One simple method that can be used is to
adjust the time-slice value according to the length of the queue of waiting requests, i.e.,
to use a larger time-slice value when the queue is longer. The mapping from
queue length to time-slice value can be pre-determined by empirical studies. In
general, the optimal time-slice value depends on the request arrival process, the number
of requests waiting for service, and the states of the currently active requests. Further
work is required to find the best way to determine the optimal time-slice value.
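The queue-length heuristic suggested above can be sketched as a simple pre-determined mapping; the breakpoints and time-slice values below are hypothetical, not values from the paper.

```python
def time_slice_for_queue_length(queue_length,
                                table=((0, 100.0), (5, 200.0), (10, 300.0))):
    """Pick a time-slice value (seconds) from the number of waiting requests:
    a longer queue gets a larger time-slice value.

    `table` holds (minimum queue length, time-slice seconds) pairs in
    increasing order; these particular breakpoints are illustrative and
    would in practice be determined by empirical studies.
    """
    slice_sec = table[0][1]
    for threshold, value in table:
        if queue_length >= threshold:
            slice_sec = value
    return slice_sec
```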
--R
The Ampex DST800 Robotic Tape Library Technical Marketing Document.
"A File System for Continuous Media,"
"Channel Coding for Digital HDTV Terrestrial Broadcasting,"
"Staggered Striping in Multi-media Information Systems,"
"A Fault Tolerant Design of a Multimedia Server,"
An Evaluation of New Applications
"Principles of Delay-Sensitive Multimedia Data Storage and Retrieval,"
"On Multimedia Repositories, Personal Com- puters, and Hierarchical Storage Systems,"
"Analysis of Striping Techniques in Robotic Storage Libraries,"
"Video On Demand: Architecture, Systems, and Applications,"
"Using
"The Design of a Storage Server for Continuous Me- dia,"
"Scheduling and Replacement Policies for a Hierarchical Multimedia Storage Server,"
"Multiprocessor Scheduling in a Hard Real-Time Environment,"
"A case for Redundant Arrays of Inexpensive Disks (RAID),"
"Efficient Storage Techniques for Digital Continuous Multimedia,"
"Designing an On-Demand Multimedia Service,"
Operating Systems
"Designing a Multi-User HDTV Storage Server,"
--TR
A case for redundant arrays of inexpensive disks (RAID)
Principles of delay-sensitive multimedia data storage and retrieval
A file system for continuous media
Staggered striping in multimedia information systems
On multimedia repositories, personal computers, and hierarchical storage systems
Tertiary storage
Fault tolerant design of multimedia servers
Efficient Storage Techniques for Digital Continuous Multimedia
Using tertiary storage in video-on-demand servers
--CTR
Kien A. Hua, Ying Cai, Simon Sheu, Patching
M. Y. Y. Leung, J. C. S. Lui, L. Golubchik, Use of Analytical Performance Models for System Sizing and Resource Allocation in Interactive Video-on-Demand Systems Employing Data Sharing Techniques, IEEE Transactions on Knowledge and Data Engineering, v.14 n.3, p.615-637, May 2002
S.-H. Gary Chan, Fouad A. Tobagi, Modeling and Dimensioning Hierarchical Storage Systems for Low-Delay Video Services, IEEE Transactions on Computers, v.52 n.7, p.907-919, July | multimedia storage;scheduling;data layout
264406 | Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer. | A digital computer is generally believed to be an efficient universal computing device; that is, it is believed able to simulate any physical computing device with an increase in computation time by at most a polynomial factor. This may not be true when quantum mechanics is taken into consideration. This paper considers factoring integers and finding discrete logarithms, two problems which are generally thought to be hard on a classical computer and which have been used as the basis of several proposed cryptosystems. Efficient randomized algorithms are given for these two problems on a hypothetical quantum computer. These algorithms take a number of steps polynomial in the input size, e.g., the number of digits of the integer to be factored. | Introduction
One of the first results in the mathematics of computation, which underlies the subsequent
development of much of theoretical computer science, was the distinction between
computable and non-computable functions shown in papers of Church [1936], Turing
[1936], and Post [1936]. Central to this result is Church's thesis, which says that all
computing devices can be simulated by a Turing machine. This thesis greatly simplifies
the study of computation, since it reduces the potential field of study from any of
an infinite number of potential computing devices to Turing machines. Church's thesis
is not a mathematical theorem; to make it one would require a precise mathematical
description of a computing device. Such a description, however, would leave open the
possibility of some practical computing device which did not satisfy this precise mathematical
description, and thus would make the resulting mathematical theorem weaker
than Church's original thesis.
With the development of practical computers, it has become apparent that the distinction
between computable and non-computable functions is much too coarse; computer
scientists are now interested in the exact efficiency with which specific functions
can be computed. This exact efficiency, on the other hand, is too precise a quantity to
work with easily. The generally accepted compromise between coarseness and precision
distinguishes efficiently and inefficiently computable functions by whether the length of
the computation scales polynomially or superpolynomially with the input size. The class
of problems which can be solved by algorithms having a number of steps polynomial in
the input size is known as P.
For this classification to make sense, we need it to be machine-independent. That is,
we need to know that whether a function is computable in polynomial time is independent
of the kind of computing device used. This corresponds to the following quantitative
version of Church's thesis, which Vergis et al. [1986] have called the "Strong Church's
Thesis" and which makes up half of the "Invariance Thesis" of van Emde Boas [1990].
Thesis (Quantitative Church's thesis). Any physical computing device can be simulated
by a Turing machine in a number of steps polynomial in the resources used by the
computing device.
In statements of this thesis, the Turing machine is sometimes augmented with a random
number generator, as it has not yet been determined whether there are pseudorandom
number generators which can efficiently simulate truly random number generators
for all purposes. Readers who are not comfortable with Turing machines may think
instead of digital computers having an amount of memory that grows linearly with the
length of the computation, as these two classes of computing machines can efficiently
simulate each other.
There are two escape clauses in the above thesis. One of these is the word "physical."
Researchers have produced machine models that violate the above quantitative Church's
thesis, but most of these have been ruled out by some reason for why they are not "phys-
ical," that is, why they could not be built and made to work. The other escape clause in
the above thesis is the word "resources," the meaning of which is not completely specified
above. There are generally two resources which limit the ability of digital computers
to solve large problems: time (computation steps) and space (memory). There are more
resources pertinent to analog computation; some proposed analog machines that seem
able to solve NP-complete problems in polynomial time have required the machining of
exponentially precise parts, or an exponential amount of energy. (See Vergis et al. [1986]
and Steiglitz [1988]; this issue is also implicit in the papers of Canny and Reif [1987]
and Choi et al. [1995] on three-dimensional shortest paths.)
For quantum computation, in addition to space and time, there is also a third potentially
important resource, precision. For a quantum computer to work, at least in any
currently envisioned implementation, it must be able to make changes in the quantum
states of objects (e.g., atoms, photons, or nuclear spins). These changes can clearly
not be perfectly accurate, but must contain some small amount of inherent impreci-
sion. If this imprecision is constant (i.e., it does not depend on the size of the input),
then it is not known how to compute any functions in polynomial time on a quantum
computer that cannot also be computed in polynomial time on a classical computer
with a random number generator. However, if we let the precision grow polynomially
in the input size (that is, we let the number of bits of precision grow logarithmically
in the input size), we appear to obtain a more powerful type of computer. Allowing
the same polynomial growth in precision does not appear to confer extra computing
power to classical mechanics, although allowing exponential growth in precision does
[Hartmanis and Simon 1974, Vergis et al. 1986].
As far as we know, what precision is possible in quantum state manipulation is dictated
not by fundamental physical laws but by the properties of the materials and the
architecture with which a quantum computer is built. It is currently not clear which
architectures, if any, will give high precision, and what this precision will be. If the precision
of a quantum computer is large enough to make it more powerful than a classical
computer, then in order to understand its potential it is important to think of precision
as a resource that can vary. Treating the precision as a large constant (even though it is
almost certain to be constant for any given machine) would be comparable to treating
a classical digital computer as a finite automaton - since any given computer has a
fixed amount of memory, this view is technically correct; however, it is not particularly
useful.
Because of the remarkable effectiveness of our mathematical models of computation,
computer scientists have tended to forget that computation is dependent on the laws of
physics. This can be seen in the statement of the quantitative Church's thesis in van
Emde Boas [1990], where the word "physical" in the above phrasing is replaced with
the word "reasonable." It is difficult to imagine any definition of "reasonable" in this
context which does not mean "physically realizable," i.e., that this computing machine
could actually be built and would work.
Computer scientists have become convinced of the truth of the quantitative Church's
thesis through the failure of all proposed counter-examples. Most of these proposed
counter-examples have been based on the laws of classical mechanics; however, the universe
is in reality quantum mechanical. Quantum mechanical objects often behave quite
differently from how our intuition, based on classical mechanics, tells us they should.
It thus seems plausible that the natural computing power of classical mechanics corresponds
to Turing machines,¹ while the natural computing power of quantum mechanics
might be greater.

¹ I believe that this question has not yet been settled and is worthy of further investigation. See
Vergis et al. [1986], Steiglitz [1988], and Rubel [1989]. In particular, turbulence seems a good candidate
for a counterexample to the quantitative Church's thesis because the non-trivial dynamics on many
length scales may make it difficult to simulate on a classical computer.
4 P. W. SHOR
The first person to look at the interaction between computation and quantum mechanics
appears to have been Benioff [1980, 1982a, 1982b]. Although he did not ask
whether quantum mechanics conferred extra power to computation, he showed that reversible
unitary evolution was sufficient to realize the computational power of a Turing
machine, thus showing that quantum mechanics is at least as powerful computationally
as a classical computer. This work was fundamental in making later investigation of
quantum computers possible.
Feynman [1982,1986] seems to have been the first to suggest that quantum mechanics
might be more powerful computationally than a Turing machine. He gave arguments as
to why quantum mechanics might be intrinsically expensive computationally to simulate
on a classical computer. He also raised the possibility of using a computer based on
quantum mechanical principles to avoid this problem, thus implicitly asking the converse
question: by using quantum mechanics in a computer can you compute more efficiently
than on a classical computer? Deutsch [1985, 1989] was the first to ask this question
explicitly. In order to study this question, he defined both quantum Turing machines
and quantum circuits and investigated some of their properties.
The question of whether using quantum mechanics in a computer allows one to
obtain more computational power was more recently addressed by Deutsch and Jozsa
[1992] and Berthiaume and Brassard [1992a, 1992b]. These papers showed that there
are problems which quantum computers can quickly solve exactly, but that classical
computers can only solve quickly with high probability and the aid of a random number
generator. However, these papers did not show how to solve any problem in quantum
polynomial time that was not already known to be solvable in polynomial time with
the aid of a random number generator, allowing a small probability of error; this is
the characterization of the complexity class BPP, which is widely viewed as the class of
efficiently solvable problems.
Further work on this problem was stimulated by Bernstein and Vazirani [1993]. One
of the results contained in their paper was an oracle problem (that is, a problem involving
a "black box" subroutine that the computer is allowed to perform, but for which no code
is accessible) which can be done in polynomial time on a quantum Turing machine but
which requires super-polynomial time on a classical computer. This result was improved
by Simon [1994], who gave a much simpler construction of an oracle problem which takes
polynomial time on a quantum computer but requires exponential time on a classical
computer. Indeed, while Bernstein and Vaziarni's problem appears contrived, Simon's
problem looks quite natural. Simon's algorithm inspired the work presented in this
paper.
Two number theory problems which have been studied extensively but for which no
polynomial-time algorithms have yet been discovered are finding discrete logarithms and
factoring integers [Pomerance 1987, Gordon 1993, Lenstra and Lenstra 1993, Adleman
and McCurley 1995]. These problems are so widely believed to be hard that several
cryptosystems based on their difficulty have been proposed, including the widely used
RSA public key cryptosystem developed by Rivest, Shamir, and Adleman [1978]. We
show that these problems can be solved in polynomial time on a quantum computer
with a small probability of error.
Currently, nobody knows how to build a quantum computer, although it seems as
though it might be possible within the laws of quantum mechanics. Some suggestions
have been made as to possible designs for such computers [Teich et al. 1988, Lloyd 1993,
1994, Cirac and Zoller 1995, DiVincenzo 1995, Sleator and Weinfurter 1995, Barenco et
al. 1995b, Chuang and Yamomoto 1995], but there will be substantial difficulty in building
any of these [Landauer 1995a, Landauer 1995b, Unruh 1995, Chuang et al. 1995,
Palma et al. 1995]. The most difficult obstacles appear to involve the decoherence of
quantum superpositions through the interaction of the computer with the environment,
and the implementation of quantum state transformations with enough precision to give
accurate results after many computation steps. Both of these obstacles become more
difficult as the size of the computer grows, so it may turn out to be possible to build
small quantum computers, while scaling up to machines large enough to do interesting
computations may present fundamental difficulties.
Even if no useful quantum computer is ever built, this research does illuminate
the problem of simulating quantum mechanics on a classical computer. Any method of
doing this for an arbitrary Hamiltonian would necessarily be able to simulate a quantum
computer. Thus, any general method for simulating quantum mechanics with at most
a polynomial slowdown would lead to a polynomial-time algorithm for factoring.
The rest of this paper is organized as follows. In §2, we introduce the model of
quantum computation, the quantum gate array, that we use in the rest of the paper.
In §§3 and 4, we explain two subroutines that are used in our algorithms: reversible
modular exponentiation in §3 and quantum Fourier transforms in §4. In §5, we give
our algorithm for prime factorization, and in §6, we give our algorithm for extracting
discrete logarithms. In §7, we give a brief discussion of the practicality of quantum
computation and suggest possible directions for further work.
2 Quantum computation
In this section we give a brief introduction to quantum computation, emphasizing the
properties that we will use. We will describe only quantum gate arrays, or quantum
acyclic circuits, which are analogous to acyclic circuits in classical computer science.
For other models of quantum computers, see references on quantum Turing machines
[Deutsch 1989, Bernstein and Vazirani 1993, Yao 1993] and quantum cellular automata
[Feynman 1986, Margolus 1986, 1990, Lloyd 1993, Biafore 1994]. If they are allowed
a small probability of error, quantum Turing machines and quantum gate arrays can
compute the same functions in polynomial time [Yao 1993]. This may also be true for
the various models of quantum cellular automata, but it has not yet been proved. This
gives evidence that the class of functions computable in quantum polynomial time with
a small probability of error is robust, in that it does not depend on the exact architecture
of a quantum computer. By analogy with the classical class BPP, this class is called
BQP.
Consider a system with n components, each of which can have two states. Whereas
in classical physics, a complete description of the state of this system requires only n
bits, in quantum physics, a complete description of the state of this system requires
2^n - 1 complex numbers. To be more precise, the state of the quantum system is a
point in a 2^n-dimensional vector space. For each of the 2^n possible classical positions
of the components, there is a basis state of this vector space which we represent, for
example, by |011···0⟩, meaning that the first bit is 0, the second bit is 1, and so on.
Here, the ket notation |x⟩ means that x is a (pure) quantum state. (Mixed states will
not be discussed in this paper, and thus we do not define them; see a quantum theory
book such as Peres [1993] for this definition.) The Hilbert space associated with this
quantum system is the complex vector space with these 2 n states as basis vectors, and
the state of the system at any time is represented by a unit-length vector in this Hilbert
space. As multiplying this state vector by a unit-length complex phase does not change
any behavior of the state, we need only 2^(n+1) - 2 real numbers to completely describe
the state. We represent this superposition of states as

    ∑_{i=1}^{2^n} a_i |S_i⟩,                                           (2.1)

where the amplitudes a_i are complex numbers such that ∑_i |a_i|² = 1 and each |S_i⟩
is a basis vector of the Hilbert space. If the machine is measured (with respect to
this basis) at any particular step, the probability of seeing basis state |S_i⟩ is |a_i|²;
however, measuring the state of the machine projects this state to the observed basis
vector |S_i⟩. Thus, looking at the machine during the computation will invalidate the
rest of the computation. In this paper, we only consider measurements with respect
to the canonical basis. This does not greatly restrict our model of computation, since
measurements in other reasonable bases could be simulated by first using quantum
computation to perform a change of basis and then performing a measurement in the
canonical basis.
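To make the amplitude description concrete, here is a small NumPy sketch (an illustration, not part of the paper) of a two-bit state, its normalization, and its canonical-basis measurement probabilities:

```python
import numpy as np

# Two-bit system; basis order |00>, |01>, |10>, |11>.
# The state (1/sqrt 2)|10> - (1/sqrt 2)|11> as a vector of amplitudes a_i:
state = np.array([0.0, 0.0, 1.0, -1.0], dtype=complex) / np.sqrt(2)

# Unit length: the |a_i|^2 sum to 1.
assert np.isclose(np.vdot(state, state).real, 1.0)

# The probability of observing basis state |S_i> is |a_i|^2.
probs = np.abs(state) ** 2

# Observing |10> projects the state onto that basis vector (renormalized),
# which is why looking at the machine invalidates the rest of the computation.
post = np.zeros(4, dtype=complex)
post[2] = state[2] / abs(state[2])
```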
In order to use a physical system for computation, we must be able to change the
state of the system. The laws of quantum mechanics permit only unitary transformations
of state vectors. A unitary matrix is one whose conjugate transpose is equal to
its inverse, and requiring state transformations to be represented by unitary matrices
ensures that summing the probabilities of obtaining every possible outcome will result
in 1. The definition of quantum circuits (and quantum Turing machines) only allows
local unitary transformations; that is, unitary transformations on a fixed number of
bits. This is physically justified because, given a general unitary transformation on n
bits, it is not at all clear how one would efficiently implement it physically, whereas
two-bit transformations can at least in theory be implemented by relatively simple
physical systems [Cirac and Zoller 1995, DiVincenzo 1995, Sleator and Weinfurter 1995,
Chuang and Yamomoto 1995]. While general n-bit transformations can always be
built out of two-bit transformations [DiVincenzo 1995, Sleator and Weinfurter 1995,
Lloyd 1995, Deutsch et al. 1995], the number required will often be exponential in n
[Barenco et al. 1995a]. Thus, the set of two-bit transformations form a set of building
blocks for quantum circuits in a manner analogous to the way a universal set of classical
gates (such as the AND, OR and NOT gates) form a set of building blocks for classical
circuits. In fact, for a universal set of quantum gates, it is sufficient to take all one-bit
gates and a single type of two-bit gate, the controlled NOT, which negates the second
bit if and only if the first bit is 1.
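The controlled-NOT gate just mentioned (negate the second bit if and only if the first bit is 1) has a simple unitary matrix, which can be checked numerically; this sketch uses the basis order |00⟩, |01⟩, |10⟩, |11⟩:

```python
import numpy as np

# Controlled-NOT: swaps |10> and |11>, leaves |00> and |01> unchanged.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Unitary: its conjugate transpose equals its inverse.
assert np.allclose(CNOT.conj().T @ CNOT, np.eye(4))
```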
Perhaps an example will be informative at this point. A quantum gate can be
expressed as a truth table: for each input basis vector we need to give the output of the
gate. One such gate is:

    |00⟩ → |00⟩
    |01⟩ → |01⟩
    |10⟩ → (1/√2)(|10⟩ + |11⟩)                                        (2.2)
    |11⟩ → (1/√2)(|10⟩ - |11⟩)

Not all truth tables correspond to physically feasible quantum gates, as many truth
tables will not give rise to unitary transformations.
The same gate can also be represented as a matrix. The rows correspond to input
basis vectors. The columns correspond to output basis vectors. The (i, j) entry gives,
when the ith basis vector is input to the gate, the coefficient of the jth basis vector in
the corresponding output of the gate. The truth table above would then correspond to
the following matrix:

    [ 1    0      0       0    ]
    [ 0    1      0       0    ]
    [ 0    0    1/√2    1/√2  ]                                       (2.3)
    [ 0    0    1/√2   -1/√2  ]

A quantum gate is feasible if and only if the corresponding matrix is unitary, i.e., its
inverse is its conjugate transpose.
Suppose our machine is in the superposition of states

    (1/√2)|10⟩ - (1/√2)|11⟩                                           (2.4)

and we apply the unitary transformation represented by (2.2) and (2.3) to this state.
The resulting output will be the result of multiplying the vector (2.4) by the matrix
(2.3). The machine will thus go to the superposition of states

    (1/2)(|10⟩ + |11⟩) - (1/2)(|10⟩ - |11⟩) = |11⟩.                   (2.5)

This example shows the potential effects of interference on quantum computation. Had
we started with either the state |10⟩ or the state |11⟩, there would have been a chance of
observing the state |10⟩ after the application of the gate (2.3). However, when we start
with a superposition of these two states, the probability amplitudes for the state |10⟩
cancel, and we have no possibility of observing |10⟩ after the application of the gate.
Notice that the output of the gate would have been |10⟩ instead of |11⟩ had we started
with the superposition of states

    (1/√2)|10⟩ + (1/√2)|11⟩,                                          (2.6)

which has the same probabilities of being in any particular configuration if it is observed
as does the superposition (2.4).
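Assuming the standard form of the example gate above (identity on |00⟩ and |01⟩, sending |10⟩ and |11⟩ to (1/√2)(|10⟩ ± |11⟩)), the destructive interference described in this passage can be verified numerically:

```python
import numpy as np

s = 1 / np.sqrt(2)
# The example gate: identity on |00>, |01>; mixes |10> and |11>.
U = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, s, s],
              [0, 0, s, -s]], dtype=complex)

minus = np.array([0, 0, s, -s], dtype=complex)  # (1/sqrt 2)(|10> - |11>)
plus = np.array([0, 0, s, s], dtype=complex)    # (1/sqrt 2)(|10> + |11>)

# The |10> amplitudes cancel: the minus superposition maps exactly to |11>,
# while the plus superposition (same observation probabilities!) maps to |10>.
out_minus = U @ minus
out_plus = U @ plus
```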
If we apply a gate to only two bits of a longer basis vector (now our circuit must have
more than two wires), we multiply the gate matrix by the two bits to which the gate is
applied, and leave the other bits alone. This corresponds to multiplying the whole state
by the tensor product of the gate matrix on those two bits with the identity matrix on
the remaining bits.
A quantum gate array is a set of quantum gates with logical "wires" connecting their
inputs and outputs. The input to the gate array, possibly along with extra work bits
that are initially set to 0, is fed through a sequence of quantum gates. The values of
the bits are observed after the last quantum gate, and these values are the output. To
compare gate arrays with quantum Turing machines, we need to add conditions that
make gate arrays a uniform complexity class. In other words, because there is a different
gate array for each size of input, we need to keep the designer of the gate arrays from
hiding non-computable (or hard to compute) information in the arrangement of the
gates. To make quantum gate arrays uniform, we must add two things to the definition
of gate arrays. The first is the standard requirement that the design of the gate array
be produced by a polynomial-time (classical) computation. The second requirement
should be a standard part of the definition of analog complexity classes, although since
analog complexity classes have not been widely studied, this requirement is much less
widely known. This requirement is that the entries in the unitary matrices describing
the gates must be computable numbers. Specifically, the first log n bits of each entry
should be classically computable in time polynomial in n [Solovay 1995]. This keeps
non-computable (or hard to compute) information from being hidden in the bits of the
amplitudes of the quantum gates.
3 Reversible logic and modular exponentiation
The definition of quantum gate arrays gives rise to completely reversible computation.
That is, knowing the quantum state on the wires leading out of a gate tells uniquely
what the quantum state must have been on the wires leading into that gate. This is a
reflection of the fact that, despite the macroscopic arrow of time, the laws of physics appear
to be completely reversible. This would seem to imply that anything built with the
laws of physics must be completely reversible; however, classical computers get around
this fact by dissipating energy and thus making their computations thermodynamically
irreversible. This appears impossible to do for quantum computers because superpositions
of quantum states need to be maintained throughout the computation. Thus,
quantum computers necessarily have to use reversible computation. This imposes extra
costs when doing classical computations on a quantum computer, as is sometimes
necessary in subroutines of quantum computations.
Because of the reversibility of quantum computation, a deterministic computation
is performable on a quantum computer only if it is reversible. Luckily, it has already
been shown that any deterministic computation can be made reversible [Lecerf 1963,
Bennett 1973]. In fact, reversible classical gate arrays have been studied. Much like
the result that any classical computation can be done using NAND gates, there are also
universal gates for reversible computation. Two of these are Toffoli gates [Toffoli 1980]
and Fredkin gates [Fredkin and Toffoli 1982]; these are illustrated in Table 3.1.
The Toffoli gate is just a controlled controlled NOT, i.e., the last bit is negated if
and only if the first two bits are 1. In a Toffoli gate, if the third input bit is set to 1,
then the third output bit is the NAND of the first two input bits. Since NAND is a
Table 3.1: Truth tables for Toffoli and Fredkin gates.

    Toffoli Gate                 Fredkin Gate
    INPUT   OUTPUT               INPUT   OUTPUT
    000     000                  000     000
    001     001                  001     010
    010     010                  010     001
    011     011                  011     011
    100     100                  100     100
    101     101                  101     101
    110     111                  110     110
    111     110                  111     111
universal gate for classical gate arrays, this shows that the Toffoli gate is universal. In
a Fredkin gate, the last two bits are swapped if the first bit is 0, and left untouched if
the first bit is 1. For a Fredkin gate, if the third input bit is set to 0, the second output
bit is the AND of the first two input bits; and if the last two input bits are set to 0
and 1 respectively, the second output bit is the NOT of the first input bit. Thus, both
AND and NOT gates are realizable using Fredkin gates, showing that the Fredkin gate
is universal.
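The truth-table descriptions above translate directly into code. A minimal classical sketch (the function names are ours, not the paper's) checks the NAND, AND, and NOT identities, and that each gate is its own inverse, as reversibility demands:

```python
def toffoli(a, b, c):
    # Controlled-controlled-NOT: the last bit is negated iff a = b = 1.
    return a, b, c ^ (a & b)

def fredkin(a, b, c):
    # The last two bits are swapped iff the first bit is 0.
    return (a, c, b) if a == 0 else (a, b, c)

bits = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]

# Toffoli with third input 1 computes NAND of the first two inputs;
# Fredkin with third input 0 computes AND, and inputs (x, 0, 1) give NOT x.
assert all(toffoli(a, b, 1)[2] == 1 - (a & b) for a in (0, 1) for b in (0, 1))
assert all(fredkin(a, b, 0)[1] == (a & b) for a in (0, 1) for b in (0, 1))
assert all(fredkin(a, 0, 1)[1] == 1 - a for a in (0, 1))

# Both gates are their own inverses.
assert all(toffoli(*toffoli(*t)) == t for t in bits)
assert all(fredkin(*fredkin(*t)) == t for t in bits)
print("all identities hold")
```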
From results on reversible computation [Lecerf 1963, Bennett 1973], we can compute
any polynomial-time function F(x) as long as we keep the input x in the computer. We do
this by adapting the method for computing the function F non-reversibly. These results
can easily be extended to work for gate arrays [Toffoli 1980, Fredkin and Toffoli 1982].
When AND, OR or NOT gates are changed to Fredkin or Toffoli gates, one obtains
both additional input bits, which must be preset to specified values, and additional
output bits, which contain the information needed to reverse the computation. While
the additional input bits do not present difficulties in designing quantum computers,
the additional output bits do, because unless they are all reset to 0, they will affect the
interference patterns in quantum computation. Bennett's method for resetting these bits
to 0 is shown in the top half of Table 3.2. A non-reversible gate array may thus be turned
into a reversible gate array as follows. First, duplicate the input bits as many times as
necessary (since each input bit could be used more than once by the gate array). Next,
keeping one copy of the input around, use Toffoli and Fredkin gates to simulate non-reversible
gates, putting the extra output bits into the RECORD register. These extra
output bits preserve enough of a record of the operations to enable the computation of
the gate array to be reversed. Once the output F (x) has been computed, copy it into a
register that has been preset to zero, and then undo the computation to erase both the
first OUTPUT register and the RECORD register.
To erase x and replace it with F(x), in addition to a polynomial-time algorithm for F,
we also need a polynomial-time algorithm for computing x from F(x); i.e., we need that
F is one-to-one and that both F and F^{−1} are polynomial-time computable. The method
for this computation is given in the whole of Table 3.2. There are two stages to this
computation. The first is the same as before, taking x to (x, F(x)). For the second
stage, shown in the bottom half of Table 3.2, note that if we have a method to compute
F^{−1} non-reversibly in polynomial time, we can use the same technique to reversibly map
F(x) to (F(x), F^{−1}(F(x))) = (F(x), x). However, since this is a reversible computation,
Table 3.2: Bennett's method for making a computation reversible.
we can reverse it to go from (x, F(x)) to F(x). Put together, these two pieces take x to
F(x).
The above discussion shows that computations can be made reversible for only a
constant factor cost in time, but the above method uses as much space as it does time.
If the classical computation requires much less space than time, then making it reversible
in this manner will result in a large increase in the space required. There are methods
that do not use as much space, but use more time, to make computations reversible
[Bennett 1989, Levine and Sherman 1990]. While there is no general method that does
not cause an increase in either space or time, specific algorithms can sometimes be
made reversible without paying a large penalty in either space or time; at the end of this
section we will show how to do this for modular exponentiation, which is a subroutine
necessary for quantum factoring.
The bottleneck in the quantum factoring algorithm, i.e., the piece of the factoring
algorithm that consumes the most time and space, is modular exponentiation.
The modular exponentiation problem is: given n, x, and r, find x^r (mod n).
The best classical method for doing this is to repeatedly square x (mod n), to
obtain x^{2^i} (mod n) for i ≤ log₂ r, and then multiply together a subset of these powers
(mod n) to get x^r (mod n). If we are working with l-bit numbers, this requires O(l) squarings
and multiplications of l-bit numbers (mod n). Asymptotically, the best classical
result for gate arrays for multiplication is the Schönhage–Strassen algorithm
[Schönhage and Strassen 1971, Knuth 1981, Schönhage 1982]. This gives a gate array
for integer multiplication that uses O(l log l log log l) gates to multiply two l-bit numbers.
Thus, asymptotically, modular exponentiation requires O(l² log l log log l) time. Making
this reversible would naïvely cost the same amount in space; however, one can reuse the
space used in the repeated squaring part of the algorithm, and thus reduce the amount
of space needed to essentially that required for multiplying two l-bit numbers; one simple
method for reducing this space (although not the most versatile one) will be given later
in this section. Thus, modular exponentiation can be done in O(l² log l log log l) time
and O(l log l log log l) space.
While the Schönhage–Strassen algorithm is the best multiplication algorithm discovered
to date for large l, it does not scale well for small l. For small numbers, the best
gate arrays for multiplication essentially use elementary-school longhand multiplication
in binary. This method requires O(l²) time to multiply two l-bit numbers, and thus
modular exponentiation requires O(l³) time with this method. These gate arrays can
be made reversible, however, using only O(l) space.
We will now give the method for constructing a reversible gate array that takes only
O(l) space and O(l³) time to compute (a, x^a (mod n)) from a, where a, x, and n are
l-bit numbers. The basic building block used is a gate array that takes b as input and
outputs b + c (mod n). Note that here b is the gate array's input but c and n are built
into the structure of the gate array. Since addition (mod n) is computable in O(log n)
time classically, this reversible gate array can be made with only O(log n) gates and
O(log n) work bits using the techniques explained earlier in this section.
The technique we use for computing x a (mod n) is essentially the same as the classical
method. First, by repeated squaring we compute x^{2^i} (mod n) for all i < l. Then, to
obtain x^a (mod n) we multiply the powers x^{2^i} (mod n) where 2^i appears in the binary
expansion of a. In our algorithm for factoring n, we only need to compute x^a (mod n)
where a is in a superposition of states, but x is some fixed integer. This makes things
much easier, because we can use a reversible gate array where a is treated as input,
but where x and n are built into the structure of the gate array. Thus, we can use the
algorithm described by the following pseudocode; here, a_i represents the ith bit of a in
binary, where the bits are indexed from right to left and the rightmost bit of a is a_0.
    power := 1
    for i := 0 to l − 1
        if a_i = 1 then
            power := power · x^{2^i} (mod n)
        endif
    endfor
The variable a is left unchanged by the code and x^a (mod n) is output as the variable
power. Thus, this code takes the pair of values (a, 1) to (a, x^a (mod n)).
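A direct classical rendering of this pseudocode (the helper name modexp is ours) shows that it agrees with Python's built-in three-argument pow:

```python
def modexp(a, x, n):
    """The pseudocode above: accumulate x**(2**i) (mod n) over the
    bits a_i of a that are 1 (a_0 is the rightmost bit)."""
    power = 1
    square = x % n                     # x**(2**i) (mod n), starting at i = 0
    for i in range(max(a.bit_length(), 1)):
        if (a >> i) & 1:               # bit a_i
            power = (power * square) % n
        square = (square * square) % n
    return power

# The pair (a, 1) is taken to (a, x**a (mod n)):
print(modexp(13, 7, 33))               # 13, and pow(7, 13, 33) == 13
```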
This pseudocode can easily be turned into a gate array; the only hard part of this
is the fourth line, where we multiply the variable power by x^{2^i} (mod n); to do this we
need to use a fairly complicated gate array as a subroutine. Recall that x^{2^i} (mod n)
can be computed classically and then built into the structure of the gate array. Thus,
to implement this line, we need a reversible gate array that takes b as input and gives
bc (mod n) as output, where the structure of the gate array can depend on c and n.
Of course, this step can only be reversible if gcd(c; n) = 1, i.e., if c and n have no
common factors, as otherwise two distinct values of b will be mapped to the same value
of bc (mod n); this case is fortunately all we need for the factoring algorithm. We will
show how to build this gate array in two stages. The first stage is directly analogous
to exponentiation by repeated multiplication; we obtain multiplication from repeated
addition (mod n). Pseudocode for this stage is as follows.
    result := 0
    for i := 0 to l − 1
        if b_i = 1 then
            result := result + 2^i · c (mod n)
        endif
    endfor

Here b_i is the ith bit of b, and the numbers 2^i c (mod n) can be precomputed and built
into the structure of the gate array.
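This stage, too, can be exercised classically; a sketch (helper name ours) with the precomputed summands 2^i c (mod n) made explicit:

```python
def mult_by_c(b, c, n):
    """Compute b*c (mod n) by the repeated-addition loop above; the
    summands 2**i * c (mod n) correspond to the constants built into
    the structure of the gate array."""
    summands = [(c << i) % n for i in range(max(b.bit_length(), 1))]
    result = 0
    for i, s in enumerate(summands):
        if (b >> i) & 1:               # bit b_i
            result = (result + s) % n
    return result

print(mult_by_c(14, 5, 33))            # 4 == (14 * 5) % 33
```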
The above pseudocode takes b as input, and gives (b; bc (mod n)) as output. To
get the desired result, we now need to erase b. Recall that gcd(c, n) = 1, so there is
a c^{−1} with c·c^{−1} ≡ 1 (mod n). Multiplication by this c^{−1} could be used to
reversibly take bc (mod n) to (bc (mod n), bcc^{−1} (mod n)) = (bc (mod n), b). This is
just the reverse of the operation we want, and since we are working with reversible
computing, we can turn this operation around to erase b. The pseudocode for this
follows.
    for i := 0 to l − 1
        if result_i = 1 then
            b := b − 2^i · c^{−1} (mod n)
        endif
    endfor

As before, result_i is the ith bit of result.
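Both stages can be checked together classically. The sketch below (function name ours) performs stage 1, then runs the subtraction loop above; since the total subtracted is result · c^{−1} ≡ bc·c^{−1} ≡ b (mod n), the copy of b is driven exactly to 0 whenever gcd(c, n) = 1:

```python
def mult_and_erase(b, c, n):
    """Stage 1 takes b to (b, b*c mod n); stage 2 runs multiplication by
    c**-1 backwards, subtracting 2**i * c**-1 (mod n) for each set bit
    result_i. Requires gcd(c, n) = 1, as in the text."""
    result = (b * c) % n                  # stage 1 (repeated addition)
    c_inv = pow(c, -1, n)                 # exists because gcd(c, n) = 1
    for i in range(max(result.bit_length(), 1)):
        if (result >> i) & 1:             # bit result_i
            b = (b - (c_inv << i)) % n
    return b, result

print(mult_and_erase(14, 5, 33))          # (0, 4): b erased, b*c mod n kept
```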
Note that at this stage of the computation, b should be 0. However, we did not set b
directly to zero, as this would not have been a reversible operation and thus impossible on
a quantum computer, but instead we did a relatively complicated sequence of operations
which ended with b = 0, and which in fact depended on multiplication being a group
(mod n). At this point, then, we could do something somewhat sneaky: we could
measure b to see if it actually is 0. If it is not, we know that there has been an error
somewhere in the quantum computation, i.e., that the results are worthless and we
should stop the computer and start over again. However, if we do find that b is 0,
then we know (because we just observed it) that it is now exactly 0. This measurement
thus may bring the quantum computation back on track in that any amplitude that b
had for being non-zero has been eliminated. Further, because the probability that we
observe a state is proportional to the square of the amplitude of that state, depending
on the error model, doing the modular exponentiation and measuring b every time that
we know that it should be 0 may have a higher probability of overall success than the
same computation done without the repeated measurements of b; this is the quantum
watchdog (or quantum Zeno) effect [Peres 1993]. The argument above does not actually
show that repeated measurement of b is indeed beneficial, because there is a cost (in time,
if nothing else) of measuring b. Before this is implemented, then, it should be checked
with analysis or experiment that the benefit of such measurements exceeds their cost.
However, I believe that partial measurements such as this one are a promising way of
trying to stabilize quantum computations.
Currently, Schönhage–Strassen is the algorithm of choice for multiplying very large
numbers, and longhand multiplication is the algorithm of choice for small numbers.
There are also multiplication algorithms with efficiencies between these two algorithms,
which are the best algorithms to use for intermediate-length numbers
[Karatsuba and Ofman 1962, Knuth 1981, Schönhage et al. 1994]. It is not clear which
algorithms are best for which size numbers. While this may be known to some extent
for classical computation [Schönhage et al. 1994], using data on which algorithms work
better on classical computers could be misleading for two reasons: First, classical computers
need not be reversible, and the cost of making an algorithm reversible depends
on the algorithm. Second, existing computers generally have multiplication for 32- or
64-bit numbers built into their hardware, and this will increase the optimal changeover
points to asymptotically faster algorithms; further, some multiplication algorithms can
take better advantage of this hardwired multiplication than others. Thus, in order to
program quantum computers most efficiently, work needs to be done on the best way of
implementing elementary arithmetic operations on quantum computers. One tantalizing
fact is that the Schönhage–Strassen fast multiplication algorithm uses the fast Fourier
transform, which is also the basis for all the fast algorithms on quantum computers
discovered to date; it is tempting to speculate that integer multiplication itself might be
speeded up by a quantum algorithm; if possible, this would result in a somewhat faster
asymptotic bound for factoring on a quantum computer, and indeed could even make
breaking RSA on a quantum computer asymptotically faster than encrypting with RSA
on a classical computer.
4 Quantum Fourier transforms
Since quantum computation deals with unitary transformations, it is helpful to be able
to build certain useful unitary transformations. In this section we give a technique for
constructing in polynomial time on quantum computers one particular unitary transfor-
mation, which is essentially a discrete Fourier transform. This transformation will be
given as a matrix, with both rows and columns indexed by states. These states correspond
to binary representations of integers on the computer; in particular, the rows and
columns will be indexed beginning with 0 unless otherwise specified.
This transformation is as follows. Consider a number a with 0 ≤ a < q for some q
where the number of bits of q is polynomial. We will perform the transformation that
takes the state |a⟩ to the state

    (1/q^{1/2}) Σ_{c=0}^{q−1} exp(2πiac/q) |c⟩.                   (4.1)

That is, we apply the unitary matrix whose (a, c) entry is (1/q^{1/2}) exp(2πiac/q). This Fourier
transform is at the heart of our algorithms, and we call this matrix A_q.
Since we will use A q for q of exponential size, we must show how this transformation
can be done in polynomial time. In this paper, we will give a simple construction for A q
when q is a power of 2 that was discovered independently by Coppersmith [1994] and
Deutsch [see Ekert and Jozsa 1995]. This construction is essentially the standard fast
Fourier transform (FFT) algorithm [Knuth 1981] adapted for a quantum computer; the
following description of it follows that of Ekert and Jozsa [1995]. In the earlier version
of this paper [Shor 1994], we gave a construction for A q when q was in the special class
of smooth numbers with small prime power factors. In fact, Cleve [1994] has shown how
to construct A_q for all smooth numbers q whose prime factors are at most O(log n).
Let us represent an integer a in binary as |a_{l−1} a_{l−2} ... a_0⟩. For the
quantum Fourier transform A_q, we only need to use two types of quantum gates. These
gates are R_j, which operates on the jth bit of the quantum computer:

    R_j = (1/√2) (  1    1
                    1   −1  ),                                    (4.2)

and S_{j,k}, which operates on the bits in positions j and k with j < k:

    S_{j,k} = (  1   0   0   0
                 0   1   0   0
                 0   0   1   0
                 0   0   0   e^{iπ/2^{k−j}}  ).                   (4.3)
To perform a quantum Fourier transform, we apply the matrices in the order (from left
to right)

    R_{l−1} S_{l−2,l−1} R_{l−2} S_{l−3,l−1} S_{l−3,l−2} R_{l−3} ... R_1 S_{0,l−1} ... S_{0,1} R_0;    (4.4)

that is, we apply the gates R_j in reverse order from R_{l−1} to R_0, and between R_{j+1} and
R_j we apply all the gates S_{j,k} where k > j. For example, on 3 bits, the matrices would
be applied in the order R_2 S_{1,2} R_1 S_{0,2} S_{0,1} R_0. To take the Fourier transform A_q when
q = 2^l, we thus need to use l(l + 1)/2 quantum gates: l gates R_j and l(l − 1)/2 gates S_{j,k}.
Applying this sequence of transformations will result in a quantum state
(1/q^{1/2}) Σ_b exp(2πiac/q) |b⟩, where b is the bit-reversal of c, i.e., the binary number
obtained by reading the bits of c from right to left. Thus, to obtain the actual quantum
Fourier transform, we need either to do further computation to reverse the bits of |b⟩
to obtain |c⟩, or to leave these bits in place and read them in reverse order; either
alternative is easy to implement.
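For small l, this construction can be verified by brute force. The following sketch (numpy; dense 2^l × 2^l matrices, so only practical for small l; all function names are ours) builds R_j and S_{j,k} as full matrices, applies them in the stated order, and checks that the result equals the matrix A_q with bit-reversed row indices:

```python
import numpy as np

def R(j, l):
    """Gate R_j on bit j of an l-bit register (bit 0 = rightmost)."""
    q, s = 2 ** l, 1 / np.sqrt(2)
    M = np.zeros((q, q), dtype=complex)
    for a in range(q):
        bit = (a >> j) & 1
        M[a & ~(1 << j), a] = s                  # component with bit j = 0
        M[a | (1 << j), a] = -s if bit else s    # component with bit j = 1
    return M

def S(j, k, l):
    """Gate S_{j,k}: phase exp(i*pi / 2**(k-j)) iff bits j and k are both 1."""
    q = 2 ** l
    d = [np.exp(1j * np.pi / 2 ** (k - j))
         if ((a >> j) & 1) and ((a >> k) & 1) else 1.0 for a in range(q)]
    return np.diag(d)

def fourier_by_gates(l):
    """Apply, from left to right, R_{l-1} S_{l-2,l-1} R_{l-2} ... S_{0,1} R_0;
    gates applied later multiply on the left."""
    U = np.eye(2 ** l, dtype=complex)
    for j in reversed(range(l)):
        for k in range(l - 1, j, -1):
            U = S(j, k, l) @ U
        U = R(j, l) @ U
    return U

def bitrev(x, l):
    return int(format(x, "0%db" % l)[::-1], 2)

l = 3
q = 2 ** l
U = fourier_by_gates(l)
# Expected: U[b, a] = exp(2*pi*i*a*c/q)/sqrt(q), with c the bit-reversal of b.
A = np.array([[np.exp(2j * np.pi * a * bitrev(b, l) / q) / np.sqrt(q)
               for a in range(q)] for b in range(q)])
print(np.allclose(U, A), np.allclose(U.conj().T @ U, np.eye(q)))
```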
To show that this operation actually performs a quantum Fourier transform, consider
the amplitude of going from |a⟩ to |b⟩. First, the factors of 1/√2 in the R matrices
multiply to produce a factor of 1/q^{1/2} overall; thus we need
only worry about the exp(2πiac/q) phase factor in the expression (4.1). The matrices
S_{j,k} do not change the values of any bits, but merely change their phases. There is thus
only one way to switch the jth bit from a_j to b_j, and that is to use the appropriate entry
in the matrix R_j. This entry adds π to the phase if the bits a_j and b_j are both 1, and
leaves it unchanged otherwise. Further, the matrix S_{j,k} adds π/2^{k−j} to the phase if a_j
and b_k are both 1 and leaves it unchanged otherwise. Thus, the phase on the path from
|a⟩ to |b⟩ is

    Σ_{0≤j<l} π a_j b_j + Σ_{0≤j<k<l} (π/2^{k−j}) a_j b_k.        (4.5)

This expression can be rewritten as

    Σ_{0≤j≤k<l} (π/2^{k−j}) a_j b_k.                              (4.6)

Since c is the bit-reversal of b, this expression can be further rewritten as

    Σ_{0≤j≤k<l} (π/2^{k−j}) a_j c_{l−1−k}.                        (4.7)

Making the substitution k → l − 1 − k in this sum, we get

    Σ_{0≤j+k<l} (2π/2^l) 2^j 2^k a_j c_k.                         (4.8)

Now, since adding multiples of 2π does not affect the phase, we obtain the same phase if
we sum over all j and k less than l, obtaining

    Σ_{0≤j,k<l} (2π/2^l) 2^j 2^k a_j c_k = (2π/2^l) Σ_{0≤j<l} 2^j a_j Σ_{0≤k<l} 2^k c_k,    (4.9)

where the last equality follows from the distributive law of multiplication. Now,
q = 2^l and a = Σ_{0≤j<l} 2^j a_j, and similarly for c, so the above expression is equal to
2πac/q, which is the phase for the amplitude of |a⟩ → |c⟩ in the transformation (4.1).
When k − j is large in the gate S_{j,k} in (4.3), we are multiplying by a very small
phase factor. This would be very difficult to do accurately physically, and thus it would
be somewhat disturbing if this were necessary for quantum computation. Luckily, Coppersmith
[1994] has shown that one can define an approximate Fourier transform that
ignores these tiny phase factors, but which approximates the Fourier transform closely
enough that it can also be used for factoring. In fact, this technique reduces the number
of quantum gates needed for the (approximate) Fourier transform considerably, as it
leaves out most of the gates S_{j,k}.
5 Prime factorization
It has been known since before Euclid that every integer n is uniquely decomposable
into a product of primes. Mathematicians have been interested in the question of how
to factor a number into this product of primes for nearly as long. It was only in the
1970's, however, that researchers applied the paradigms of theoretical computer science
to number theory, and looked at the asymptotic running times of factoring algorithms
[Adleman 1994]. This has resulted in a great improvement in the efficiency of factoring
algorithms. The best factoring algorithm asymptotically is currently the number field
sieve [Lenstra et al. 1990, Lenstra and Lenstra 1993], which in order to factor an integer
n takes asymptotic running time exp(c (log n)^{1/3} (log log n)^{2/3}) for some constant c.
Since the input, n, is only log n bits in length, this algorithm is an exponential-time
algorithm. Our quantum factoring algorithm takes asymptotically O((log n)² (log log n)
(log log log n)) steps on a quantum computer, along with a polynomial (in log n) amount
of post-processing time on a classical computer that is used to convert the output of
the quantum computer to factors of n. While this post-processing could in principle be
done on a quantum computer, there is no reason not to use a classical computer if one
is more efficient in practice.
Instead of giving a quantum computer algorithm for factoring n directly, we give a
quantum computer algorithm for finding the order of an element x in the multiplicative
group (mod n); that is, the least integer r such that x^r ≡ 1 (mod n). It is known that
using randomization, factorization can be reduced to finding the order of an element
[Miller 1976]; we now briefly give this reduction.
To find a factor of an odd number n, given a method for computing the order r
of x, choose a random x (mod n), find its order r, and compute gcd(x^{r/2} − 1, n). Here,
gcd(a, b) is the greatest common divisor of a and b, i.e., the largest integer that divides
both a and b. The Euclidean algorithm [Knuth 1981] can be used to compute gcd(a, b)
in polynomial time. Since (x^{r/2})² ≡ 1 (mod n), the gcd(x^{r/2} − 1, n)
fails to be a non-trivial divisor of n only if r is odd or if x^{r/2} ≡ −1 (mod n). Using this
criterion, it can be shown that this procedure, when applied to a random x (mod n),
yields a factor of n with probability at least 1 − 1/2^{k−1}, where k is the number of distinct
odd prime factors of n. A brief sketch of the proof of this result follows. Suppose that
n = p_1^{a_1} p_2^{a_2} ··· p_k^{a_k}, and let r_i be the order of x (mod p_i^{a_i}).
Then r is the least common multiple
of all the r_i. Consider the largest power of 2 dividing each r_i. The algorithm only fails
if all of these powers of 2 agree: if they are all 1, then r is odd and r/2 does not exist; if
they are all equal and larger than 1, then x^{r/2} ≡ −1 (mod p_i^{a_i})
for every i. By the Chinese remainder theorem [Knuth 1981, Hardy and Wright 1979,
Theorem 121], choosing an x (mod n) at random is the same as choosing for each i a
number x_i (mod p_i^{a_i}) at random, where p_i^{a_i} is the ith prime power factor of n. The
multiplicative group (mod p^α) for any odd prime power p^α is cyclic [Knuth 1981], so for
any odd prime power p_i^{a_i}, the probability is at most 1/2 of choosing an x_i having any
particular power of two as the largest divisor of its order r_i. Thus each of these powers
of 2 has at most a 50% probability of agreeing with the previous ones, so all k of them
agree with probability at most 1/2^{k−1}, and there is at least a 1 − 1/2^{k−1} chance that
the x we choose is good. This scheme will thus work as long as n is odd and not a prime
power; finding factors of prime powers can be done efficiently with classical methods.
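The reduction can be exercised end-to-end classically if the order is found by brute force (the step the quantum algorithm accelerates). A sketch, with helper names that are ours:

```python
from math import gcd
from random import randrange

def order(x, n):
    """Multiplicative order of x (mod n), by brute force; this is the
    subroutine the quantum algorithm replaces."""
    r, y = 1, x % n
    while y != 1:
        y = (y * x) % n
        r += 1
    return r

def find_factor(n, tries=64):
    """Miller's reduction: a random x with even order r and
    x**(r/2) != -1 (mod n) yields a factor via gcd(x**(r/2) - 1, n)."""
    for _ in range(tries):
        x = randrange(2, n)
        g = gcd(x, n)
        if g > 1:
            return g                   # lucky draw already shares a factor
        r = order(x, n)
        if r % 2 == 0 and pow(x, r // 2, n) != n - 1:
            g = gcd(pow(x, r // 2, n) - 1, n)
            if 1 < g < n:
                return g
    raise RuntimeError("no factor found (is n odd and not a prime power?)")

f = find_factor(33)
print(f, 33 // f)
```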
We now describe the algorithm for finding the order of x (mod n) on a quantum
computer. This algorithm will use two quantum registers which hold integers represented
in binary. There will also be some amount of workspace. This workspace gets reset to 0
after each subroutine of our algorithm, so we will not include it when we write down
the state of our machine.
Given x and n, to find the order of x, i.e., the least r such that x^r ≡ 1 (mod n), we
do the following. First, we find q, the power of 2 with n² ≤ q < 2n². We will not include
n, x, or q when we write down the state of our machine, because we never change these
values. In a quantum gate array we need not even keep these values in memory, as they
can be built into the structure of the gate array.
Next, we put the first register in the uniform superposition of states representing
numbers a (mod q). This leaves our machine in state

    (1/q^{1/2}) Σ_{a=0}^{q−1} |a⟩.                                (5.1)

This step is relatively easy, since all it entails is putting each bit in the first register into
the superposition (1/√2)(|0⟩ + |1⟩).
Next, we compute x^a (mod n) in the second register as described in §3. Since we
keep a in the first register this can be done reversibly. This leaves our machine in the
state

    (1/q^{1/2}) Σ_{a=0}^{q−1} |a⟩ |x^a (mod n)⟩.                  (5.2)
We then perform our Fourier transform A_q on the first register, as described in §4,
mapping |a⟩ to

    (1/q^{1/2}) Σ_{c=0}^{q−1} exp(2πiac/q) |c⟩.                   (5.3)

That is, we apply the unitary matrix with the (a, c) entry equal to (1/q^{1/2}) exp(2πiac/q).
This leaves our machine in state

    (1/q) Σ_{a=0}^{q−1} Σ_{c=0}^{q−1} exp(2πiac/q) |c⟩ |x^a (mod n)⟩.    (5.4)
Finally, we observe the machine. It would be sufficient to observe solely the value
of |c⟩ in the first register, but for clarity we will assume that we observe both |c⟩ and
|x^a (mod n)⟩. We now compute the probability that our machine ends in a particular
state |c, x^k (mod n)⟩, where we may assume 0 ≤ k < r. Summing over all possible ways
to reach the state |c, x^k (mod n)⟩, we find that this probability is

    | (1/q) Σ_{a: x^a ≡ x^k} exp(2πiac/q) |²,                     (5.5)

where the sum is over all a, 0 ≤ a < q, such that x^a ≡ x^k (mod n). Because the order
of x is r, this sum is over all a satisfying a ≡ k (mod r). Writing a = br + k, we find
that the above probability is

    | (1/q) Σ_{b=0}^{⌊(q−k−1)/r⌋} exp(2πi(br + k)c/q) |².         (5.6)

We can ignore the term of exp(2πikc/q), as it can be factored out of the sum and has
magnitude 1. We can also replace rc with {rc}_q, where {rc}_q is the residue which is
congruent to rc (mod q) and is in the range −q/2 < {rc}_q ≤ q/2. This leaves us with
the expression

    | (1/q) Σ_{b=0}^{⌊(q−k−1)/r⌋} exp(2πib{rc}_q/q) |².           (5.7)
We will now show that if {rc}_q is small enough, all the amplitudes in this sum will be
in nearly the same direction (i.e., have close to the same phase), and thus make the sum
large. Turning the sum into an integral, we obtain

    (1/q) ∫_0^{⌊(q−k−1)/r⌋} exp(2πib{rc}_q/q) db,                 (5.8)

up to an error term coming from approximating the sum by an integral. If |{rc}_q| ≤ r/2,
this error term is easily seen to be bounded by O(1/q). We now show that if |{rc}_q| ≤ r/2,
the above integral is large, so the probability of obtaining a state |c, x^k (mod n)⟩ is
large. Note that this condition depends only on c and is independent of k. Substituting
u = rb/q in the above integral, we get

    (1/r) ∫_0^{(r/q)⌊(q−k−1)/r⌋} exp(2πi({rc}_q/r)u) du.          (5.9)

Approximating the upper limit of integration by 1 results in only a O(1/q)
error in the above expression. If we do this, we obtain the integral

    (1/r) ∫_0^{1} exp(2πi({rc}_q/r)u) du.                         (5.10)
Figure 5.1: The probability P of observing values of c between 0 and 255, given
q = 256 and r = 10.
Letting {rc}_q/r vary between −1/2 and 1/2, the absolute magnitude of the integral (5.10)
is easily seen to be minimized when {rc}_q/r = ±1/2, in which case the absolute value
of expression (5.10) is 2/(πr). The square of this quantity is a lower bound on the
probability that we see any particular state |c, x^k (mod n)⟩ with |{rc}_q| ≤ r/2; this
probability is thus asymptotically bounded below by 4/(π²r²), and so is at least 1/(3r²)
for sufficiently large n.
The probability of seeing a given state |c, x^k (mod n)⟩ will thus be at least 1/(3r²) if

    −r/2 ≤ {rc}_q ≤ r/2,                                          (5.11)

i.e., if there is a d such that

    −r/2 ≤ rc − dq ≤ r/2.                                         (5.12)

Dividing by rq and rearranging the terms gives

    |c/q − d/r| ≤ 1/(2q).                                         (5.13)
We know c and q. Because q > n², there is at most one fraction d/r with r < n that
satisfies the above inequality. Thus, we can obtain the fraction d/r in lowest terms by
rounding c/q to the nearest fraction having a denominator smaller than n. This fraction
can be found in polynomial time by using a continued fraction expansion of c/q, which
finds all the best approximations of c/q by fractions [Hardy and Wright 1979, Chapter
X; Knuth 1981].
The exact probabilities as given by equation (5.7) for an example case with q = 256
and r = 10 are plotted in Figure 5.1. The value r = 10 could occur when factoring 33
if x were chosen to be 5, for example. Here q is taken smaller than 33² so as to make the
values of c in the plot distinguishable; this does not change the functional structure of
P(c). Note that with high probability the observed value of c is near an integral multiple
of q/r = 256/10.
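The probabilities of equation (5.7) for this example are easy to tabulate numerically; the following sketch (numpy; P sums the expression over the r distinguishable values of the second register) confirms that the probable values of c cluster at the integers nearest the multiples of q/r = 25.6:

```python
import numpy as np

q, r = 256, 10    # the example of Figure 5.1

def P(c):
    """Probability of observing |c>: expression (5.7) summed over the
    r distinguishable values x**k (mod n) of the second register."""
    total = 0.0
    for k in range(r):
        b = np.arange((q - k - 1) // r + 1)        # all a = b*r + k < q
        amp = np.exp(2j * np.pi * (b * r + k) * c / q).sum() / q
        total += abs(amp) ** 2
    return total

probs = np.array([P(c) for c in range(q)])
peaks = sorted(int(c) for c in np.argsort(probs)[-10:])
print(round(probs.sum(), 9))   # 1.0: some value of c is always observed
print(peaks)                   # the ten most probable c's
```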
If we have the fraction d/r in lowest terms, and if d happens to be relatively prime
to r, this will give us r. We will now count the number of states |c, x^k (mod n)⟩ which
enable us to compute r in this way. There are φ(r) possible values of d relatively prime
to r, where φ is Euler's totient function [Knuth 1981; Hardy and Wright 1979, §5.5].
Each of these fractions d/r is close to one fraction c/q with |c/q − d/r| ≤ 1/(2q). There
are also r possible values for x^k, since r is the order of x. Thus, there are rφ(r) states
|c, x^k (mod n)⟩ which would enable us to obtain r. Since each of these states occurs
with probability at least 1/(3r²), we obtain r with probability at least φ(r)/(3r). Using
the theorem that φ(r)/r > δ/log log r for some constant δ [Hardy and Wright 1979,
Theorem 328], this shows that we find r at least a δ/(3 log log r) fraction of the time, so by
repeating this experiment only O(log log r) times, we are assured of a high probability
of success.
In practice, assuming that quantum computation is more expensive than classical
computation, it would be worthwhile to alter the above algorithm so as to perform less
quantum computation and more postprocessing. First, if the observed state is |c⟩, it
would be wise to also try numbers close to c such as c ± 1, c ± 2, ..., since these also
have a reasonable chance of being close to a fraction qd/r. Second, if c/q ≈ d/r, and
d and r have a common factor, it is likely to be small. Thus, if the observed value of
c/q is rounded off to d′/r′ in lowest terms, for a candidate r one should consider not
only r′ but also its small multiples 2r′, 3r′, ..., to see if these are the actual order of x.
Although the first technique will only reduce the expected number of trials required to
find r by a constant factor, the second technique will reduce the expected number of
trials for the hardest n from O(log log n) to O(1) if the first (log n)^{1+ε} multiples of r′ are
considered [Odlyzko 1995]. A third technique is, if two candidate r's have been found,
say r_1 and r_2, to test the least common multiple of r_1 and r_2 as a candidate r. This third
technique is also able to reduce the expected number of trials to a constant [Knill 1995],
and will also work in some cases where the first two techniques fail.
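The third technique is easy to illustrate numerically. A hypothetical example (q = 2048, true order r = 12; two runs observe d values sharing factors with r, so each rounded fraction has denominator only r/gcd(d, r), but the least common multiple of the candidates recovers r):

```python
from fractions import Fraction
from math import lcm

q, r = 2048, 12              # hypothetical: the true order is 12

def candidate(d):
    """Round an observed c/q (c nearest to q*d/r) to d'/r' in lowest
    terms; the denominator r' is the candidate order."""
    c = round(d * q / r)
    return Fraction(c, q).limit_denominator(31).denominator

r1, r2 = candidate(4), candidate(9)      # gcd(4,12) = 4, gcd(9,12) = 3
print(r1, r2, lcm(r1, r2))               # 3 4 12: the lcm recovers r
```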
Note that in this algorithm for determining the order of an element, we did not use
many of the properties of multiplication (mod n). In fact, if we have a permutation
f mapping the set {0, 1, 2, …, n − 1} into itself such that its kth iterate, f^(k)(a), is
computable in time polynomial in log n and log k, the same algorithm will be able to
find the order of an element a under f, i.e., the minimum r such that f^(r)(a) = a.
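As a purely classical illustration of the quantity being computed here, the sketch below (with hypothetical names, and using multiplication by x mod n as the permutation f) finds the order of a under f by walking a's cycle; the quantum algorithm would find r without this linear walk.

```python
def order_under(f, a, limit):
    """Smallest r >= 1 with f^(r)(a) == a, found by walking a's cycle.
    Raises if no return within `limit` steps (f not a permutation on
    the intended domain)."""
    y, r = f(a), 1
    while y != a:
        y, r = f(y), r + 1
        if r > limit:
            raise ValueError("no cycle within limit")
    return r

# multiplication by x = 2 modulo n = 21 permutes the units mod 21;
# the order of 2 mod 21 is 6
print(order_under(lambda y: (2 * y) % 21, 1, 21))  # -> 6
```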
6 Discrete logarithms
For every prime p, the multiplicative group (mod p) is cyclic, that is, there are generators
g such that 1, g, g², …, g^(p−2) comprise all the non-zero residues (mod p) [Hardy and
Wright 1979, Theorem 111; Knuth 1981]. Suppose we are given a prime p and such
P. W. SHOR
a generator g. The discrete logarithm of a number x with respect to p and g is the
integer r with 0 ≤ r < p − 1 such that g^r ≡ x (mod p). The fastest algorithm known for
finding discrete logarithms modulo arbitrary primes p is Gordon's [1993] adaptation of
the number field sieve, which runs in time exp(O((log p)^(1/3) (log log p)^(2/3))). We show how
to find discrete logarithms on a quantum computer with two modular exponentiations
and two quantum Fourier transforms.
This algorithm will use three quantum registers. We first find q, a power of 2 such
that q is close to p, i.e., with p < q < 2p. Next, we put the first two registers in our
quantum computer in the uniform superposition of all |a⟩ and |b⟩ (mod p − 1), and
compute g^a x^(−b) (mod p) in the third register. This leaves our machine in the state

(1/(p − 1)) Σ_{a=0}^{p−2} Σ_{b=0}^{p−2} |a, b, g^a x^(−b) (mod p)⟩.   (6.1)
As before, we use the Fourier transform A_q to send |a⟩ → |c⟩ and |b⟩ → |d⟩ with
probability amplitude (1/q) exp(2πi(ac + bd)/q). That is, we take the state |a, b⟩ to the state

(1/q) Σ_{c=0}^{q−1} Σ_{d=0}^{q−1} exp(2πi(ac + bd)/q) |c, d⟩.   (6.2)

This leaves our quantum computer in the state

(1/((p − 1)q)) Σ_{a,b=0}^{p−2} Σ_{c,d=0}^{q−1} exp(2πi(ac + bd)/q) |c, d, g^a x^(−b) (mod p)⟩.   (6.3)
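For a toy instance this state can be tabulated by brute force. The sketch below uses hypothetical parameters (p = 7, generator g = 3, discrete log r = 4, q = 8); it builds the amplitudes of the post-Fourier-transform state directly from the double sum and checks that they form a normalized distribution.

```python
import cmath
from collections import defaultdict

# Brute-force amplitudes for a toy instance: p = 7, generator g = 3,
# discrete log r = 4 (so x = 3^4 mod 7 = 4), and q = 8, the power of 2
# with p < q < 2p.  These numbers are illustrative choices only.
p, g, r, q = 7, 3, 4, 8
x = pow(g, r, p)
x_inv = pow(x, p - 2, p)            # x^(-1) mod p by Fermat's little theorem

amp = defaultdict(complex)
for a in range(p - 1):
    for b in range(p - 1):
        y = (pow(g, a, p) * pow(x_inv, b, p)) % p
        for c in range(q):
            for d in range(q):
                amp[(c, d, y)] += cmath.exp(2j * cmath.pi * (a * c + b * d) / q)
for k in amp:
    amp[k] /= (p - 1) * q           # the 1/((p-1)q) normalization

total = sum(abs(v) ** 2 for v in amp.values())
print(round(total, 6))  # the Fourier transforms are unitary -> 1.0
```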
Finally, we observe the state of the quantum computer.
The probability of observing a state |c, d, y⟩ with y ≡ g^k (mod p) is

|(1/((p − 1)q)) Σ_{a−rb≡k} exp(2πi(ac + bd)/q)|²,   (6.4)

where the sum is over all (a, b) such that a − rb ≡ k (mod p − 1). Note that we now
have two moduli to deal with, p − 1 and q. While this makes keeping track of things
more confusing, it does not pose serious problems. We now use the relation

a = rb + k − (p − 1)⌊(rb + k)/(p − 1)⌋   (6.5)

and substitute (6.5) in the expression (6.4) to obtain the amplitude on |c, d, g^k (mod p)⟩,
which is

(1/((p − 1)q)) Σ_{b=0}^{p−2} exp((2πi/q)(brc + kc + bd − c(p − 1)⌊(rb + k)/(p − 1)⌋)).   (6.6)

The absolute value of the square of this amplitude is the probability of observing the
state |c, d, g^k (mod p)⟩. We will now analyze the expression (6.6). First, a factor of
FACTORING WITH A QUANTUM COMPUTER 21
exp(2πikc/q) can be taken out of all the terms and ignored, because it does not change
the probability. Next, we split the exponent into two parts and factor out b to obtain

(1/((p − 1)q)) Σ_{b=0}^{p−2} exp(2πibT/q) exp(2πiV/q),   (6.7)

where

T = rc + d − (r/(p − 1)) {c(p − 1)}_q,   (6.8)

and

V = ((rb + k)/(p − 1)) {c(p − 1)}_q − c(p − 1)⌊(rb + k)/(p − 1)⌋.   (6.9)

Here by {z}_q we mean the residue of z (mod q) with −q/2 < {z}_q ≤ q/2, as in equation
(5.7).
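A minimal helper for this signed residue, assuming the convention −q/2 < {z}_q ≤ q/2 stated above:

```python
def signed_residue(z, q):
    """The residue {z}_q of z (mod q), normalized to -q/2 < {z}_q <= q/2."""
    s = z % q
    return s - q if s > q / 2 else s

print(signed_residue(7, 8), signed_residue(4, 8), signed_residue(-1, 8))  # -> -1 4 -1
```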
We next classify possible outputs (observed states) of the quantum computer into
"good" and "bad." We will show that if we get enough "good" outputs, then we will
likely be able to deduce r, and that furthermore, the chance of getting a "good" output
is constant. The idea is that if

|T − jq| ≤ 1/2,   (6.10)

where j is the closest integer to T/q, then as b varies between 0 and p − 2, the phase
of the first exponential term in equation (6.7) only varies over at most half of the unit
circle. Further, if

|{c(p − 1)}_q| ≤ q/12,   (6.11)

then |V| is always at most q/12, so the phase of the second exponential term in equation
(6.7) never is farther than exp(πi/6) from 1. If conditions (6.10) and (6.11) both hold,
we will say that an output is "good." We will show that if both conditions hold, then the
contribution to the probability from the corresponding term is significant. Furthermore,
both conditions will hold with constant probability, and a reasonable sample of c's for
which condition (6.10) holds will allow us to deduce r.
We now give a lower bound on the probability of each good output, i.e., an output
that satisfies conditions (6.10) and (6.11). We know that as b ranges from 0 to p − 2,
the phase of exp(2πibT/q) ranges from 0 to 2πiW where

W = ((T − jq)(p − 2))/q   (6.12)

and j is as in equation (6.10). Thus, the component of the amplitude of the first
exponential in the summand of (6.7) in the direction

exp(πiW)   (6.13)

is at least cos(2π|W/2 − Wb/(p − 2)|). By condition (6.11), the phase can vary by at
most πi/6 due to the second exponential exp(2πiV/q). Applying this variation in the
manner that minimizes the component in the direction (6.13), we get that the component
in this direction is at least

cos(2π|W/2 − Wb/(p − 2)| + π/6).   (6.14)
Thus we get that the absolute value of the amplitude (6.7) is at least

(1/((p − 1)q)) Σ_{b=0}^{p−2} cos(2π|W/2 − Wb/(p − 2)| + π/6).   (6.15)

Replacing this sum with an integral, we get that the absolute value of this amplitude is
at least

(2/q) ∫_0^{1/2} cos(π/6 + 2π|W|u) du + O(W/(pq)).   (6.16)

From condition (6.10), |W| ≤ 1/2, so the error term is O(1/(pq)). As W varies between
−1/2 and 1/2, the integral (6.16) is minimized when |W| = 1/2. Thus, the probability of arriving
at a state |c, d, y⟩ that satisfies both conditions (6.10) and (6.11) is at least

((2/(πq)) ∫_{π/6}^{2π/3} cos u du)²,   (6.17)

or at least 0.054/q² > 1/(20q²).
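The constant in this bound can be checked numerically. The sketch below evaluates (2/π)(sin(2π/3) − sin(π/6)) and squares it; this is the coefficient of 1/q² in the probability bound, taking the amplitude at its minimizing value |W| = 1/2.

```python
import math

# coefficient of 1/q in the amplitude bound at |W| = 1/2:
# (2/pi) * integral of cos(u) from pi/6 to 2*pi/3
amp_coeff = (2 / math.pi) * (math.sin(2 * math.pi / 3) - math.sin(math.pi / 6))
prob_coeff = amp_coeff ** 2          # coefficient of 1/q^2 in the probability
print(round(prob_coeff, 4))  # -> 0.0543
assert prob_coeff > 1 / 20           # i.e., 0.054/q^2 > 1/(20 q^2)
```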
We will now count the number of pairs (c, d) satisfying conditions (6.10) and (6.11).
The number of pairs (c, d) such that (6.10) holds is exactly the number of possible c's,
since for every c there is exactly one d such that (6.10) holds. Unless gcd(p − 1, q) is large,
the number of c's for which (6.11) holds is approximately q/6, and even if it is large,
this number is at least q/12. Thus, there are at least q/12 pairs (c, d) satisfying both
conditions. Multiplying by p − 1, which is the number of possible y's, gives approximately
pq/12 good states |c, d, y⟩. Combining this calculation with the lower bound 1/(20q²) on
the probability of observing each good state gives us that the probability of observing
some good state is at least p/(240q), or at least 1/480 (since q < 2p). Note that each
good c has a probability of at least (p − 1)/(20q²) ≥ 1/(40q) of being observed, since
there are p − 1 values of y and one value of d with which c can make a good state |c, d, y⟩.
We now want to recover r from a pair (c, d) such that

|d/q + r(c(p − 1) − {c(p − 1)}_q)/((p − 1)q)| ≤ 1/(2q)  (mod 1),   (6.18)

where this equation was obtained from condition (6.10) by dividing by q. The first thing
to notice is that the multiplier on r is a fraction with denominator p − 1, since q evenly
divides c(p − 1) − {c(p − 1)}_q. Thus, we need only round d/q off to the nearest multiple
of 1/(p − 1) and divide (mod p − 1) by the integer

c′ = (c(p − 1) − {c(p − 1)}_q)/q   (6.19)

to find a candidate r. To show that the quantum calculation need only be repeated a
polynomial number of times to find the correct r requires only a few more details. The
problem is that we cannot divide by a number c′ which is not relatively prime to p − 1.
For the discrete log algorithm, we do not know that all possible values of c′ are
generated with reasonable likelihood; we only know this about one-twelfth of them.
This additional difficulty makes the next step harder than the corresponding step in the
algorithm for factoring. If we knew the remainder of r modulo all prime powers dividing
p − 1, we could use the Chinese remainder theorem to recover r in polynomial time. We
will only be able to prove that we can find this remainder for primes larger than 18, but
with a little extra work we will still be able to recover r.
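The rounding-and-division recovery described above can be walked through on a hand-constructed toy instance. The numbers below (p = 11, g = 2, r = 7, q = 16, and the pair (c, d) = (4, 10)) are hypothetical choices satisfying the "good" condition (6.10), not taken from the paper; since c′ need not be invertible mod p − 1, the sketch simply tries the few candidate residues and checks them against g^r ≡ x.

```python
def signed_residue(z, q):
    # {z}_q with -q/2 < {z}_q <= q/2
    s = z % q
    return s - q if s > q / 2 else s

# hypothetical toy instance: p = 11, g = 2 (a generator mod 11), r = 7,
# so x = 2^7 mod 11 = 7; q = 16 is the power of 2 with p < q < 2p, and
# (c, d) = (4, 10) is a hand-constructed pair satisfying condition (6.10)
p, g, r_true, q = 11, 2, 7, 16
x = pow(g, r_true, p)
c, d = 4, 10

# the integer c' of equation (6.19): q divides c(p-1) - {c(p-1)}_q exactly
c_prime = (c * (p - 1) - signed_residue(c * (p - 1), q)) // q
# round d/q to the nearest multiple of 1/(p-1): d/q ~ d'/(p-1)
d_prime = round(d * (p - 1) / q)
# condition (6.18) now reads d' + r c' = 0 (mod p-1); try the candidate
# residues and keep the one consistent with g^r = x (mod p)
for cand in range(p - 1):
    if (d_prime + cand * c_prime) % (p - 1) == 0 and pow(g, cand, p) == x:
        print(cand)  # -> 7
        break
```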
Recall that each good (c, d) pair is generated with probability at least 1/(20q²), and
that at least a twelfth of the possible c's are in a good (c, d) pair. From equation (6.19),
it follows that these c's are mapped from c/q to c′/(p − 1) by rounding c/q to the nearest
integral multiple of 1/(p − 1). Further, the good c's are exactly those in which c/q is
close to c′/(p − 1). Thus, each good c corresponds with exactly one c′. We would like
to show that for any prime power p_i^{α_i} dividing p − 1, a random good c′ is unlikely to
contain p_i^{α_i}. If we are willing to accept a large constant for our algorithm, we can just
ignore the prime powers under 18; if we know r modulo all prime powers over 18, we can
try all possible residues for primes under 18 with only a (large) constant factor increase
in running time. Because at least one twelfth of the c's were in a good (c, d) pair, at
least one twelfth of the c′'s are good. Thus, for a prime power p_i^{α_i}, a random good c′ is
divisible by p_i^{α_i} with probability at most 12/p_i^{α_i}. If we have t good c′'s, the probability
of having a prime power over 18 that divides all of them is therefore at most

Σ_{p_i^{α_i} | (p−1), p_i^{α_i} > 18} (12/p_i^{α_i})^t,   (6.20)
where a | b means that a evenly divides b, so the sum is over all prime powers greater
than 18 that divide p − 1. Each term of this sum is at most (12/19)^t, so the sum
goes down by at least a factor of 2/3 for each further increase of t by 1; thus for some
constant t it is less than 1/2.
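This tail sum can be checked numerically. The sketch below bounds it by the sum over all integers above 18 (which only makes it larger than the sum over prime powers dividing p − 1), truncating the tail at 10^5.

```python
# s(t) = sum over all integers n > 18 of (12/n)^t, an upper bound on the
# sum over prime powers > 18 dividing p - 1.  Each term shrinks by a
# factor 12/n < 2/3 when t increases by 1, and for a small constant t
# the whole sum drops below 1/2.  (Tail truncated at 10^5.)
def s(t, N=10**5):
    return sum((12 / n) ** t for n in range(19, N))

values = [s(t) for t in range(2, 8)]     # t = 2, 3, ..., 7
print([round(v, 3) for v in values])
assert values[4] < 0.5                   # t = 6 already suffices
assert all(b < (2 / 3) * a for a, b in zip(values, values[1:]))
```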
Recall that each good c′ is obtained with probability at least 1/(40q) from any
experiment. Since there are q/12 good c′'s, after 480t experiments, we are likely to
obtain a sample of t good c′'s chosen equally likely from all good c′'s. Thus, we will be
able to find a set of c′'s such that all prime powers p_i^{α_i} larger than 18 dividing p − 1
are relatively prime to at least one of these c′'s. To obtain a polynomial time algorithm, all one need
do is try all possible sets of c′'s of size t; in practice, one would use an algorithm to
find sets of c′'s with large common factors. This set gives the residue of r for all primes
larger than 18. For each prime p_i less than 18, we have at most 18 possibilities for the
residue modulo p_i^{α_i}, where α_i is the exponent on the prime p_i in the prime factorization of
p − 1. We can thus try all possibilities for residues modulo powers of primes less than 18:
for each possibility we can calculate the corresponding r using the Chinese remainder
theorem and then check to see whether it is the desired discrete logarithm.
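The final Chinese-remainder combination might look as follows; the residues used (r ≡ 2 (mod 5) from the large-prime part, with trial residues mod 2, for a hypothetical instance p = 11, g = 2, x = 7) are illustrative only.

```python
def crt(r1, m1, r2, m2):
    """Combine r = r1 (mod m1) and r = r2 (mod m2) for coprime m1, m2,
    via extended Euclid; returns the unique residue mod m1*m2."""
    a, b, u0, u1 = m1, m2, 1, 0
    while b:                      # extended gcd: u0*m1 = 1 (mod m2) at the end
        qt, a, b = a // b, b, a % b
        u0, u1 = u1, u0 - qt * u1
    return (r1 + (r2 - r1) * u0 % m2 * m1) % (m1 * m2)

# hypothetical residues: the "large prime" part gives r = 2 (mod 5);
# trying both residues mod 2 and checking g^r = x identifies r
p, g, x = 11, 2, 7
for r2 in range(2):
    cand = crt(2, 5, r2, 2)
    if pow(g, cand, p) == x:
        print(cand)  # -> 7
        break
```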
If one were to actually program this algorithm there are many ways in which the
efficiency could be increased over the efficiency shown in this paper. For example, the
estimate for the number of good c′'s is likely too low, especially since weaker conditions
than (6.10) and (6.11) should suffice. This means that the number of times the experiment
need be run could be reduced. It also seems improbable that the distribution of
bad values of c′ would have any relationship to primes under 18; if this is true, we need
not treat small prime powers separately.
This algorithm does not use very many properties of Z_p, so we can use the same
algorithm to find discrete logarithms over other fields such as Z_{p^α}, as long as the field
has a cyclic multiplicative group. All we need is that we know the order of the generator,
and that we can multiply and take inverses of elements in polynomial time. The order
of the generator could in fact be computed using the quantum order-finding algorithm
given in §5 of this paper. Boneh and Lipton [1995] have generalized the algorithm so as
to be able to find discrete logarithms when the group is abelian but not cyclic.
7 Comments and open problems
It is currently believed that the most difficult aspect of building an actual quantum
computer will be dealing with the problems of imprecision and decoherence. It was
shown by Bennett et al. [1994] that the quantum gates need only have precision O(1/t)
in order to have a reasonable probability of completing t steps of quantum computation;
that is, there is a c such that if the amplitudes in the unitary matrices representing the
quantum gates are all perturbed by at most c=t, the quantum computer will still have a
reasonable chance of producing the desired output. Similarly, the decoherence needs to
be only polynomially small in t in order to have a reasonable probability of completing t
steps of computation successfully. This holds not only for the simple model of decoherence
where each bit has a fixed probability of decohering at each time step, but also for
more complicated models of decoherence which are derived from fundamental quantum
mechanical considerations [Unruh 1995, Palma et al. 1995, Chuang et al. 1995]. However,
building quantum computers with high enough precision and low enough decoherence
to accurately perform long computations may present formidable difficulties to
experimental physicists. In classical computers, error probabilities can be reduced not
only through hardware but also through software, by the use of redundancy and error-correcting
codes. The most obvious method of using redundancy in quantum computers
is ruled out by the theorem that quantum bits cannot be cloned [Peres 1993, §9-4],
but this argument does not rule out more complicated ways of reducing inaccuracy or
decoherence using software. In fact, some progress in the direction of reducing inaccuracy
[Berthiaume et al. 1994] and decoherence [Shor 1995] has already been made. The
result of Bennett et al. [1995] that quantum bits can be faithfully transmitted over a
noisy quantum channel gives further hope that quantum computations can similarly be
faithfully carried out using noisy quantum bits and noisy quantum gates.
Discrete logarithms and factoring are not in themselves widely useful problems. They
have only become useful because they have been found to be crucial for public-key cryp-
tography, and this application is in turn possible only because they have been presumed
to be difficult. This is also true of the generalizations of Boneh and Lipton [1995] of
these algorithms. If the only uses of quantum computation remain discrete logarithms
and factoring, it will likely become a special-purpose technique whose only raison d'-etre
is to thwart public key cryptosystems. However, there may be other hard problems
which could be solved asymptotically faster with quantum computers. In particular,
of interesting problems not known to be NP-complete, the problem of finding a short
vector in a lattice [Adleman 1994, Adleman and McCurley 1995] seems as if it might
potentially be amenable to solution by a quantum computer.
In the history of computer science, however, most important problems have turned
out to be either polynomial-time or NP-complete. Thus quantum computers will likely
not become widely useful unless they can solve NP-complete problems. Solving NP-
complete problems efficiently is a Holy Grail of theoretical computer science which very
few people expect to be possible on a classical computer. Finding polynomial-time
algorithms for solving these problems on a quantum computer would be a momentous
discovery. There are some weak indications that quantum computers are not powerful
enough to solve NP-complete problems [Bennett et al. 1994], but I do not believe that
this potentiality should be ruled out as yet.
Acknowledgements
I would like to thank Jeff Lagarias for finding and fixing a critical error in the first version
of the discrete log algorithm. I would also like to thank him, David Applegate, Charles
Bennett, Gilles Brassard, Andrew Odlyzko, Dan Simon, Bob Solovay, Umesh Vazirani,
and correspondents too numerous to list, for productive discussions, for corrections to
and improvements of early drafts of this paper, and for pointers to the literature.
--R
Algorithmic number theory-The complexity contribution
Open problems in number-theoretic complexity II
The computer as a physical system: A microscopic quantum mechanical Hamiltonian model of computers as represented by Turing machines
Logical reversibility of computation
Time/space trade-offs for reversible computation
Strengths and weaknesses of quantum computing
Purification of noisy entanglement
Quantum complexity theory
The stabilisation of quantum computations
Can quantum computers have simple Hamiltonians
Quantum cryptanalysis of hidden linear functions
New lower bound techniques for robot motion planning problems
Quantum computers
A simple quantum computer
An unsolvable problem of elementary number theory
Quantum computations with cold trapped ions
A note on computing Fourier transforms by quantum programs
An approximate Fourier transform useful in quantum factoring
Quantum theory
Quantum computational networks
Universality of quantum computation
Rapid solution of problems by quantum computation
Shor's quantum algorithm for factorising numbers
Simulating physics with computers
Conservative logic
Discrete logarithms in GF(p) using the number field sieve
An Introduction to the Theory of Numbers
On the power of multiplication in random access machines
Multiplication of multidigit numbers on automata
The Art of Computer Programming
personal communication.
Machines de Turing réversibles.
A note on Bennett's time-space tradeoff for reversible computation
A potentially realizable quantum computer
Envisioning a quantum supercomputer
Almost any quantum logic gate is universal
Quantum computation
Parallel quantum computation
Riemann's hypothesis and tests for primality
personal communication.
Quantum computers and dissipation
Finite combinatory processes.
A method of obtaining digital signatures and public-key cryptosystems
Digital simulation of analog computation and Church's thesis
Algorithms for quantum computation: Discrete logarithms and factoring
Scheme for reducing decoherence in quantum memory
On the power of quantum computation
Realizable universal quantum logic gates
personal communication.
Two non-standard paradigms for computation: Analog machines and cellular automata
Structural basis of multistationary quantum systems II: Effective few-particle dynamics
Reversible computing
On computable numbers
Maintaining coherence in quantum computers
The complexity of analog computation
Quantum circuit complexity
Lance Fortnow, One complexity theorist's view of quantum computing, Theoretical Computer Science, v.292 n.3, p.597-610, 31 January
Reihaneh Safavi-Naini , Shuhong Wang , Yvo Desmedt, Unconditionally secure ring authentication, Proceedings of the 2nd ACM symposium on Information, computer and communications security, March 20-22, 2007, Singapore
Alexei Kitaev , John Watrous, Parallelization, amplification, and exponential time simulation of quantum interactive proof systems, Proceedings of the thirty-second annual ACM symposium on Theory of computing, p.608-617, May 21-23, 2000, Portland, Oregon, United States
Andris Ambainis, Polynomial degree vs. quantum query complexity, Journal of Computer and System Sciences, v.72 n.2, p.220-238, March 2006
John Watrous, PSPACE has constant-round quantum interactive proof systems, Theoretical Computer Science, v.292 n.3, p.575-588, 31 January
Marcello Frixione, Tractable Competence, Minds and Machines, v.11 n.3, p.379-397, August 2001
Andris Ambainis, A new protocol and lower bounds for quantum coin flipping, Journal of Computer and System Sciences, v.68 n.2, p.398-416, March 2004
H. Woniakowski, The Quantum Setting with Randomized Queries for Continuous Problems, Quantum Information Processing, v.5 n.2, p.83-130, April 2006
Dmitry Gavinsky , Julia Kempe , Iordanis Kerenidis , Ran Raz , Ronald de Wolf, Exponential separations for one-way quantum communication complexity, with applications to cryptography, Proceedings of the thirty-ninth annual ACM symposium on Theory of computing, June 11-13, 2007, San Diego, California, USA
Hirotada Kobayashi , Keiji Matsumoto, Quantum multi-prover interactive proof systems with limited prior entanglement, Journal of Computer and System Sciences, v.66 n.3, p.429-450, May
V. Arvind , Piyush P. Kurur, Graph isomorphism is in SPP, Information and Computation, v.204 n.5, p.835-852, May 2006
van Dam , Sean Hallgren , Lawrence Ip, Quantum algorithms for some hidden shift problems, Proceedings of the fourteenth annual ACM-SIAM symposium on Discrete algorithms, January 12-14, 2003, Baltimore, Maryland
Nemanja Isailovic , Mark Whitney , Yatish Patel , John Kubiatowicz , Dean Copsey , Frederic T. Chong , Isaac L. Chuang , Mark Oskin, Datapath and control for quantum wires, ACM Transactions on Architecture and Code Optimization (TACO), v.1 n.1, p.34-61, March 2004
Dorit Aharonov , Vaughan Jones , Zeph Landau, A polynomial quantum algorithm for approximating the Jones polynomial, Proceedings of the thirty-eighth annual ACM symposium on Theory of computing, May 21-23, 2006, Seattle, WA, USA
van Dam , Frdic Magniez , Michele Mosca , Miklos Santha, Self-testing of universal and fault-tolerant sets of quantum gates, Proceedings of the thirty-second annual ACM symposium on Theory of computing, p.688-696, May 21-23, 2000, Portland, Oregon, United States
Tien D. Kieu, Quantum Hypercomputation, Minds and Machines, v.12 n.4, p.541-561, November 2002
Hartmut Klauck, On quantum and probabilistic communication: Las Vegas and one-way protocols, Proceedings of the thirty-second annual ACM symposium on Theory of computing, p.644-651, May 21-23, 2000, Portland, Oregon, United States
Martin Sauerhoff , Detlef Sieling, Quantum branching programs and space-bounded nonuniform quantum complexity, Theoretical Computer Science, v.334 n.1-3, p.177-225, 11 April 2005
A. Ambainis, Quantum search algorithms, ACM SIGACT News, v.35 n.2, June 2004
Tatjana Curcic , Mark E. Filipkowski , Almadena Chtchelkanova , Philip A. D'Ambrosio , Stuart A. Wolf , Michael Foster , Douglas Cochran, Quantum networks: from quantum cryptography to quantum architecture, ACM SIGCOMM Computer Communication Review, v.34 n.5, October 2004
John Watrous, Quantum algorithms for solvable groups, Proceedings of the thirty-third annual ACM symposium on Theory of computing, p.60-67, July 2001, Hersonissos, Greece
Scott Aaronson, Multilinear formulas and skepticism of quantum computing, Proceedings of the thirty-sixth annual ACM symposium on Theory of computing, June 13-16, 2004, Chicago, IL, USA
Yaoyun Shi, Tensor norms and the classical communication complexity of nonlocal quantum measurement, Proceedings of the thirty-seventh annual ACM symposium on Theory of computing, May 22-24, 2005, Baltimore, MD, USA
Sean Hallgren , Cristopher Moore , Martin Rtteler , Alexander Russell , Pranab Sen, Limitations of quantum coset states for graph isomorphism, Proceedings of the thirty-eighth annual ACM symposium on Theory of computing, May 21-23, 2006, Seattle, WA, USA
Dorit Aharonov , Amnon Ta-Shma, Adiabatic quantum state generation and statistical zero knowledge, Proceedings of the thirty-fifth annual ACM symposium on Theory of computing, June 09-11, 2003, San Diego, CA, USA
Mark Oskin , Frederic T. Chong , Isaac L. Chuang , John Kubiatowicz, Building quantum wires: the long and the short of it, ACM SIGARCH Computer Architecture News, v.31 n.2, May
Holger Spakowski , Mayur Thakur , Rahul Tripathi, Quantum and classical complexity classes: separations, collapses, and closure properties, Information and Computation, v.200 n.1, p.1-34, 1 July 2005
An introduction to quantum computing for non-physicists, ACM Computing Surveys (CSUR), v.32 n.3, p.300-335, Sept. 2000
Robert Beals , Harry Buhrman , Richard Cleve , Michele Mosca , Ronald de Wolf, Quantum lower bounds by polynomials, Journal of the ACM (JACM), v.48 n.4, p.778-797, July 2001
R. Srikanth, A Computational Model for Quantum Measurement, Quantum Information Processing, v.2 n.3, p.153-199, June
Dagmar Bruss , Gbor Erdlyi , Tim Meyer , Tobias Riege , Jrg Rothe, Quantum cryptography: A survey, ACM Computing Surveys (CSUR), v.39 n.2, p.6-es, 2007
Scott Aaronson, Guest Column: NP-complete problems and physical reality, ACM SIGACT News, v.36 n.1, March 2005
Andrew Odlyzko, Discrete Logarithms: The Past and the Future, Designs, Codes and Cryptography, v.19 n.2-3, p.129-145, March 2000
David S. Johnson, The NP-completeness column, ACM Transactions on Algorithms (TALG), v.1 n.1, p.160-176, July 2005
Jrg Rothe, Some facets of complexity theory and cryptography: A five-lecture tutorial, ACM Computing Surveys (CSUR), v.34 n.4, p.504-549, December 2002
Rodney Van Meter , Mark Oskin, Architectural implications of quantum computing technologies, ACM Journal on Emerging Technologies in Computing Systems (JETC), v.2 n.1, p.31-63, January 2006 | spin systems;quantum computers;algorithmic number theory;church's thesis;fourier transforms;foundations of quantum mechanics;prime factorization;discrete logarithms |
Strengths and Weaknesses of Quantum Computing

Abstract. Recently a great deal of attention has been focused on quantum computation following a sequence of results [Bernstein and Vazirani, in Proc. 25th Annual ACM Symposium Theory Comput., 1993, pp. 11-20, SIAM J. Comput., 26 (1997), pp. 1277-1339], [Simon, in Proc. 35th Annual IEEE Symposium Foundations Comput. Sci., 1994, pp. 116-123, SIAM J. Comput., 26 (1997), pp. 1340-1349], [Shor, in Proc. 35th Annual IEEE Symposium Foundations Comput. Sci., 1994, pp. 124-134] suggesting that quantum computers are more powerful than classical probabilistic computers. Following Shor's result that factoring and the extraction of discrete logarithms are both solvable in quantum polynomial time, it is natural to ask whether all of NP can be efficiently solved in quantum polynomial time. In this paper, we address this question by proving that relative to an oracle chosen uniformly at random, with probability 1, the class NP cannot be solved on a quantum Turing machine (QTM) in time o(2^{n/2}). We also show that relative to a permutation oracle chosen uniformly at random, with probability 1, the class NP ∩ co-NP cannot be solved on a QTM in time o(2^{n/3}). The former bound is tight since recent work of Grover [in Proc. 28th Annual ACM Symposium Theory Comput., 1996] shows how to accept the class NP relative to any oracle on a quantum computer in time O(2^{n/2}).

1 Introduction
Quantum computational complexity is an exciting new area that touches upon the foundations
of both theoretical computer science and quantum physics. In the early eighties,
Feynman [12] pointed out that straightforward simulations of quantum mechanics on a classical
computer appear to require a simulation overhead that is exponential in the size of
the system and the simulated time; he asked whether this is inherent, and whether it is
possible to design a universal quantum computer. Deutsch [9] defined a general model of
quantum computation - the quantum Turing machine. Bernstein and Vazirani [4] proved
that there is an efficient universal quantum Turing machine. Yao [17] extended this by
proving that quantum circuits (introduced by Deutsch [10]) are polynomially equivalent to
quantum Turing machines.
The computational power of quantum Turing machines (QTMs) has been explored by
several researchers. Early work by Deutsch and Jozsa [11] showed how to exploit some
inherently quantum mechanical features of QTMs. Their results, in conjunction with subsequent
results by Berthiaume and Brassard [5, 6], established the existence of oracles under
which there are computational problems that QTMs can solve in polynomial time with certainty, whereas if we require a classical probabilistic Turing machine to produce the correct
answer with certainty, then it must take exponential time on some inputs. On the other
hand, these computational problems are in BPP 1 relative to the same oracle, and therefore
efficiently solvable in the classical sense. The quantum analogue of the class BPP is the class BQP 2 [5].

1 BPP is the class of decision problems (languages) that can be solved in polynomial time by probabilistic Turing machines with error probability bounded by 1/3 (for all inputs). Using standard boosting techniques, the error probability can be made exponentially small by iterating the algorithm k times and returning the majority answer.

Bernstein and Vazirani [4] proved that BPP ⊆ BQP ⊆ PSPACE, thus establishing that it will not be possible to conclusively prove that BQP ≠ BPP without resolving the major open problem of whether P = PSPACE. They also gave the first evidence that BQP ≠ BPP (polynomial-time quantum Turing machines are more powerful
than polynomial-time probabilistic Turing machines), by proving the existence of an oracle
relative to which there are problems in BQP that cannot be solved with small error
probability by probabilistic machines restricted to running in n^{o(log n)} steps. Since BPP is
regarded as the class of all "efficiently computable" languages (computational problems),
this provided evidence that quantum computers are inherently more powerful than classical
computers in a model-independent way. Simon [16] strengthened this evidence by proving
the existence of an oracle relative to which BQP cannot even be simulated by probabilistic
machines allowed to run for 2^{n/2} steps. In addition, Simon's paper also introduced an
important new technique which was one of the ingredients in a remarkable result proved
subsequently by Shor [15]. Shor gave polynomial-time quantum algorithms for the factoring
and discrete logarithm problems. These two problems have been well-studied, and
their presumed intractability forms the basis of much of modern cryptography. In view of
these results, it is natural to ask whether NP ⊆ BQP; i.e., can quantum computers solve
NP-complete problems in polynomial time? 3
In this paper, we address this question by proving that relative to an oracle chosen uniformly at random [3], with probability 1, the class NP cannot be solved on a quantum Turing machine in time o(2^{n/2}). We also show that relative to a permutation oracle chosen uniformly at random, with probability 1, the class NP ∩ co-NP cannot be solved on a quantum Turing machine in time o(2^{n/3}). The former bound is tight since recent work of Grover [13] shows how to accept the class NP relative to any oracle on a quantum computer in time O(2^{n/2}). See [7] for a detailed analysis of Grover's algorithm.

2 BQP is the class of decision problems (languages) that can be solved in polynomial time by quantum Turing machines with error probability bounded by 1/3 (for all inputs); see [4] for a formal definition. We prove in Section 4 of this paper that, as is the case with BPP, the error probability of BQP machines can be made exponentially small.

3 Actually it is not even clear whether BQP ⊆ BPP^NP; i.e., it is unclear whether nondeterminism together with randomness is sufficient to simulate quantum Turing machines. In fact, Bernstein and Vazirani's [4] result is stronger than stated above. They actually proved that relative to an oracle, the recursive Fourier sampling problem can be solved in BQP, but cannot even be solved by Arthur-Merlin games [1] with a time bound of n^{o(log n)} (thus giving evidence that nondeterminism on top of probabilism does not help). They conjecture that recursive Fourier sampling cannot even be solved in the unrelativized polynomial-time hierarchy.
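The boosting claim in footnotes 1 and 2 (iterate k times and return the majority answer) is easy to check exactly. The following sketch is illustrative only; the base error 1/3 and the repetition counts are arbitrary choices, not from the paper:

```python
from math import comb

def majority_error(p: float, k: int) -> float:
    """Probability that the majority of k independent trials is wrong,
    when each trial errs independently with probability p."""
    # The majority errs iff more than half of the k trials err.
    return sum(comb(k, j) * p**j * (1 - p)**(k - j)
               for j in range(k // 2 + 1, k + 1))

if __name__ == "__main__":
    p = 1 / 3  # error bound for a single bounded-error run
    for k in (1, 11, 51, 101):
        print(k, majority_error(p, k))
```

The computed error decays exponentially in k, as a Chernoff bound predicts.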
What is the relevance of these oracle results? We should emphasize that they do not
rule out the possibility that NP ⊆ BQP. What these results do establish is that there is
no black-box approach to solving NP-complete problems by using some uniquely quantum-mechanical
features of QTMs. That this was a real possibility is clear from Grover's [13]
result, which gives a black-box approach to solving NP-complete problems in the square root of the time required classically.
One way to think of an oracle is as a special subroutine call whose invocation only costs
unit time. In the context of QTMs, subroutine calls pose a special problem that has no
classical counterpart. The problem is that the subroutine must not leave around any bits
beyond its computed answer, because otherwise computational paths with different residual
information do not interfere. This is easily achieved for deterministic subroutines since any
classical deterministic computation can be carried out reversibly so that only the input and
the answer remain. However, this leaves open the more general question of whether a BQP
machine can be used as a subroutine. Our final result in this paper is to show how any
BQP machine can be modified into a tidy BQP machine whose final superposition consists
almost entirely of a tape configuration containing just the input and the single bit answer.
Since these tidy BQP machines can be safely used as subroutines, this allows us to show that BQP^BQP = BQP. The result also justifies the definition of oracle quantum machines that we now give.
2 Oracle Quantum Turing Machines
In this section and the next, we shall assume without loss of generality that the Turing machine alphabet (for each track or tape) is {0, 1, #}, where "#" denotes the blank symbol. Initially all tapes are blank except that the input tape contains the actual input surrounded by blanks. We shall use Σ to denote {0, 1}.
In the classical setting, an oracle may be described informally as a device for evaluating some Boolean function A : Σ* → Σ on arbitrary arguments, at unit cost per evaluation. This allows us to formulate questions such as "if A were efficiently computable by a Turing machine, which other functions (or languages) could be efficiently computed by Turing machines?". In the quantum setting, an equivalent question can be asked, provided we define oracle quantum Turing machines appropriately (which we do in this section) and provided such machines can be composed (which we show in Section 4 of this paper).
An oracle QTM has a special query tape (or track), all of whose cells are blank except for a single block of non-blank cells. In a well-formed oracle QTM, the Turing machine rules may allow this region to grow and shrink, but prevent it from fragmenting into non-contiguous blocks. 4 Oracle QTMs have two distinguished internal states: a pre-query state q_q and a post-query state q_a. A query is executed whenever the machine enters the pre-query state. If the query string is empty, a no-op occurs, and the machine passes directly to the post-query state with no change. If the query string is nonempty, it can be written in the form |x ∘ b⟩, where x ∈ Σ*, b ∈ Σ, and "∘" denotes concatenation. In that case, the result of a call on oracle A is that internal control passes to the post-query state while the contents of the query tape change from |x ∘ b⟩ to |x ∘ (b ⊕ A(x))⟩, where "⊕" denotes the exclusive-or (addition modulo 2). Except for the query tape and internal control, other parts of the oracle QTM do not change during the query. If the target bit |b⟩ is supplied in initial state |0⟩, then its final state will be |A(x)⟩, just as in a classical oracle machine. Conversely, if the target bit is already in state |A(x)⟩, calling the oracle will reset it to |0⟩. This ability to "uncompute" will often prove essential to allow proper interference among computation paths to take place. Using this fact, it is also easy to see that the above definition of oracle Turing machines yields unitary evolutions if we restrict ourselves to machines that are well-formed in other respects, in particular evolving unitarily as they enter the pre-query state and leave the post-query state.

4 This restriction can be made without loss of generality and it can be verified syntactically by allowing only machines that make sure they do not break the rule before writing on the query tape.
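Since the query rule |x ∘ b⟩ ↦ |x ∘ (b ⊕ A(x))⟩ merely permutes basis states, both its reversibility and the "uncompute" property can be checked mechanically. A minimal sketch (the dictionary-based state representation and the toy oracle are our own illustration, not the paper's):

```python
def oracle_call(state, A):
    """Apply |x, b> -> |x, b XOR A(x)> to each basis state of a superposition.
    `state` maps (x, b) pairs to amplitudes; A maps strings to 0/1."""
    return {(x, b ^ A[x]): amp for (x, b), amp in state.items()}

if __name__ == "__main__":
    A = {"00": 0, "01": 1, "10": 0, "11": 0}  # toy oracle, nonempty at y = "01"
    # Uniform superposition of queries with target bit |0>.
    state = {(x, 0): 0.5 for x in A}
    queried = oracle_call(state, A)
    # The target bit now carries A(x): the entangled state sum_x 0.5 |x, A(x)>.
    assert queried[("01", 1)] == 0.5 and ("01", 0) not in queried
    # A second call uncomputes the answer bit exactly.
    assert oracle_call(queried, A) == state
```

Because b ↦ b ⊕ A(x) is its own inverse for each x, the map is a permutation of basis states and calling it twice is the identity.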
The power of quantum computers comes from their ability to follow a coherent superposition of computation paths. Similarly, oracle quantum machines derive great power from the ability to perform superpositions of queries. For example, oracle A might be called when the query tape is in state |φ ∘ 0⟩ = Σ_x α_x |x ∘ 0⟩, where the α_x are complex coefficients, corresponding to an arbitrary superposition of queries with a constant |0⟩ in the target bit. In this case, after the query, the query string will be left in the entangled state Σ_x α_x |x ∘ A(x)⟩. It is also useful to be able to put the target bit b into a superposition. For example, the conditional phase inversion used in Grover's algorithm can be achieved by performing queries with the target bit b in the nonclassical superposition β = (|0⟩ − |1⟩)/√2. It can readily be verified that an oracle call with the query tape in state |x ∘ β⟩ leaves the entire machine state, including the query tape, unchanged if A(x) = 0, and leaves the entire state unchanged except for a phase factor of −1 if A(x) = 1.
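The phase-inversion behavior just described can be verified directly on amplitudes. In this sketch (illustrative; the choice n = 3 and the marked string y = "101" are arbitrary) the target bit is prepared in (|0⟩ − |1⟩)/√2, and the oracle call flips the sign of exactly the terms with A(x) = 1:

```python
from math import sqrt

def oracle_call(state, A):
    """|x, b> -> |x, b XOR A(x)> applied linearly to a superposition."""
    out = {}
    for (x, b), amp in state.items():
        key = (x, b ^ A[x])
        out[key] = out.get(key, 0) + amp
    return out

if __name__ == "__main__":
    n = 3
    xs = [format(i, "03b") for i in range(2 ** n)]
    y = "101"  # marked string (arbitrary choice)
    A = {x: int(x == y) for x in xs}
    # Query superposition: uniform over x, target bit in (|0> - |1>)/sqrt(2).
    s = 1 / sqrt(2 ** n)
    state = {}
    for x in xs:
        state[(x, 0)] = s / sqrt(2)
        state[(x, 1)] = -s / sqrt(2)
    after = oracle_call(state, A)
    # Unmarked strings are untouched; the marked string picks up a -1 phase.
    assert after[("000", 0)] == state[("000", 0)]
    assert after[(y, 0)] == -state[(y, 0)] and after[(y, 1)] == -state[(y, 1)]
```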
It is often convenient to think of a Boolean oracle as defining a length-preserving function on Σ*. This is easily accomplished by interpreting the oracle answer on the pair (x, i) as the i-th bit of the function value. The pair (x, i) is encoded as a binary string using any standard pairing function. A permutation oracle is an oracle which, when interpreted as a length-preserving function, acts for each n ≥ 0 as a permutation on Σ^n. Henceforth, when no confusion may arise, we shall use A(x) to denote the length-preserving function associated with oracle A rather than the Boolean function that gives rise to it.
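The text leaves the pairing function unspecified; any injective encoding works. One standard choice (ours, for illustration, not the paper's) doubles the bits of x and uses "01" as a self-delimiting separator:

```python
def pair(x: str, i: int) -> str:
    """Injectively encode a binary string x and an index i as one binary string:
    double each bit of x, write '01' as a separator, then i in binary."""
    return "".join(2 * c for c in x) + "01" + format(i, "b")

def unpair(s: str):
    """Invert pair(): the separator is the first even-offset '01'."""
    j = 0
    while s[j] == s[j + 1]:  # doubled bits agree; the separator does not
        j += 2
    return s[:j:2], int(s[j + 2:], 2)
```

Because decoding is unambiguous, distinct pairs (x, i) always map to distinct strings.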
Let us define BQTime(T(n))^A as the set of languages accepted with probability at least 2/3 by some oracle QTM M^A whose running time is bounded by T(n). This bound on the running time applies to each individual input, not just on the average. Notice that whether or not M^A is a BQP machine might depend upon the oracle A; thus M^A might be a BQP machine while M^B might not be one.
Note: The above definition of a quantum oracle for an arbitrary Boolean function will
suffice for the purposes of the present paper, but the ability of quantum computers to perform
general unitary transformations suggests a broader definition, which may be useful in
other contexts. For example, oracles that perform more general, non-Boolean unitary operations
have been considered in computational learning theory [8] and for hiding information
against classical queries [14].
Most broadly, a quantum oracle may be defined as a device that, when called, applies a fixed unitary transformation U to the current contents |z⟩ of the query tape, replacing it by U|z⟩. Such an oracle U must be defined on a countably infinite-dimensional Hilbert space, such as that spanned by the binary basis vectors |ε⟩, |0⟩, |1⟩, |00⟩, |01⟩, |10⟩, |11⟩, . . ., where ε denotes the empty string. Clearly, the use of such general unitary oracles still yields unitary evolution for well-formed oracle Turing machines. Naturally, these oracles can map inputs onto superpositions of outputs, and vice versa, and they need not even be length-preserving. However, in order to obey the dictum that a single machine cycle ought not to make infinite changes in the tape, one might require that U|z⟩ have amplitude zero on all but finitely many basis vectors. (One could even insist on a uniform and effective version of the above restriction.) Another natural restriction one may wish to impose upon U is that it be an involution, U² = I, so that the effect of an oracle call can be undone by a further call on the same oracle. Again this may be crucial to allow proper interference to take place. Note that the special case of unitary transformation considered in this paper, which corresponds to evaluating a classical Boolean function, is an involution.
3 On the Difficulty of Simulating Nondeterminism on QTMs
The computational power of QTMs lies in their ability to maintain and compute with exponentially large superpositions. It is tempting to try to use this "exponential parallelism" to simulate non-determinism. However, there are inherent constraints on the scope of this parallelism, which are imposed by the formalism of quantum mechanics. 5 In this section, we explore some of these constraints.

5 There is a superficial similarity between this exponential parallelism in quantum computation and the fact that probabilistic computations yield probability distributions over exponentially large domains. The difference is that in the probabilistic case, the computational path is chosen by making a sequence of random choices, one for each step. In the quantum-mechanical case, it is possible for several computational paths to interfere destructively, and therefore it is necessary to keep track of the entire superposition at each step to accurately simulate the system.

To see why quantum interference can speed up NP problems quadratically but not exponentially, consider the problem of distinguishing the empty oracle (A(x) = 0 for all x) from an oracle containing a single random unknown string y of known length n (i.e., A(y) = 1, but A(x) = 0 for all x ≠ y). We require that the computer never answer yes on an empty oracle, and seek to maximize its "success probability" of answering yes on a nonempty oracle. A classical computer can do no better than to query distinct n-bit strings at random, giving a success probability 1/2^n after one query and k/2^n after k queries. How can a quantum computer do
better, while respecting the rule that its overall evolution be unitary and that, in a computation with a nonempty oracle, all computation paths querying empty locations evolve exactly as they would for an empty oracle? A direct quantum analog of the classical algorithm would start in an equally-weighted superposition of 2^n computation paths, query a different string on each path, and finally collapse the superposition by asking whether the query had found the nonempty location. This yields a success probability 1/2^n, the same as the classical computer. However, this is not the best way to exploit quantum parallelism. Our goal should be to maximize the separation between the state vector |ψ_k⟩ after k interactions with an empty oracle, and the state vector |ψ_k(y)⟩ after k interactions with an oracle nonempty at an unknown location y. Starting with a uniform superposition

  |ψ_0⟩ = 2^{-n/2} Σ_x |x⟩,

it is easily seen that the separation after one query is maximized by a unitary evolution to

  |ψ_1(y)⟩ = 2^{-n/2} ( Σ_{x≠y} |x⟩ − |y⟩ ).

This is a phase inversion of the term corresponding to the nonempty location. By testing whether the post-query state agrees with |ψ_0⟩ we obtain a success probability of approximately 4/2^n, four times better than the classical value. Thus, if we are allowed only one query, quantum parallelism gives a modest improvement, but is still overwhelmingly likely to fail because the state vector after interaction with a nonempty oracle is almost the same as after interaction with an empty oracle. The only way of producing a large difference after one query would be to concentrate much of the initial superposition in the y term before the query, which cannot be done because that location is unknown.
Having achieved the maximum separation after one query, how best can that separation be increased by subsequent queries? Various strategies can be imagined, but a good one (called "inversion about the average" by Grover [13]) is to perform an oracle-independent unitary transformation so as to change the phase difference into an amplitude difference, leaving the y term with the same sign as all the other terms but a magnitude approximately threefold larger. Subsequent phase-inverting interactions with the oracle, alternating with oracle-independent phase-to-amplitude conversions, cause the distance between |ψ_0⟩ and |ψ_k(y)⟩ to grow linearly with k, approximately as 2k/√N for k ≤ √N/2, where N = 2^n. This results in a quadratic growth of the success probability, approximately as 4k²/2^n for small k. The proof of Theorem 3.5 shows that this approach is essentially optimal: no quantum algorithm can gain more than this quadratic factor in success probability compared to classical algorithms, when attempting to answer NP-type questions formulated relative to a random oracle.
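These growth rates can be reproduced by simulating the amplitudes exactly: by symmetry, only two values need to be tracked, the amplitude of the marked string and the common amplitude of all the others. A sketch (illustrative; n = 12 and the printed values of k are arbitrary choices):

```python
from math import sqrt

def grover_state(n: int, k: int):
    """Amplitudes (marked, unmarked) after k rounds of phase inversion
    plus inversion about the average, on N = 2**n items, one marked."""
    N = 2 ** n
    s = 1 / sqrt(N)
    marked = rest = s  # start in the uniform superposition
    for _ in range(k):
        marked = -marked                     # oracle: phase flip at y
        avg = (marked + (N - 1) * rest) / N  # inversion about the average:
        marked, rest = 2 * avg - marked, 2 * avg - rest  # a -> 2*avg - a
    return marked, rest

if __name__ == "__main__":
    n = 12
    N = 2 ** n
    s = 1 / sqrt(N)
    for k in (1, 2, 8):
        m, r = grover_state(n, k)
        dist = sqrt((m - s) ** 2 + (N - 1) * (r - s) ** 2)
        print(k, dist, 2 * k / sqrt(N))  # distance grows roughly as 2k/sqrt(N)
```

For small k the printed distances track 2k/√N closely, and squaring them recovers the ≈ 4k²/2^n success probability quoted above.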
3.1 Lower Bounds on Quantum Search
We will sometimes find it convenient to measure the accuracy of a simulation by calculating the Euclidean distance 6 between the target and simulation superpositions. The following theorem from [4] shows that the simulation accuracy is at most 4 times worse than this Euclidean distance.

Theorem 3.1 If two unit-length superpositions are within Euclidean distance ε, then observing the two superpositions gives samples from distributions which are within total variation distance 7 at most 4ε.
6 The Euclidean distance between Σ_x α_x|x⟩ and Σ_x β_x|x⟩ is defined as (Σ_x |α_x − β_x|²)^{1/2}.

7 The total variation distance between two distributions D and D′ is Σ_x |D(x) − D′(x)|.
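Theorem 3.1 can be sanity-checked numerically on small vectors. This is an illustrative check with arbitrarily chosen vectors, not a proof:

```python
from math import sqrt

def total_variation(p, q):
    """Total variation distance between two distributions given as lists."""
    return sum(abs(a - b) for a, b in zip(p, q))

def normalize(v):
    """Scale a vector to unit Euclidean length."""
    norm = sqrt(sum(abs(a) ** 2 for a in v))
    return [a / norm for a in v]

if __name__ == "__main__":
    phi = normalize([3, 4, 0, 1])
    psi = normalize([3.1, 3.9, 0.2, 1])  # a small perturbation of phi
    eps = sqrt(sum(abs(a - b) ** 2 for a, b in zip(phi, psi)))
    # Observing a superposition samples x with probability |amplitude|^2.
    tv = total_variation([abs(a) ** 2 for a in phi],
                         [abs(b) ** 2 for b in psi])
    assert tv <= 4 * eps
    print(eps, tv)
```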
Definition 3.2 Let |φ_i⟩ be the superposition of M^A on input x at time i. We denote by q_y(|φ_i⟩) the sum of squared magnitudes in |φ_i⟩ of configurations of M which are querying the oracle on string y. We refer to q_y(|φ_i⟩) as the query magnitude of y in |φ_i⟩.

Theorem 3.3 Let |φ_i⟩ be the superposition of M^A on input x at time i. Let ε > 0 and let F ⊆ [0, T − 1] × Σ* be a set of time-string pairs such that Σ_{(i,y)∈F} q_y(|φ_i⟩) ≤ ε²/(2T). Now suppose the answer to each query (i, y) ∈ F is modified to some arbitrary fixed a_{i,y} (these answers need not be consistent with an oracle). Let |φ′_i⟩ be the time-i superposition of M on input x with oracle A modified as stated above. Then || |φ′_T⟩ − |φ_T⟩ || ≤ ε.
Proof. Let U be the unitary time evolution operator of M^A. Let A_i denote an oracle such that if (i, y) ∈ F then A_i(y) = a_{i,y}, and A_i(y) = A(y) otherwise. Let U_i be the unitary time evolution operator of M^{A_i}. Let |φ_i⟩ be the superposition of M^A on input x at time i. We define |E_i⟩ to be the error in the i-th step caused by replacing the oracle A with A_i:

  |E_i⟩ = (U_i − U)|φ_i⟩.

So we have

  U_{T−1} · · · U_1 U_0 |φ_0⟩ = |φ_T⟩ + Σ_{i<T} U_{T−1} · · · U_{i+1} |E_i⟩.

Since all of the U_i are unitary, || U_{T−1} · · · U_{i+1} |E_i⟩ || = || |E_i⟩ ||. The sum of squared magnitudes of all of the |E_i⟩ is at most 2 Σ_{(i,y)∈F} q_y(|φ_i⟩), and therefore at most ε²/T. In the worst case, the terms U_{T−1} · · · U_{i+1} |E_i⟩ could interfere constructively; however, the squared magnitude of their sum is at most T times the sum of their squared magnitudes, and hence at most ε².
Corollary 3.4 Let A be an oracle over alphabet Σ. For y ∈ Σ*, let A_y be any oracle such that A_y(x) = A(x) for all x ≠ y. Let |φ_i⟩ be the time-i superposition of M^A on input x, and let |φ_i(y)⟩ be the time-i superposition of M^{A_y} on input x. Then for every ε > 0, there is a set S of cardinality at most 2T²/ε² such that for every y ∉ S, || |φ_T(y)⟩ − |φ_T⟩ || ≤ ε.

Proof. Since each |φ_i⟩ has unit length, Σ_y q_y(|φ_i⟩) ≤ 1 for each i, and therefore Σ_{i<T} Σ_y q_y(|φ_i⟩) ≤ T. Let S be the set of strings y such that Σ_{i<T} q_y(|φ_i⟩) ≥ ε²/(2T). Then |S| ≤ 2T²/ε². For every y ∉ S we have Σ_{i<T} q_y(|φ_i⟩) < ε²/(2T), and therefore by Theorem 3.3, || |φ_T(y)⟩ − |φ_T⟩ || ≤ ε.

Theorem 3.5 For any T(n) which is o(2^{n/2}), relative to a random oracle, with probability 1, BQTime(T(n)) does not contain NP.
Proof. Recall from Section 2 that an oracle can be thought of as a length-preserving function: this is what we mean below by A(x). Let L_A = {1^n : ∃y such that A(y) = 1^n}. Clearly, this language is contained in NP^A. Let T(n) be o(2^{n/2}). We show that for any bounded-error oracle QTM M^A running in time at most T(n), with probability 1, M^A does not accept the language L_A. The probability is taken over the choice of a random length-preserving oracle A. Then, since there are a countable number of QTMs and the intersection of a countable number of probability 1 events still has probability 1, we conclude that with probability 1, no bounded-error oracle QTM accepts L_A in time bounded by T(n).
Given a QTM M, pick n large enough so that 338 T²(n) ≤ 2^{n−1}; this is possible since T(n) is o(2^{n/2}). We will show that the probability that M gives the wrong answer on input 1^n is at least 1/8 for every way of fixing the oracle answers on inputs of length not equal to n. The probability is taken over the random choices of the oracle for inputs of length n.
Let us fix an arbitrary length-preserving function from strings of lengths other than n over alphabet Σ. Let C denote the set of oracles consistent with this arbitrary function. Let A be the set of oracles in C such that 1^n has no inverse (does not belong to L_A). If the oracle answers to length-n strings are chosen uniformly at random, then the probability that the oracle is in A is at least 1/4. This is because the probability that 1^n has no inverse is (1 − 2^{-n})^{2^n}, which is at least 1/4 (for n sufficiently large). Let B be the set of oracles in C such that 1^n has a unique inverse. As above, the probability that a randomly chosen oracle is in B is (1 − 2^{-n})^{2^n − 1}, which is at least 1/e.
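Both probabilities converge to 1/e, the first from below (hence the 1/4 bound for large n) and the second from above (hence the 1/e bound). An illustrative numerical check at modest n:

```python
from math import e

if __name__ == "__main__":
    for n in (4, 8, 12):
        N = 2 ** n
        no_inverse = (1 - 1 / N) ** N            # P[1^n has no inverse]
        unique_inverse = (1 - 1 / N) ** (N - 1)  # P[1^n has a unique inverse]
        assert no_inverse >= 1 / 4
        assert unique_inverse >= 1 / e
        print(n, no_inverse, unique_inverse)
```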
Given an oracle A ∈ A, we can modify its answer on any single input, say y, to 1^n and thereby get an oracle A_y ∈ B. We will show that for most choices of y, the acceptance probability of M^A on input 1^n is almost equal to the acceptance probability of M^{A_y} on input 1^n. On the other hand, M^A must reject 1^n and M^{A_y} must accept 1^n. Therefore M cannot accept both L_A and L_{A_y}. By working through the details more carefully, it is easy to show that M fails on input 1^n with probability at least 1/8 when the oracle is a uniformly random function on strings of length n, and is an arbitrary function on all other strings.
Let A_y be the oracle such that A_y(y) = 1^n and A_y agrees with A everywhere else. By Corollary
3.4 there is a set S of at most 338T(n)² strings such that for every y ∉ S, the difference between the
T(n)th superposition of M^{A_y} on input 1^n and that of M^A on input 1^n has norm at most 1/13. Using
Theorem 3.1 we can conclude that the difference between the acceptance probabilities of
M^{A_y} on input 1^n and M^A on input 1^n is at most 4 × 1/13 < 1/3. Since M^{A_y} should accept 1^n
with probability at least 2/3 and M^A should reject 1^n with probability at least 2/3, we
can conclude that M fails to accept either L_A or L_{A_y}.
So, each oracle A ∈ A for which M correctly decides whether 1^n ∈ L_A can, by changing
a single answer of A to 1^n, be mapped to at least 2^n − 338T(n)² different oracles
A_y ∈ B for which M fails to correctly decide whether 1^n ∈ L_{A_y}. Moreover, any particular
oracle in B is the image under this mapping of at most 2^n − 1 oracles in A, since where it
now answers 1^n, it must previously have given one of the 2^n − 1 other possible answers. Since
338T(n)² ≤ 2^{n−1} for n as chosen, the number of oracles in B for which M fails must be
at least 1/2 the number of oracles in A for which M succeeds. So, calling a the number of
oracles in A for which M fails, M must fail on at least a + (|A| − a)/2 ≥ |A|/2 oracles in
A ∪ B. Therefore M fails to correctly decide whether 1^n ∈ L_A with probability at least
(1/2)P[A] ≥ 1/8.
It is easy to conclude that M decides membership in L_A with probability 0 for a uniformly
chosen oracle A. □
Note: Theorem 3.3 and its Corollary 3.4 isolate the constraints on "quantum parallelism"
imposed by unitary evolution. The rest of the proof of the above theorem is similar in spirit
to standard techniques used to separate BPP from NP relative to a random oracle [3].
For example, these techniques can be used to show that, relative to a random oracle A,
no classical probabilistic machine can recognize L_A in time o(2^n). However, quantum machines
can recognize this language quadratically faster, in time O(2^{n/2}), using Grover's
algorithm [13]. This explains why a substantial modification of the standard technique was
required to prove the above theorem.
The next result, about NP ∩ co-NP relative to a random permutation oracle, requires a
more subtle argument; ideally we would like to apply Theorem 3.3 after asserting that the
total query magnitude with which A^{−1}(1^n) is probed is small. However, this is precisely
what we are trying to prove in the first place.
Theorem 3.6 For any T(n) which is o(2^{n/3}), relative to a random permutation oracle,
with probability 1, BQTime(T(n)) does not contain NP ∩ co-NP.
Proof. For any permutation oracle A, let L_A = {y : the first bit of A^{−1}(y) is 1}. Clearly,
this language is contained in (NP ∩ co-NP)^A. Let T = T(n). We show that for any
bounded-error oracle QTM M^A running in time at most T(n), with probability 1, M^A does
not accept the language L_A. The probability is taken over the choice of a random permutation
oracle A. Then, since there are a countable number of QTMs and the intersection
of a countable number of probability 1 events still has probability 1, we conclude that with
probability 1, no bounded-error oracle QTM accepts L_A in time bounded by T(n).
Fix such a machine M and pick n large enough so that T(n) ≤ 2^{n/3}/25. We will show
that the probability that M gives the wrong answer on input 1^n is at least 1/8 for every
way of fixing the oracle answers on inputs of length not equal to n. The probability is taken
over the random choices of the permutation oracle for inputs of length n.
Consider the following method of defining a random permutation on {0,1}^n. Let
x_0, x_1, ..., x_{T(n)} be a sequence of strings chosen uniformly at random in {0,1}^n. Pick π_0
uniformly at random among permutations such that π_0(x_0) = 1^n, and for i ≥ 1 let
π_i = π_{i−1} ∘ τ_i, where τ_i is the transposition (x_{i−1} x_i). Then π_i(x_i) = 1^n, and
each π_i is a random permutation on {0,1}^n.
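The uniformity claim can be verified exhaustively on a toy domain. In the illustrative snippet below (our addition), a 3-element domain stands in for {0,1}^n and the element 0 stands in for 1^n; we enumerate every random choice exactly and check both the invariant π_1(x_1) = 1^n and the uniformity of π_1:

```python
# Brute-force check (3-element domain) that the construction above yields a
# uniformly random permutation at every stage: pi_0 is uniform among
# permutations with pi_0(x_0) = target, and pi_1 = pi_0 composed with the
# transposition (x_0 x_1). All random choices are enumerated exactly.
from itertools import permutations, product
from collections import Counter

domain = (0, 1, 2)
target = 0  # stands in for the string 1^n

def transpose(a, b):
    def t(z):
        return b if z == a else a if z == b else z
    return t

counts = Counter()
for x0, x1 in product(domain, repeat=2):          # uniform x_0, x_1
    for perm in permutations(domain):             # all candidate pi_0
        pi0 = dict(zip(domain, perm))
        if pi0[x0] != target:                     # keep only pi_0(x_0) = target
            continue
        tau = transpose(x0, x1)
        pi1 = tuple(pi0[tau(z)] for z in domain)  # pi_1 = pi_0 o tau
        assert pi1[x1] == target                  # invariant pi_i(x_i) = target
        counts[pi1] += 1

# Each of the 6 permutations of the domain occurs equally often (3 times),
# i.e. pi_1 is uniform over all permutations.
assert len(counts) == 6 and set(counts.values()) == {3}
```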
Consider a sequence of permutation oracles A_i such that A_i agrees with π_i on strings of
length n and with a fixed arbitrary length-preserving permutation on strings of other
lengths. Denote by |φ_i⟩ the time-i superposition of M^{A_{T(n)}} on input 1^n, and by
|φ'_i⟩ the time-i superposition of M^{A_{T(n)−1}} on input 1^n. By construction,
with probability exactly 1/2, the string 1^n is a member of exactly one of the two languages
L_{A_{T(n)}} and L_{A_{T(n)−1}}. We will show that E[‖|φ_{T(n)}⟩ − |φ'_{T(n)}⟩‖] ≤ 1/50.
Here the expectation
is taken over the random choice of the oracles. By Markov's inequality,
P[‖|φ_{T(n)}⟩ − |φ'_{T(n)}⟩‖ ≤ 2/25] ≥ 3/4. Applying Theorem 3.1, we conclude that if
‖|φ_{T(n)}⟩ − |φ'_{T(n)}⟩‖ ≤ 2/25, then the
acceptance probabilities of M^{A_{T(n)}} and M^{A_{T(n)−1}} differ by at most 8/25 < 1/3, and hence
either both machines accept input 1^n or both reject that input. Therefore M^{A_{T(n)}} and
M^{A_{T(n)−1}} give the same answer on input 1^n with probability at least 3/4. By construction,
the probability that the string 1^n belongs to exactly one of the two languages L_{A_{T(n)}}
and L_{A_{T(n)−1}}
is equal to P[first bit of x_{T(n)−1} ≠ first bit of x_{T(n)}] = 1/2. Therefore, we can
conclude that with probability at least 1/4, either M^{A_{T(n)}} or M^{A_{T(n)−1}} gives the wrong
answer on input 1^n. Since A_{T(n)} and A_{T(n)−1} are chosen from the same distribution,
we can conclude that M^{A_{T(n)}} gives the wrong answer on input 1^n with probability at
least 1/8.
To bound E[‖|φ_{T(n)}⟩ − |φ'_{T(n)}⟩‖], we show that |φ_{T(n)}⟩ and |φ'_{T(n)}⟩ are each close to
a certain superposition |ψ_{T(n)}⟩. To define this superposition, run M on input 1^n with
a different oracle on each step: on step i, use A_i to answer the oracle queries. Denote
by |ψ_i⟩ the time-i superposition that results. Consider the set of time-string pairs
S = {(i, x_j) : i ≤ j ≤ T}. It is easily checked that the oracle queries in the computation
described above and those of M^{A_{T(n)}} and M^{A_{T(n)−1}} differ only on the set S. We claim
that the expected query magnitude of any pair in the set is at most 1/2^n, since for j ≥ i,
we may think of x_j as having been randomly chosen during step j, after the superposition
of oracle queries to be performed has already been written on the oracle tape. Let α be the
sum of the query magnitudes for time-string pairs in S. Then E[α] ≤ T(n)²/2^n.
Let ε be the random variable √(αT(n)). Then by Theorem 3.3,
‖|ψ_{T(n)}⟩ − |φ_{T(n)}⟩‖ ≤ ε. But E[ε] = E[√(αT(n))] ≤ √(T(n)E[α]) ≤ √(T(n)³/2^n) ≤ 1/125,
since T(n) ≤ 2^{n/3}/25. The same bound applies to ‖|ψ_{T(n)}⟩ − |φ'_{T(n)}⟩‖, so by the
triangle inequality we conclude that E[‖|φ_{T(n)}⟩ − |φ'_{T(n)}⟩‖] ≤ 2/125 ≤ 1/50.
Finally, it is easy to conclude that M decides membership in L_A with probability 0 for
a uniformly random permutation oracle A. □
Note: In view of Grover's algorithm [13], we know that the constant "1/2" in the statement
of Theorem 3.5 cannot be improved. On the other hand, there is no evidence that the
constant "1/3" in the statement of Theorem 3.6 is fundamental. It may well be that
Theorem 3.6 would still hold (albeit not its current proof) with 1/2 substituted for 1/3.
Corollary 3.7 Relative to a random permutation oracle, with probability 1, there exists
a quantum one-way permutation. Given the oracle, this permutation can be computed
efficiently even with a classical deterministic machine, yet it requires exponential time to
invert even on a quantum machine.
Proof. Given an arbitrary permutation oracle A for which A^{−1} can be computed in time
o(2^{n/3}) on a quantum Turing machine, it is just as easy to decide L_A as defined in the proof
of Theorem 3.6. It follows from that proof that this happens with probability 0 when A is
a uniformly random permutation oracle. □
4 Using a Bounded-Error QTM as a Subroutine
The notion of a subroutine call or an oracle invocation provides a simple and useful abstraction
in the context of classical computation. Before making this abstraction in the context of
quantum computation, there are some subtle considerations that must be thought through.
For example, if the subroutine computes the function f , we would like to think of an invocation
of the subroutine on the string x as magically writing f(x) in some designated spot
(actually xoring it to ensure unitarity). In the context of quantum algorithms, this abstraction
is only valid if the subroutine cleans up all traces of its intermediate calculations, and
leaves just the final answer on the tape. This is because if the subroutine is invoked on a
superposition of x's, then different values of x would result in different scratch-work on the
tape, and would prevent these different computational paths from interfering. Since erasing
is not a unitary operation, the scratch-work cannot, in general, be erased post-facto. In the
special case where f can be efficiently computed deterministically, it is easy to design the
subroutine so that it reversibly erases the scratch work: simply compute f(x), copy f(x)
into safe storage, and then uncompute f(x) to get rid of the scratch work [2]. However,
in the case that f is computed by a BQP machine, the situation is more complicated.
This is because only some of the computational paths of the machine lead to the correct
answer f(x), and therefore if we copy f(x) into safe storage and then uncompute f(x),
computational paths with different values of f(x) will no longer interfere with each other,
and we will not reverse the first phase of the computation. We show, nonetheless, that if
we boost the success probability of the BQP machine before copying f(x) into safe storage
and uncomputing f(x), then most of the weight of the final superposition has a clean tape
with only the input x and the answer f(x). Since such tidy BQP machines can be safely
used as subroutines, this allows us to show that BQP^{BQP} = BQP. The result also justifies
our definition of oracle quantum machines.
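The compute-copy-uncompute idiom for a deterministically computable f can be sketched with a classical reversible toy model (our illustration, not the QTM construction of [2]; the function f below is an arbitrary stand-in):

```python
# Toy classical sketch of the compute / copy / uncompute idiom (assumed toy f,
# not the QTM construction itself). State = (x, scratch, out); every step is
# its own inverse (xor), so the computation is reversible, and the scratch
# register returns to 0 at the end.
def f(x):
    return x * x % 16          # any efficiently computable function

def compute(state):            # forward pass: fill scratch with f(x)
    x, scratch, out = state
    return (x, scratch ^ f(x), out)

def copy(state):               # xor the scratch value into safe storage
    x, scratch, out = state
    return (x, scratch, out ^ scratch)

def uncompute(state):          # reverse of compute: clears the scratch
    x, scratch, out = state
    return (x, scratch ^ f(x), out)

state = (7, 0, 0)
state = uncompute(copy(compute(state)))
assert state == (7, 0, f(7))   # scratch erased, answer in safe storage
print(state)
```

Because each phase is reversible, no trace of the intermediate calculation remains; in the bounded-error quantum setting the same pattern only works approximately, which is what Theorems 4.13 and 4.14 quantify.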
The correctness of the boosting procedure is proved in Theorems 4.13 and 4.14. The
proof follows the same outline as in the classical case, except that we have to be much
more careful in simple programming constructs such as looping, etc. We therefore borrow
the machinery developed in [4] for this purpose, and present the statements of the relevant
lemmas and theorems in the first part of this section. The main new contribution in this
section is in the proofs of Theorems 4.13 and 4.14. The reader may therefore wish to skip
directly ahead to these proofs.
4.1 Some Programming Primitives for QTMs
In this subsection, we present several definitions, lemmas and theorems from [4].
Recall that a QTM M is defined by a triplet (Σ, Q, δ), where: Σ is a finite alphabet with
an identified blank symbol #; Q is a finite set of states with an identified initial state q_0
and final state q_f ≠ q_0; and δ, the quantum transition function, is a function
δ : Q × Σ → C̃^{Σ×Q×{L,R}}, where
C̃ is the set of complex numbers whose real and imaginary parts can be approximated
to within 2^{−n} in time polynomial in n.
Definition 4.1 A final configuration of a QTM is any configuration in state q f . If when
QTM M is run with input x, at time T the superposition contains only final configurations
and at any time less than T the superposition contains no final configuration, then M halts
with running time T on input x. The superposition of M at time T is called the final
superposition of M run on input x. A polynomial-time QTM is a well-formed QTM which
on every input x halts in time polynomial in the length of x.
Definition 4.2 A QTM M is called well-behaved if it halts on all input strings in a final
superposition where each configuration has the tape head in the same cell. If this cell is
always the start cell, we call the QTM stationary.
We will say that a QTM M is in normal form if all transitions from the distinguished
state q f lead to the distinguished state q 0 , the symbol in the scanned cell is left unchanged,
and the head moves right, say. Formally:
Definition 4.3 A QTM M = (Σ, Q, δ) is in normal form if for every σ ∈ Σ, δ(q_f, σ) = |σ⟩|q_0⟩|R⟩.
Theorem 4.4 If f is a function mapping strings to strings which can be computed in
deterministic polynomial time and such that the length of f(x) depends only on the length
of x, then there is a polynomial-time, stationary, normal form QTM which given input x,
produces output x; f(x), and whose running time depends only on the length of x.
If f is a one-to-one function from strings to strings such that both f and f^{−1} can be
computed in deterministic polynomial time, and such that the length of f(x) depends only on
the length of x, then there is a polynomial-time, stationary, normal form QTM which given
input x, produces output f(x), and whose running time depends only on the length of x.
Definition 4.5 A multi-track Turing machine with k tracks is a Turing machine whose
alphabet Σ is of the form Σ_1 × Σ_2 × ⋯ × Σ_k, with a special blank symbol # in each Σ_i so that
the blank of Σ is (#, …, #). We specify the input by specifying the string on each "track"
(separated by ';'), and optionally by specifying the alignment of the contents of the tracks.
Lemma 4.6 Given any QTM M = (Σ, Q, δ) and any set Σ', there is a QTM
M' = (Σ × Σ', Q, δ') such that M' behaves exactly as M while leaving its second track unchanged.
Lemma 4.7 Given any QTM M = (Σ_1 × Σ_2 × ⋯ × Σ_k, Q, δ) and a permutation π : [k] → [k],
there is a QTM M' such that M' behaves exactly as M
except that its tracks are permuted according to π.
Lemma 4.8 If M 1 and M 2 are well-behaved, normal form QTMs with the same alphabet,
then there is a normal form QTM M which carries out the computation of M 1 followed by
the computation of M 2 .
Lemma 4.9 Suppose that M is a well-behaved, normal form QTM. Then there is a normal
form QTM M' such that on input x; k with k > 0, the machine M' runs M for k iterations
on its first track.
Definition 4.10 If QTMs M_1 and M_2 have the same alphabet, then we say that M_2 reverses
the computation of M_1 if the following holds: for any input x on which M_1 halts, let c_x and
φ_x be the initial configuration and final superposition of M_1 on input x. Then M_2, on input
the superposition φ_x, halts with a final superposition consisting entirely of configuration c_x.
Note that for M_2 to reverse M_1, the final state of M_2 must be equal to the initial state of
M_1 and vice versa.
Lemma 4.11 If M is a normal form QTM which halts on all inputs, then there is a normal
form QTM M' that reverses the computation of M with slowdown by a factor of 5.
Finally, recall the definition of the class BQP.
Definition 4.12 Let M be a stationary, normal form, multi-track QTM whose last track
has alphabet {#, 0, 1}. We say that M accepts x if it halts with a 1 in the last track of the
start cell. Otherwise we say that M rejects x.
A QTM M accepts the language L ⊆ (Σ − #)* with probability p if M accepts with probability
at least p every string x ∈ L and rejects with probability at least p every string
x ∉ L. We define the class BQP (bounded-error quantum polynomial time)
as the set of languages which are accepted with probability 2/3 by some polynomial-time
QTM. More generally, we define the class BQTime(T(n)) as the set of languages which
are accepted with probability 2/3 by some QTM whose running time on any input of length
n is bounded by T(n).
4.2 Boosting and Subroutine Calls
Theorem 4.13 If QTM M accepts language L with probability 2/3 in time T(n) > n,
with T(n) time-constructible, then for any ε > 0, there is a QTM M' which accepts L with
probability at least 1 − ε in time cT(n), where c is polynomial in log 1/ε but independent of n.
Proof. Let M be a stationary QTM which accepts the language L in time T(n).
We will build a machine M' that runs k independent copies of M and then takes the
majority vote of the k answers. On any input x, M will have some final superposition
Σ_i α_i |x_i⟩ of strings x_i.
If we call A the set of i for which x_i has the correct answer M(x), then Σ_{i∈A} |α_i|² ≥ 2/3, and
running M on k separate copies of its input will produce the superposition
Σ_{i_1,…,i_k} α_{i_1} ⋯ α_{i_k} |x_{i_1}⟩ ⋯ |x_{i_k}⟩. Then the probability of seeing |x_{i_1}⟩ ⋯ |x_{i_k}⟩ such that the
majority have the correct answer M(x) is the sum of |α_{i_1}|² ⋯ |α_{i_k}|² over those i_1, …, i_k such that the majority
of i_1, …, i_k lie in A. But this is just like taking the majority of k independent coin flips,
each with probability at least 2/3 of heads. Therefore there is some constant b such that
for k = b log 1/ε, the probability of seeing the correct answer will be at least 1 − ε.
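The majority-vote calculation can be made concrete with a short script (an illustration added here; it does not compute the constant b, it simply evaluates the exact majority error for p = 2/3 and shows the exponential decay in k):

```python
# Exact error probability of a k-fold majority vote when each run is correct
# with probability p = 2/3 (illustration only; the constant b in k = b log 1/eps
# is not taken from the paper). Odd k avoids ties.
from math import comb
from fractions import Fraction

def majority_error(k, p=Fraction(2, 3)):
    # probability that at most floor(k/2) of the k runs are correct
    return sum(comb(k, j) * p**j * (1 - p)**(k - j) for j in range(k // 2 + 1))

for k in (1, 5, 15, 25, 35):
    print(k, float(majority_error(k)))

# the error decays exponentially in k, so k proportional to log(1/eps)
# achieves failure probability at most eps
assert majority_error(35) < majority_error(15) < majority_error(5) < Fraction(1, 3)
```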
So, we will build a machine to carry out the following steps.
1. Compute k = ⌈b log 1/ε⌉ and n = |x|.
2. Write out k copies of the input x spaced out with 2n blank cells in between, and write
down k and n on other tracks.
3. Loop k times on a machine that runs M and then steps n times to the right.
4. Calculate the majority of the k answers and write it back in the start cell.
We construct the desired QTM by building a QTM for each of these four steps and then
dovetailing them together.
Since Steps 1, 2, and 4 require easily computable functions whose output lengths depend
only on k and the length of x, we can carry them out using well-behaved, normal form
QTMs, constructed using Theorem 4.4, whose running times also depend only on k and the
length of x.
So, we complete the proof by constructing a QTM to run the given machine k times.
First, using Theorem 4.4, we can construct a stationary, normal form QTM which drags the
integers k and n one square to the right on its work track. If we add a single step right
to the end of this QTM and apply Lemma 4.9, we can build a well-behaved, normal form
QTM which moves n squares to the right, dragging k and n along with it. Dovetailing this
machine after M, and then applying Lemma 4.9, gives a normal form QTM that runs M on
each of the k copies of the input. Finally, we can dovetail with a machine to return with k
and n to the start cell by using Lemma 4.9 two more times around a QTM which carries k
and n one step to the left. □
The extra information on the output tape of a QTM can be erased by copying the desired
output to another track, and then running the reverse of the QTM. If the output is the
same in every configuration in the final superposition, then this reversal will exactly recover
the input. Unfortunately, if the output differs in different configurations, then saving the
output will prevent these configurations from interfering when the machine is reversed, and
the input will not be recovered. We show, however, that if the output is the same in most of
the final superposition, then the reversal must lead us close to the input.
Theorem 4.14 If the language L is contained in the class BQTime(T(n)), with T(n) > n
and T(n) time-constructible, then for any ε > 0, there is a QTM M' which accepts L with
probability at least 1 − ε and has the following property. When run on input x of length n,
M' runs for time bounded by cT(n), where c is a polynomial in log 1/ε, and produces a final
superposition in which |x⟩|L(x)⟩, with L(x) = 1 if x ∈ L and L(x) = 0 otherwise, has squared
magnitude at least 1 − ε.
Proof. Let M be a stationary, normal form QTM which accepts language L in
time bounded by T(n).
According to Theorem 4.13, at the expense of a slowdown by a factor which is polynomial
in log 1/ε but independent of n, we can assume that M accepts L with probability at least
1 − ε/2 on every input.
Then we can construct the desired M' by running M, copying the answer to another
track, and then running the reverse of M. The copy is easily accomplished with a simple
two-step machine that steps left and back right while writing the answer on a clean track.
Using Lemma 4.11, we can construct a normal form QTM M^R which reverses M. Finally,
with appropriate use of Lemmas 4.6 and 4.7, we can construct the desired stationary QTM
M' by dovetailing machines M and M^R around the copying machine.
To see that this M' has the desired properties, consider running M' on input x of
length n. M' will first run M on x, producing some final superposition Σ_y α_y |y⟩ of
configurations y of M on input x. Then it will write a 0 or 1 in the extra track of the start cell
of each configuration, and run M^R on the resulting superposition |φ⟩ = Σ_y α_y |y⟩|b_y⟩. If we were
instead to run M^R on the superposition |φ'⟩ = Σ_y α_y |y⟩|M(x)⟩, we would after T(n) steps
have the superposition consisting entirely of the final configuration with output x; M(x).
Clearly, ⟨φ|φ'⟩ is real, and since M has success probability at least 1 − ε/2, we have
⟨φ|φ'⟩ ≥ 1 − ε/2.
Therefore, since the time evolution of M^R is unitary and hence preserves the inner product,
the final superposition of M' must have an inner product with |x⟩|M(x)⟩ which is real and
at least 1 − ε/2. Therefore, the squared magnitude in the final superposition of M' of the
final configuration with output x; M(x) must be at least (1 − ε/2)² ≥ 1 − ε. □
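The two facts used at the end of this proof, that unitary evolution preserves inner products and that (1 − ε/2)² ≥ 1 − ε, can be illustrated numerically (a toy 2-dimensional example, our addition):

```python
# Toy numerical illustration (not part of the proof): a unitary map preserves
# inner products, and an inner product of at least 1 - eps/2 between unit
# vectors forces squared overlap at least (1 - eps/2)^2 >= 1 - eps.
import cmath

def apply(U, v):
    return [U[0][0] * v[0] + U[0][1] * v[1],
            U[1][0] * v[0] + U[1][1] * v[1]]

def inner(u, v):
    return sum(a.conjugate() * b for a, b in zip(u, v))

theta = 0.3
U = [[cmath.cos(theta), -cmath.sin(theta)],   # a rotation: a simple unitary
     [cmath.sin(theta),  cmath.cos(theta)]]

phi  = [0.99498743710662, 0.1]                # unit vectors, overlap ~0.995
phi2 = [1.0, 0.0]
before = inner(phi, phi2)
after = inner(apply(U, phi), apply(U, phi2))
assert abs(before - after) < 1e-12            # unitarity preserves <phi|phi'>

eps = 0.1
assert (1 - eps / 2) ** 2 >= 1 - eps          # (1 - eps/2)^2 >= 1 - eps
```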
Corollary 4.15 BQP^{BQP} = BQP.
Acknowledgement
We wish to thank Bob Solovay for several useful discussions.
--R
"Arthur - Merlin games: A randomized proof system, and a hierarchy of complexity classes"
"Logical reversibility of computation"
"Relative to a random oracle A, P A 6= NP A 6= co-NP A with probability 1"
"Quantum complexity theory"
"The quantum challenge to structural complexity theory"
"Oracle quantum computing"
"Tight bounds on quantum searching"
"Learning DNF over uniform distribution using a quantum example oracle"
"Quantum theory, the Church-Turing principle and the universal quantum computer"
"Quantum computational networks"
"Rapid solution of problems by quantum computation"
"Simulating physics with computers"
"A fast quantum mechanical algorithm for database search"
"Phase information in quantum oracle computing"
"Algorithms for quantum computation: Discrete logarithms and factoring"
"On the power of quantum computation"
"Quantum circuit complexity"
--TR
--CTR
Feng Lu , Dan C. Marinescu, An R || Cmax Quantum Scheduling Algorithm, Quantum Information Processing, v.6 n.3, p.159-178, June 2007
Peter W. Shor, Why haven't more quantum algorithms been found?, Journal of the ACM (JACM), v.50 n.1, p.87-90, January
Marcello Frixione, Tractable Competence, Minds and Machines, v.11 n.3, p.379-397, August 2001
Alp Atici , Rocco A. Servedio, Improved Bounds on Quantum Learning Algorithms, Quantum Information Processing, v.4 n.5, p.355-386, November 2005
Lov K. Grover , Jaikumar Radhakrishnan, Is partial quantum search of a database any easier?, Proceedings of the seventeenth annual ACM symposium on Parallelism in algorithms and architectures, July 18-20, 2005, Las Vegas, Nevada, USA
Alex Fabrikant , Tad Hogg, Graph coloring with quantum heuristics, Eighteenth national conference on Artificial intelligence, p.22-27, July 28-August 01, 2002, Edmonton, Alberta, Canada
George F. Viamontes , Igor L. Markov , John P. Hayes, Is Quantum Search Practical?, Computing in Science and Engineering, v.7 n.3, p.62-70, May 2005
Mika Hirvensalo, Quantum computing Facts and folklore, Natural Computing: an international journal, v.1 n.1, p.135-155, May 2002
Mark Adcock , Richard Cleve , Kazuo Iwama , Raymond Putra , Shigeru Yamashita, Quantum lower bounds for the Goldreich-Levin problem, Information Processing Letters, v.97 n.5, p.208-211, March 2006
Akinori Kawachi , Hirotada Kobayashi , Takeshi Koshiba , Raymond H. Putra, Universal test for quantum one-way permutations, Theoretical Computer Science, v.345 n.2-3, p.370-385, 22 November 2005
Maciej Gowin, On the Complexity of Searching for a Maximum of a Function on a Quantum Computer, Quantum Information Processing, v.5 n.1, p.31-41, February 2006
Dorit Aharonov , Alexei Kitaev , Noam Nisan, Quantum circuits with mixed states, Proceedings of the thirtieth annual ACM symposium on Theory of computing, p.20-30, May 24-26, 1998, Dallas, Texas, United States
Harry Buhrman , Richard Cleve , Avi Wigderson, Quantum vs. classical communication and computation, Proceedings of the thirtieth annual ACM symposium on Theory of computing, p.63-68, May 24-26, 1998, Dallas, Texas, United States
Damien Woods , Thomas J. Naughton, An optical model of computation, Theoretical Computer Science, v.334 n.1-3, p.227-258, 11 April 2005
Ashwin Nayak , Felix Wu, The quantum query complexity of approximating the median and related statistics, Proceedings of the thirty-first annual ACM symposium on Theory of computing, p.384-393, May 01-04, 1999, Atlanta, Georgia, United States
Stephen Fenner , Lance Fortnow , Stuart A. Kurtz , Lide Li, An oracle builder's toolkit, Information and Computation, v.182 n.2, p.95-136, 01 May
Andris Ambainis , Ashwin Nayak , Ammon Ta-Shma , Umesh Vazirani, Dense quantum coding and a lower bound for 1-way quantum automata, Proceedings of the thirty-first annual ACM symposium on Theory of computing, p.376-383, May 01-04, 1999, Atlanta, Georgia, United States
Michele Mosca, Counting by quantum eigenvalue estimation, Theoretical Computer Science, v.264 n.1, p.139-153, 08/06/2001
A. Papageorgiou , H. Woniakowski, The Sturm-Liouville Eigenvalue Problem and NP-Complete Problems in the Quantum Setting with Queries, Quantum Information Processing, v.6 n.2, p.101-120, April 2007
Tetsuro Nishino, Mathematical models of quantum computation, New Generation Computing, v.20 n.4, p.317-337, October 2002
An introduction to quantum computing for non-physicists, ACM Computing Surveys (CSUR), v.32 n.3, p.300-335, Sept. 2000
Andris Ambainis, Quantum lower bounds by quantum arguments, Journal of Computer and System Sciences, v.64 n.4, p.750-767, June 2002
Andris Ambainis, Quantum lower bounds by quantum arguments, Proceedings of the thirty-second annual ACM symposium on Theory of computing, p.636-643, May 21-23, 2000, Portland, Oregon, United States
Lov K. Grover, A framework for fast quantum mechanical algorithms, Proceedings of the thirtieth annual ACM symposium on Theory of computing, p.53-62, May 24-26, 1998, Dallas, Texas, United States
Howard Barnum , Michael Saks, A lower bound on the quantum query complexity of read-once functions, Journal of Computer and System Sciences, v.69 n.2, p.244-258, September 2004
Markus Hunziker , David A. Meyer, Quantum Algorithms for Highly Structured Search Problems, Quantum Information Processing, v.1 n.3, p.145-154, June 2002
Scott Aaronson, Quantum lower bound for the collision problem, Proceedings of the thiry-fourth annual ACM symposium on Theory of computing, May 19-21, 2002, Montreal, Quebec, Canada
Umesh Vazirani, Fourier transforms and quantum computation, Theoretical aspects of computer science: advanced lectures, Springer-Verlag New York, Inc., New York, NY, 2002
Tarsem S. Purewal, Jr., Revisiting a limit on efficient quantum computation, Proceedings of the 44th annual southeast regional conference, March 10-12, 2006, Melbourne, Florida
van Dam , Sean Hallgren , Lawrence Ip, Quantum algorithms for some hidden shift problems, Proceedings of the fourteenth annual ACM-SIAM symposium on Discrete algorithms, January 12-14, 2003, Baltimore, Maryland
Sean Hallgren , Cristopher Moore , Martin Rtteler , Alexander Russell , Pranab Sen, Limitations of quantum coset states for graph isomorphism, Proceedings of the thirty-eighth annual ACM symposium on Theory of computing, May 21-23, 2006, Seattle, WA, USA
Andrew Chi-Chih Yao, Graph entropy and quantum sorting problems, Proceedings of the thirty-sixth annual ACM symposium on Theory of computing, June 13-16, 2004, Chicago, IL, USA
Hartmut Klauck, Quantum time-space tradeoffs for sorting, Proceedings of the thirty-fifth annual ACM symposium on Theory of computing, June 09-11, 2003, San Diego, CA, USA
Yaoyun Shi, Quantum and classical tradeoffs, Theoretical Computer Science, v.344 n.2-3, p.335-345, 17 November 2005
Mika Hirvensalo, Computing with quanta-impacts of quantum theory on computation, Theoretical Computer Science, v.287 n.1, p.267-298, 25 September 2002
Marco Carpentieri, On the simulation of quantum Turing machines, Theoretical Computer Science, v.304 n.1-3, p.103-128, 28 July
Harry Buhrman , Lance Fortnow , Ilan Newman , Hein Rhrig, Quantum property testing, Proceedings of the fourteenth annual ACM-SIAM symposium on Discrete algorithms, January 12-14, 2003, Baltimore, Maryland
Harumichi Nishimura , Masanao Ozawa, Computational complexity of uniform quantum circuit families and quantum Turing machines, Theoretical Computer Science, v.276 n.1-2, p.147-181, April 6, 2002
Robert Beals , Harry Buhrman , Richard Cleve , Michele Mosca , Ronald de Wolf, Quantum lower bounds by polynomials, Journal of the ACM (JACM), v.48 n.4, p.778-797, July 2001
Holger Spakowski , Mayur Thakur , Rahul Tripathi, Quantum and classical complexity classes: separations, collapses, and closure properties, Information and Computation, v.200 n.1, p.1-34, 1 July 2005
Scott Aaronson , Yaoyun Shi, Quantum lower bounds for the collision and the element distinctness problems, Journal of the ACM (JACM), v.51 n.4, p.595-605, July 2004
Colin P. Williams, Quantum Search Algorithms in Science and Engineering, IEEE MultiMedia, v.3 n.2, p.44-51, March 1996
Hirotada Kobayashi , Keiji Matsumoto, Quantum multi-prover interactive proof systems with limited prior entanglement, Journal of Computer and System Sciences, v.66 n.3, p.429-450, May
Miklos Santha , Mario Szegedy, Quantum and classical query complexities of local search are polynomially related, Proceedings of the thirty-sixth annual ACM symposium on Theory of computing, June 13-16, 2004, Chicago, IL, USA
Scott Aaronson, Lower bounds for local search by quantum arguments, Proceedings of the thirty-sixth annual ACM symposium on Theory of computing, p.465-474, June 13-16, 2004, Chicago, IL, USA
A. Ambainis, Quantum search algorithms, ACM SIGACT News, v.35 n.2, June 2004
Frederic Magniez , Ashwin Nayak , Jeremie Roland , Miklos Santha, Search via quantum walk, Proceedings of the thirty-ninth annual ACM symposium on Theory of computing, June 11-13, 2007, San Diego, California, USA
Andris Ambainis, Polynomial degree vs. quantum query complexity, Journal of Computer and System Sciences, v.72 n.2, p.220-238, March 2006
Lance Fortnow, One complexity theorist's view of quantum computing, Theoretical Computer Science, v.292 n.3, p.597-610, 31 January
Ronald de Wolf, Quantum communication and complexity, Theoretical Computer Science, v.287 n.1, p.337-353, 25 September 2002
Peter W. Shor, Progress in Quantum Algorithms, Quantum Information Processing, v.3 n.1-5, p.5-13, October 2004
Marco Lanzagorta , Jeffrey K. Uhlmann, Hybrid quantum-classical computing with applications to computer graphics, ACM SIGGRAPH 2005 Courses, July 31-August
Scott Aaronson, Guest Column: NP-complete problems and physical reality, ACM SIGACT News, v.36 n.1, March 2005 | quantum Turing machines;quantum polynomial time;oracle quantum Turing machines |
264525 | A Prioritized Multiprocessor Spin Lock. | AbstractIn this paper, we present the PR lock, a prioritized spin lock mutual exclusion algorithm. The PR lock is a contention-free spin lock, in which blocked processes spin on locally stored or cached variables. In contrast to previous work on prioritized spin locks, our algorithm maintains a pointer to the lock holder. As a result, our spin lock can support operations on the lock holder (e.g., for abort ceiling protocols). Unlike previous algorithms, all work to maintain a priority queue is done while a process acquires a lock when it is blocked anyway. Releasing a lock is a constant time operation. We present simulation results that demonstrate the prioritized acquisition of locks, and compare the performance of the PR lock against that of the best alternative prioritized spin lock. | Introduction
Mutual exclusion is a fundamental synchronization primitive for exclusive access to critical sections or shared
resources on multiprocessors [17]. The spin-lock is one of the mechanisms that can be used to provide mutual
exclusion on shared memory multiprocessors [2]. A spin-lock usually is implemented using atomic read-
modify-write instructions such as Test&Set or Compare&Swap, which are available on most shared-memory
multiprocessors [16]. Busy waiting is effective when the critical section is small and the processor resources
are not needed by other processes in the interim. However, a spin-lock is usually not fair, and a naive
implementation can severely limit performance due to network and memory contention [1, 11]. A careful
design can avoid contention by requiring processes to spin on locally stored or cached variables [19].
In real time systems, each process has timing constraints and is associated with a priority indicating the
urgency of that process [26]. This priority is used by the operating system to order the rendering of services
among competing processes. Normally, the higher the priority of a process, the faster its request for services
gets honored. When the synchronization primitives disregard the priorities, lower priority processes may
block the execution of a process with a higher priority and a stricter timing constraint [24, 23]. This priority
inversion may cause the higher priority process to miss its deadline, leading to a failure of the real time
system. Most of the work done in synchronization is not based on priorities, and thus is not suitable for real
time systems. Furthermore, general purpose parallel processing systems often have processes that are "more
important" than others (kernel processes, processes that hold many locks, etc.). The performance of such
systems will benefit from prioritized access to critical sections.
In this paper, we present a prioritized spin-lock algorithm, the PR-lock. The PR-lock algorithm is suitable
for use in systems which either use static-priority schedulers, or use dynamic-priority schedulers in which
the relative priorities of existing tasks do not change while blocked (such as Earliest Deadline First [26] or
Minimum Laxity [15]). The PR-lock is a contention-free lock [19], so its use will not create excessive network
or memory contention. The PR-lock maintains a queue of records, with one record for each process that
has requested but not yet released the lock. The queue is maintained in sorted order (except for the head
record) by the acquire lock operations, and the release lock operation is performed in constant time. As a
result, the queue order is maintained by processes that are blocked anyway, and a high priority task does not
perform work for a low priority task when it releases the lock. The lock keeps a pointer to the record of the
lock holder, which aids in the implementation of priority inheritance protocols [24, 23]. A task's lock request
and release are performed at well-defined points in time, which makes the lock predictable. We present a
correctness proof, and simulation results which demonstrate the prioritized lock access, the locality of the
references, and the improvement over a previously proposed prioritized spin lock.
We organize this paper as follows. In Section 1.1 we describe previous work in this area and in Section 2,
we present our algorithm. In Section 3 we argue the correctness of our algorithm. In Section 4 we discuss
an extension to the algorithm presented in Section 2. In Section 5 we show the simulation results which
compare the performance of the PR-lock against that of other similar algorithms. In Section 6 we conclude
this paper by suggesting some applications and future extensions to the PR-lock algorithm.
1.1 Previous Work
Our PR-lock algorithm is based on the MCS-lock algorithm, which is a spin-lock mutual exclusion algorithm
for shared-memory multiprocessors [19]. The MCS-lock grants lock requests in FIFO order, and blocked
processes spin on locally accessible flag variables only, avoiding the contention usually associated with busy-waiting
in multiprocessors [1, 11]. Each process has a record that represents its place in the lock queue. The
MCS-lock algorithm maintains a pointer to the tail of the lock queue. A process adds itself to the queue
by swapping the current contents of the tail pointer for the address of its record. If the previous tail was
nil, the process acquired the lock. Otherwise, the process inserts a pointer to its record in the record of
the previous tail, and spins on a flag in its record. The head of the queue is the record of the lock holder.
The lock holder releases the lock by resetting the flag of its successor record. If no successor exists, the lock
holder sets the tail pointer to nil using a Compare&Swap instruction.
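The MCS queue discipline just described can be sketched with C11 atomics. This is a minimal single-lock sketch, not the original presentation from [19]; the names `mcs_node`, `mcs_acquire`, and `mcs_release` are ours:

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

typedef struct mcs_node {
    _Atomic(struct mcs_node *) next;
    atomic_bool locked;                  /* process spins while this is true */
} mcs_node;

typedef struct { _Atomic(mcs_node *) tail; } mcs_lock;

void mcs_acquire(mcs_lock *L, mcs_node *me) {
    atomic_store(&me->next, NULL);
    atomic_store(&me->locked, true);
    /* swap ourselves in as the new tail of the queue */
    mcs_node *prev = atomic_exchange(&L->tail, me);
    if (prev != NULL) {                  /* queue was non-empty: link in and spin */
        atomic_store(&prev->next, me);
        while (atomic_load(&me->locked))
            ;                            /* local spin on our own flag */
    }                                    /* prev == NULL: lock acquired immediately */
}

void mcs_release(mcs_lock *L, mcs_node *me) {
    mcs_node *succ = atomic_load(&me->next);
    if (succ == NULL) {
        /* no known successor: try to reset the tail pointer to NULL */
        mcs_node *expected = me;
        if (atomic_compare_exchange_strong(&L->tail, &expected, NULL))
            return;                      /* queue is now empty */
        /* a successor is in the middle of linking in; wait for it to appear */
        while ((succ = atomic_load(&me->next)) == NULL)
            ;
    }
    atomic_store(&succ->locked, false);  /* hand the lock to the successor */
}
```

Note how each waiter spins only on a flag in its own record, which is the contention-avoidance property the PR-lock inherits.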
Molesky, Shen, and Zlokapa [20] describe a prioritized spin lock that uses the test-and-set instruction.
Their algorithm is based on Burns' fair test-and-set mutual exclusion algorithm [5]. However, this lock is
not contention-free.
Markatos and LeBlanc [18] present a prioritized spin-lock algorithm based on the MCS-lock algorithm.
Their acquire lock algorithm is almost the same as the MCS acquire lock algorithm, with the exception that
Markatos' algorithm maintains a doubly linked list. When the lock holder releases the lock, it searches for
the highest priority process in the queue. This process' record is moved to the head of the queue, and its
flag is reset. However, the point at which a task requests or releases a lock is not well defined, and the lock
holder might release the lock to a low priority task even though a higher priority task has entered the queue.
In addition, the work of maintaining the priority queue is performed when a lock is released. This choice
makes the time to release a lock unpredictable, and significantly increases the time to acquire or release a
lock (as is shown in section 5). Craig [10] proposes a modification to the MCS lock and to Markatos' lock
that substitutes an atomic Swap for the Compare&Swap instruction, and permits nested locks using only
one lock record per process.
Goscinski [12] develops two algorithms for mutual exclusion for real time distributed systems. The
algorithms are based on token passing. A process requests the critical section by broadcasting its intention
to all other processes in the system. One algorithm grants the token based on the priorities of the processes,
whereas the other algorithm grants the token to processes based on the remaining time to run the processes.
The holder of the token enters the critical section.
The utility of prioritized locks is demonstrated by rate monotonic scheduling theory [9, 24]. Suppose
there are N periodic processes on a uniprocessor. Let E_i and C_i represent the execution
time and the cycle time (periodicity) of the process T_i. We assume that C_1 <= C_2 <= ... <= C_N. Under the
assumption that there is no blocking, [9] show that if for each j, 1 <= j <= N,

E_1/C_1 + E_2/C_2 + ... + E_j/C_j <= j(2^{1/j} - 1)

then all processes can meet their deadlines.
Suppose that B_j is the worst case blocking time that process T_j will incur. Then [24] show that all tasks
can meet their deadlines if, for each j, 1 <= j <= N,

E_1/C_1 + E_2/C_2 + ... + E_j/C_j + B_j/C_j <= j(2^{1/j} - 1)
Thus, the blocking of a high priority process by a lower priority process has a significant impact on the
ability of tasks to meet their deadlines. Much work has been done to bound the blocking due to lower priority
processes. For example, the Priority Ceiling protocol [24] guarantees that a high priority process is blocked
by a lower priority process for the duration of at most one critical section. The Priority Ceiling protocol has
been extended to handle dynamic-priority schedulers [7] and multiprocessors [23, 8].
Our contribution over previous work in developing prioritized contention-free spin locks ([18] and [10]) is
to more directly implement the desired priority queue. Our algorithm maintains a pointer to the head of the
lock queue, which is the record of the lock holder. As a result, the PR-lock can be used to implement priority
inheritance [24, 23]. The work of maintaining priority ordering is performed in the acquire lock operation,
when a task is blocked anyway. The time required to release a lock is small and predictable, which reduces
the length and the variance of the time spent in the critical section. The PR-lock has well-defined points
in time in which a task joins the lock queue and releases its lock. As a result, we can guarantee that the
highest priority waiting task always receives the lock. Finally, we provide a proof of correctness.
Our PR-lock algorithm is similar to the MCS-lock algorithm in that both maintain queues of blocked processes
using the Compare&Swap instruction. However, while the MCS-lock and Markatos' lock maintain a global
pointer to the tail of the queue, the PR-lock algorithm maintains a global pointer to the head of the queue.
In both the MCS-lock and the Markatos' lock, the processes are queued in FIFO order, whereas in the
PR-lock, the queue is maintained in priority order of the processes.
2.1 Assumptions
We make the following assumptions about the computing environment:
1. The underlying multiprocessor architecture supports an atomic Compare&Swap instruction. We note
that many parallel architectures support this instruction, or a related instruction [13, 21, 3, 28].
2. The multiprocessor has shared memory with coherent caches, or has locally-stored but globally-
accessible shared memory.
3. Each processor has a record to place in the queue for each lock. In a NUMA architecture, this record
is allocated in the local, but globally accessible, memory. This record is not used for any other purpose
for the lifetime of the queue. In Section 4, we allow the record to be used among many lock queues.
4. The higher the actual number assigned for priority, the higher the priority of a process (we can also
assume the opposite).
5. The relative priorities of blocked processes do not change. Acceptable priority assignment algorithms
include Earliest Deadline First and Minimum Laxity.
It should be noted that each process p i participating in the synchronization can be associated with a
unique processor P i . We expect that the queued processes will not be preempted, though this is not a
requirement for correctness.
2.2 Implementation
The PR-lock algorithm consists of two operations. The acquire lock operation acquires a designated lock
and the release lock operation releases the lock. Each process uses the acquire lock and release lock
operations to synchronize access to a resource:
acquire lock(L, r)
critical section
release lock(L)
The following sub-sections present the required version of Compare&Swap, the needed data structures,
and the acquire lock and release lock procedures.
2.2.1 The Compare&Swap
The PR-lock algorithms make use of the Compare&Swap instruction, the code for which is shown in Figure 1.
Compare&Swap is often used on pointers to object records, where a record refers to the physical memory
space and an object refers to the data within a record. Current is a pointer to a record, Old is a previously
sampled value of Current, and New is a pointer to a record that we would like to substitute for *Old (the
record pointed to by Old). We compute the record *New based on the object in *Old (or decide to perform
the swap based on the object in *Old), so we want to set Current equal to New only if Current still points
to the record *Old. However, even if Current points to *Old, it might point to a different object than the
one originally read. This will occur if *Old is removed from the data structure, then re-inserted as Current
with a new object. This sequence of events cannot be detected by the Compare&Swap and is known as the
A-B-A problem.
Following the work of Prakash et al. [22] and Turek et al. [27], we make use of a double-word Com-
pare&Swap instruction [21] to avoid this problem. A counter is appended to Current which is treated as
a part of Current. Thus Current consists of two parts: the value part of Current and the counter part of
Current. This counter is incremented every time a modification is made to *Current. Now all the variables
Procedure CAS(structure pointer *Current, *Old, *New)
/* Assume CAS operates on double words */
atomic {
    if (*Current == *Old) {
        *Current = *New;
        return TRUE;
    }
    else {
        *Old = *Current;
        return FALSE;
    }
}

Figure 1: CAS used in the PR-lock Algorithm
Current, Old , and New are twice their original size. This approach reduces the probability of occurrence of
the A-B-A problem to acceptable levels for practical applications. If a double-word Compare&Swap is not
available, the address and counter can be packed into 32 bits by restricting the possible address range of the
lock records.
We use a version of the Compare&Swap operation in which the current value of the target location is
returned in old, if the Compare&Swap fails. The semantics of the Compare&Swap used is given in Figure 1.
A version of the Compare&Swap instruction that returns only TRUE or FALSE can be used by performing an
additional read.
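The counter technique can be sketched by packing a value and a counter into a single word, along the lines of the packed 32-bit variant mentioned above. The sketch is ours: it packs a 32-bit record index and a 32-bit counter into one 64-bit word, and the names `make_tagged` and `tagged_cas` are invented:

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

/* A "tagged" word: low 32 bits hold a record index, high 32 bits a counter. */
typedef _Atomic uint64_t tagged_ptr;

static uint64_t make_tagged(uint32_t index, uint32_t ctr) {
    return ((uint64_t)ctr << 32) | index;
}
static uint32_t tagged_index(uint64_t t) { return (uint32_t)t; }
static uint32_t tagged_ctr(uint64_t t)   { return (uint32_t)(t >> 32); }

/* CAS that bumps the counter on every successful modification, so a stale
 * snapshot fails even if the same index value has been re-installed (A-B-A). */
static bool tagged_cas(tagged_ptr *cur, uint64_t *old, uint32_t new_index) {
    uint64_t desired = make_tagged(new_index, tagged_ctr(*old) + 1);
    /* on failure, *old is refreshed with the current value, as in Figure 1 */
    return atomic_compare_exchange_strong(cur, old, desired);
}
```

As in the paper, this only reduces the probability of the A-B-A problem: a counter wrap-around between the read and the CAS would go undetected.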
2.2.2 Data Structures
The basic data structure used in the PR-lock algorithm is a priority queue. The lock L contains a pointer to
the first record of the queue. The first record of the queue belongs to the process currently using the lock.
If there is no such process, then L contains nil.
Each process has a locally-stored but globally-accessible record to insert into the lock queue. If process
inserts record q into the queue, we say that q is p's record and p is q's process. The record contains the
process priority, the next-record pointer, a boolean flag Locked on which the process owning the element
busy-waits if the lock is not free, and an additional field Data that can be used to store application-dependent
information about the lock holder.
The next-record pointer is a double sized variable: one half is the actual pointer and the other half is a
counter to avoid the A-B-A problem. The counter portion of the pointer is itself divided into two parts: one bit
of the counter, called the Dq bit, is used to indicate whether the queuing element is in the queue. The rest of
the bits are used as the actual counter. This technique is similar to the one used by Prakash et al. [22] and
Turek et al. [27]. Their counter refers to the record referenced by the pointer. In our algorithm, the counter
refers to the record that contains the pointer, not the record that is pointed to.
If the Dq bit of a record q is FALSE, then the record is in the queue for a lock L. If the Dq bit is TRUE,
then the record is probably not in the queue (for a short period of time, the record might be in the queue
with its Dq bit set TRUE). The Dq bit lets the PR-lock avoid garbage accesses.
Each process keeps the address of its record in a local variable (Self). In addition, each process requires
two local pointer variables to hold the previous and the next queue element for navigating the queue during
the enqueue operation (Prev Node and Next Node).
The data structures used are shown in Figure 2. The Dq bit of the Pointer field is initialized to TRUE,
and the Ctr field is initialized to 0 before the record is first used.
A typical queue formed by the PR-lock algorithm is shown in Figure 3 below. Here L points to the record
q 0 of the current process holding the lock. The record q 0 has a pointer to the record q 1 of the next process
having the highest priority among the processes waiting to acquire the lock L. Record q 1 points to record q 2
of the next higher priority waiting process and so on. The record q n belongs to the process with the least
priority among waiting processes.
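In C terms, the record layout of Figure 2 might be declared as follows. This is a sketch only; the field widths, the bitfield packing of the counter and Dq bit, and the names are our choices:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Composite pointer: a value part plus a counter part.  One bit of the
 * counter (dq) records whether the record is currently enqueued. */
struct pr_pointer {
    struct pr_record *ptr;      /* value part */
    uint32_t          ctr : 31; /* A-B-A counter for the containing record */
    uint32_t          dq  : 1;  /* TRUE => record (probably) not in a queue */
};

struct pr_record {
    int               data;     /* application-dependent info on lock holder */
    bool              locked;   /* process busy-waits while this is true */
    int               priority;
    struct pr_pointer next;     /* link to the next lower-priority waiter */
};

/* The lock itself is a pointer to the head record (the lock holder). */
struct pr_lock { struct pr_pointer head; };
```

In a NUMA setting each process would allocate its `pr_record` in local but globally accessible memory, as assumption 3 requires.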
2.2.3 Acquire Lock Operation
The acquire lock operation is called by a process p^ before using the critical section or resource guarded
by lock L. The parameters of the acquire lock operation are the lock pointer L and the record q^ of the
process (passed to local variable Self).
An acquire lock operation searches for the correct position to insert q^ into the queue using Prev Node
and Next Node to keep track of the current position. In Figure 4, Prev Node and Next Node are abbreviated
to P and N. The records pointed to by P and N are q_i and q_i+1, belonging to processes p_i and p_i+1. Process
p^ positions itself so that Pr(p_i) >= Pr(p^) > Pr(p_i+1), where Pr is a function which maps a process to its
priority. Once such a position is found, q^ is prepared for insertion by making q^ point to q_i+1. Then, the
insertion is committed by making q_i point to q^ by using the Compare&Swap instruction. The various
stages and final result are shown in Figure 4.
The acquire lock algorithm is given in Figure 5. Before the acquire lock procedure is called, the Data
and the Priority fields of the process' record are initialized appropriately. In addition, the Dq bit of the
Next pointer is implicitly TRUE.
The acquire lock operation begins by assuming that the lock is currently free (the lock pointer L is
structure Pointer {
    structure Object *Ptr;
    boolean Dq;
    integer Ctr;
}
structure Record {
    structure structure_of_data Data;
    boolean Locked;
    integer Priority;
    structure Pointer Next;
}
Shared Variable
    structure Pointer L;
Private Variables
    structure Pointer Self, Prev_Node, Next_Node;
    boolean Success, Failure;
    constant TRUE, FALSE, NULL, MAX_PRIORITY;

Figure 2: Data Structures used in the PR-lock Algorithm
Figure 3: Queue data structure used in PR-lock algorithm
Figure 4: Stages in the acquire lock operation (Start, Position, Prepare, Commit)
null). It attempts to change L to point to its own record with the Compare&Swap instruction. If the
Compare&Swap is successful, the lock is indeed free, so the process acquires the lock without busy-waiting.
In the context of the composite pointer structures that the algorithm uses, a NULL pointer is all zeros.
If the swap is unsuccessful, then the acquiring process traverses the queue to position itself between a
higher or equal priority process record and a lower priority process record. Once such a junction is found,
Prev Node will point to the record of the higher or equal priority process and Next Node will point to the record of
the lower priority process. The process first sets its link to Next Node. Then, it attempts to change the
previous record's link to its own record by the atomic Compare&Swap.
If successful, the process sets the Dq flag in its record to FALSE indicating its presence in the queue. The
process then busy-waits until its Locked bit is set to FALSE, indicating that it has been admitted to the
critical section.
There are three cases for an unsuccessful attempt at entering the queue. Problems are detected by
examining the returned value of the failed Compare&Swap marked as F in the algorithm. Note that the
returned value is in the Next Node. In addition, a process might detect that it has misnavigated while
searching the queue. When we read Next Node, the contents of the record pointed to by Prev Node are fixed
because the record's counter is read into Next Node.
1. A concurrent acquire lock operation may overtake the acquire lock operation and insert its own
Procedure acquire_lock(L, Self) {
    do {
        Success = FALSE; Failure = FALSE;
        Self.Ptr->Locked = TRUE;
        Self.Ptr->Next.Ptr = NULL;
        Prev_Node = NULL;                          /* composite NULL: all zeros */
        if (CAS(&L, &Prev_Node, Self)) {           /* Lock free: acquired */
            Self.Ptr->Priority = MAX_PRIORITY;     /* head record */
            Self.Ptr->Next.Dq = FALSE;
            Success = TRUE;
        }
        else {                                     /* Lock in Use; Prev_Node now holds L */
            do {
                Next_Node = Prev_Node.Ptr->Next;
                if ((Next_Node.Dq == TRUE)                                    /* Dequeued, Try Again (ii) */
                        or (Prev_Node.Ptr->Priority < Self.Ptr->Priority))    /* (iii) */
                    Failure = TRUE;
                else {
                    if (Next_Node.Ptr == NULL or (Next_Node.Ptr != NULL and
                            Next_Node.Ptr->Priority < Self.Ptr->Priority)) {
                        Self.Ptr->Next.Ptr = Next_Node.Ptr;                   /* prepare */
                        if (CAS(&(Prev_Node.Ptr->Next), &Next_Node, Self)) {  /* commit (F) */
                            Self.Ptr->Next.Dq = FALSE;   /* in queue, wait to use lock */
                            while (Self.Ptr->Locked) ;   /* busy-wait */
                            Success = TRUE;
                        }
                        else {
                            if ((Next_Node.Dq == TRUE)                        /* Dequeued, Try Again (ii) */
                                    or (Prev_Node.Ptr->Priority < Self.Ptr->Priority))  /* (iii) */
                                Failure = TRUE;
                            /* else continue from current position (i) */
                        }
                    }
                    else
                        Prev_Node = Next_Node;           /* advance (i) */
                }
            } while (!Success and !Failure);
        }
    } while (!Success);
}

Figure 5: The acquire lock operation procedure
record immediately after Prev Node, as shown in Figure 10. In this case the Compare&Swap will fail
at the position marked F in Figure 5. The correctness of this operation's position is not affected, so
the operation continues from its current position (line marked by i in Figure 5).
2. A concurrent release lock operation may overtake the acquire lock operation and remove the record
pointed to by Prev Node, as shown in Figure 11. In this case, the Dq bit in the link pointer of this
record will be TRUE. The algorithm checks for this condition when it scans through the queue and
when it tries to commit its modifications. The algorithm detects the situation in the two places marked
by ii in the Figure 5. Every time a new record is accessed (by Prev Node), its link pointer is read into
Next Node and the Dq bit is checked. In addition, if the Compare&Swap fails, the link pointer is saved
in Next Node and the Dq bit is tested. If the Dq bit is TRUE, the algorithm starts from the beginning.
3. A concurrent release lock operation may overtake the acquire lock operation and remove the record
pointed to by Prev Node, and then the record is put back into the queue, as shown in Figure 12. If the
record returns with a priority higher than or equal to Self's priority, then the position is still correct
and the operation can continue. Otherwise, the operation cannot find the correct insertion point, so it
has to start from the beginning. This condition is tested at the lines marked iii in Figure 5.
The spin-lock busy waiting of a process is broken by the eventual release of the lock by the process which
is immediately ahead of the waiting process.
2.2.4 Release Lock Operation
The release lock operation is straightforward and the algorithm is given in Figure 6. The process p
releasing the lock sets the Dq bit in its record's Link pointer to TRUE, indicating that the record is no
longer in the queue. Setting the Dq bit prevents any acquire lock operation from modifying the link. The
releasing process copies the address of the successor record, if any, to L. The process then releases the lock by
setting the Locked boolean variable in the record of the next process waiting to be FALSE. To avoid testing
special cases in the acquire lock operation, the priority of the head record is set to the highest possible
priority.
3 Correctness of PR-lock Algorithm
In this section, we present an informal argument for the correctness properties of our PR-lock algorithm.
We prove that the PR-lock algorithm is correct by showing that it maintains a priority queue, and the head
Procedure release_lock(L, Self) {
    Self.Ptr->Next.Dq = TRUE;                     /* Mark record as dequeued */
    L = Self.Ptr->Next;                           /* Release Lock */
    if (Self.Ptr->Next.Ptr != NULL) {
        Self.Ptr->Next.Ptr->Priority = MAX_PRIORITY;
        Self.Ptr->Next.Ptr->Locked = FALSE;       /* Admit the next process */
    }
}

Figure 6: The release lock operation procedure
of the priority queue is the process that holds the lock. The PR-lock is decisive-instruction serializable [25].
Both operations of the PR-lock algorithm have a single decisive instruction. The decisive instruction for the
acquire lock operation is the successful Compare&Swap and the decisive instruction for the release lock
operation is setting the Dq bit. Corresponding to a concurrent execution C of the queue operations, there is
an equivalent (with respect to return values and final states) serial execution S_d such that if operation O_1
executes its decisive instruction before operation O_2 does in C, then O_1 -> O_2 in S_d. Thus, the equivalent
priority queue of a PR-lock is in a single state at any instant, simplifying the correctness proof (a concurrent
data structure that is linearizable but not decisive-instruction serializable might be in several states
simultaneously [14]).
We use the following notation in our discussion. PR-lock L has lock pointer L, which points to the first
record in the lock queue (and the record of the process that holds the lock). Let there be N processes p_1,
p_2, ..., p_N that participate in the lock synchronization for a priority lock L, using the PR-lock algorithm.
As mentioned earlier, each process p i allocates a record q i to enqueue and dequeue. Thus, each process p i
participating in the lock access is associated with a queue record q i . Let P r(p i ) be a function which maps
a process to its priority, a number between 1 and N. We also define another function P r(q i ) which maps a
record belonging to a process p i to its priority.
A priority queue is an abstract data type that consists of:
* A finite set Q of elements q_0, q_1, ..., q_n, where each element q_i has a priority n_i = Pr(q_i).
For simplicity, we assume that every n_i is unique. This
assumption is not required for correctness, and in fact processes of the same priority will obtain the
lock in FCFS order.
* Two operations, enqueue and dequeue.
At any instant, the state of the queue can be defined as

Q = (q_0; q_1, q_2, ..., q_n), where Pr(q_1) > Pr(q_2) > ... > Pr(q_n)

We call q_0 the head record of priority queue Q. The head record's process is the current lock holder. Note
that the non-head records are totally ordered.
The enqueue operation is defined as

enqueue(Q, q^) = (q_0; q_1, ..., q_i, q^, q_{i+1}, ..., q_n), where Pr(q_i) > Pr(q^) > Pr(q_{i+1})

The dequeue operation on a non-empty queue is defined as

dequeue(Q) = (q_1; q_2, ..., q_n)

where the return value is q_0. A dequeue operation on an empty queue is undefined.
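The abstract enqueue/dequeue semantics can be captured by a short sequential reference model. This sketch states only the specification, not the concurrent algorithm; the array bound and the names are ours:

```c
#define MAXQ 16

/* Abstract state: pri[0] is the head (lock holder); pri[1..n-1] are the
 * waiters, kept in strictly decreasing priority order. */
struct aqueue { int pri[MAXQ]; int n; };

/* enqueue: insert behind the head, keeping the waiters sorted. */
static void aq_enqueue(struct aqueue *q, int pri) {
    int i = q->n;
    while (i > 1 && q->pri[i - 1] < pri) {   /* shift lower priorities down */
        q->pri[i] = q->pri[i - 1];
        i--;
    }
    q->pri[i] = pri;
    q->n++;
}

/* dequeue: remove and return the head; pri[1] becomes the new head. */
static int aq_dequeue(struct aqueue *q) {
    int head = q->pri[0];
    for (int i = 1; i < q->n; i++)
        q->pri[i - 1] = q->pri[i];
    q->n--;
    return head;
}
```

Note that the head is never displaced by a later, higher-priority enqueue: it already holds the lock, which is exactly the "sorted except for the head record" invariant the PR-lock maintains.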
For every PR-lock L, there is an abstract priority queue Q. Initially, both L and Q are empty. When a
process p^ with a record q^ performs the decisive instruction for the acquire lock operation, Q changes state
to enqueue(Q, q^). Similarly, when a process executes the decisive instruction for a release lock operation,
Q changes state to dequeue(Q).
We show that when we observe L, we find a structure that is equivalent to Q. To observe L, we take a
consistent snapshot [6] of the current state of the system memory. Next, we start at the lock pointer L and
observe the records following the linked list. If the head record has its Dq bit set and its process has exited
the acquire lock operation, then we discard it from our observation. If we observe the same records in the
same sequence in both L and Q, then we say that L and Q are equivalent, and we write L == Q.
Theorem 1 The representative priority queue Q is equivalent to the observed queue of the PR-lock L.
Proof: We prove the theorem by induction on the decisive instructions, using the following two lemmas.
Lemma 1 If Q == L before a release lock decisive instruction, then Q == L after the release lock decisive
instruction.
Proof: Let Q = (q_0; q_1, ..., q_n) before the release lock decisive instruction. A release lock operation
is equivalent to a dequeue operation on the abstract queue. By definition,

dequeue(Q) = (q_1; q_2, ..., q_n)

Figure 7: Observed queue L before and after a release lock
Figure 8: Observed queue L before and after an acquire lock
The before and after states of L are shown in Figure 7. If L points to the record q 0 before the release lock
decisive instruction, the release lock decisive instruction sets the Dq bit in q 0 to TRUE, removing q 0 from
the observable queue. Thus, Q == L after the release lock operation. Note that L will point to q_1 before
the next release lock decisive instruction.
Lemma 2 If Q == L before an acquire lock decisive instruction, then Q == L after the acquire lock
decisive instruction.
Proof: There are two different cases to consider:
Case 1: Q = () before the acquire lock decisive instruction. The equivalent operation on the abstract
queue Q is the enqueue operation. Thus,

enqueue(Q, q^) = (q^)

If the lock L is empty, q^'s process executes a successful decisive Compare&Swap instruction to make L
point to q^ and acquires the lock (Figure 8).
Clearly, Q == L after the acquire lock decisive instruction.
Case 2: Q = (q_0; q_1, ..., q_n) before the acquire lock decisive instruction. The state of the queue Q
after the acquire lock is given by

enqueue(Q, q^) = (q_0; q_1, ..., q_i, q^, q_{i+1}, ..., q_n)

The corresponding L before and after the acquire lock is shown in Figure 9. The pointers P and N are
the Prev Node and Next Node pointers by which q^'s acquire lock operation positions its record such that the
process observes Pr(q_i) >= Pr(q^) > Pr(q_{i+1}); the Next pointer in q^ is set to the address of q_{i+1}.

Figure 9: Observed queue L before and after an acquire lock

The Compare&Swap instruction, marked F in Figure 5, attempts to make the Next pointer in q_i point to q^. If the
Compare&Swap instruction succeeds, then it is the decisive instruction of q^'s process and the resulting queue
L is illustrated in Figure 9. This is equivalent to Q after the enqueue operation. The Compare&Swap
succeeds only when q_i is in the queue, q_{i+1} is the successor record, and Pr(q_i) >= Pr(q^) > Pr(q_{i+1}).
If there are no concurrent operations on the queue, we can observe that P and N are positioned
correctly and the Compare&Swap succeeds. If there are other concurrent operations, they can interfere with
the execution of an acquire lock operation, A. There are three possibilities:
Case a: Another acquire lock A' enqueued its record q' between q_i and q_{i+1}, but q_i has not yet been
dequeued. If Pr(q') < Pr(q^), q^'s process will attempt to insert q^ between q_i and q_{i+1}.
Process A' has modified q_i's next pointer, so that q^'s Compare&Swap will fail. Since q_i has not been
dequeued, the process should continue its search from q_i, which is what happens. If Pr(q') >= Pr(q^),
q^'s process can skip over q' and continue searching from q_{i+1}, which is what happens.
This scenario is illustrated in Figure 10.
Case b: A release lock operation R overtakes A and removes q i from the queue (i.e., R has set q i 's
Dq bit), and q i has not yet been returned to the queue (its Dq bit is still false). Since q i is not in the
lock queue, A is lost and must start searching again. Based on its observations of q i and q i+1 , A may have
decided to continue searching the queue or to commit its operation. In either case A sees the Dq bit set and
fails, so A starts again from the beginning of the queue. This scenario is illustrated in Figure 11.
Case c: A release lock operation R overtakes A and removes q i from the queue, and then q i is put
back in the queue by another acquire lock A'. If A tries to commit its operation, then the pointer in q i is
changed, so the Compare&Swap fails. Note that even if q i is pointing to q i+1 , the version numbers prevent
the decisive instruction from succeeding. If A continues searching, then there are two possibilities based on
the new value of P r(q lost and cannot find the correct place to insert -
q. This
condition is detected when the priority of q i is examined (the lines marked iii in Figure 5), and operation A
restarts from the head of the queue. If P can still find a correct place to insert -
past
q'
q'
Before A'
After A'
Continue A
Figure
10: A concurrent acquire lock A' succeeds before A
F
Before R
After R
Restart A
Figure
11: A concurrent release lock R succeeds before A
continues searching. This scenario is illustrated in Figure 12.
No matter what interference occurs, A always takes the right action. Therefore, Q == L after the
acquire lock decisive instruction.
To prove the theorem we use induction. Initially, Q = () and L points to nil, so Q == L is trivially
true. Suppose that the theorem is true before the i-th decisive instruction. If the i-th decisive instruction is
for an acquire lock operation, Lemma 2 shows Q == L after the i-th decisive instruction. If the i-th decisive
instruction is for a release lock operation, Lemma 1 shows Q == L after the i-th decisive instruction. Therefore,
the inductive step holds, and hence, Q == L.
4 Extensions
In this section we discuss a couple of simple extensions that increase the utility of the PR-lock algorithm.
4.1 Multiple Locks
As described, a record for a PR-lock can be used for one lock queue only (otherwise, a process might obtain
a lock other than the one it desired). If the real-time system has several critical sections, each with their
own locks (which is likely), each process must have a lock record for each lock queue, which wastes space.
Fortunately, a simple extension of the PR-lock algorithm allows a lock record to be used in many different
lock queues. We replace the Dq bit by a Dq string of l bits. If the Dq string evaluates to i > 0 when interpreted
Figure 12: Release lock R and acquire lock A' succeed before A (restart A if Pr(q^) > Pr(q_i); continue A if Pr(q^) <= Pr(q_i))
as a binary number, then the record is in the queue for lock i. If the Dq string evaluates to 0, then the
record is (probably) not in any queue. The acquire lock and release lock algorithms carry through by
modifying the test for being or not being in queue i appropriately.
We note that if a process sets nested locks, a new lock record must be used for each level of nesting.
Craig [10] presents a method for reusing the same record for nested locks.
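The Dq-string test might be encoded as follows. This sketch is ours: it uses an 8-bit string (l = 8), and the helper names are invented:

```c
#include <stdbool.h>
#include <stdint.h>

/* Dq string: 0 means "in no queue"; i > 0 means "in the queue for lock i". */
typedef uint8_t dq_string;          /* l = 8 bits => up to 255 distinct locks */

static bool in_queue_for(dq_string dq, uint8_t lock_id) {
    return dq == lock_id;           /* lock ids start at 1 */
}
static dq_string mark_enqueued(uint8_t lock_id) { return lock_id; }
static dq_string mark_dequeued(void)            { return 0; }
```

A traversing process then compares the Dq string against the id of the lock it is acquiring, instead of testing a single bit.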
4.2 Backing Out
If a process does not obtain the lock after a certain deadline, it might wish to stop waiting and continue
processing. The process must first remove its record from the lock queue. To do so, the process follows these
steps:
1. Find the preceding record in the lock queue, using the method from the algorithm for the acquire lock
operation. If the process determines that its record is at the head of the lock queue, return with a
"lock obtained" value.
2. Set the Dq bit (Dq string) of the process' record to "Dequeued".
3. Perform a compare and swap of the predecessor record's next pointer with the process' next pointer.
If the Compare&swap fails, go to 1. If the Compare&swap succeeds, return with a "lock released"
value.
In step 3, the predecessor record's next pointer is set to the value of the process's successor. If the process removes itself from the queue without
obtaining the lock, the Compare&swap is the decisive instruction. If the Compare&swap fails, the predecessor
might have released the lock, or a third process has enqueued itself as the predecessor. The process can't
distinguish between these possibilities, so it must re-search the lock queue.
5 Simulation Results
We simulated the execution of the PR-lock algorithm in PROTEUS, which is a configurable multiprocessor
simulator [4]. We also implemented the MCS-lock and Markatos' lock to demonstrate the difference in the
acquisition and release time characteristics.
In the simulation, we use a multiprocessor model with eight processors and a global shared memory.
Each processor has a local cache memory of 2048 bytes. In PROTEUS, the units of execution time
are cycles. Each process executes for a uniformly randomly distributed time, in the range 1 to 35 cycles,
before it issues an acquire-lock request. After acquiring the lock, the process stays in the critical section
for a fixed number of cycles (150) plus another uniformly randomly distributed number (1 to 400) of cycles
before releasing the lock. This procedure is repeated fifty times. The average number of cycles taken to
acquire a lock by a process is then computed. PROTEUS simulates parallelism by repeatedly executing a
processor's program for a time quantum, Q. In our simulations, Q = 10. The priority of a process is set equal
to the process/processor number; the lower the number, the higher the priority of a process.
Figures 13 and 14 show the average time taken for a process to acquire a lock using the MCS-lock
algorithm and the PR-lock algorithm, respectively. A process using MCS-lock algorithm has to wait in the
FIFO queue for all other processes in every round. However, a process using the PR-lock algorithm will
wait for a time that is proportional to the number of higher priority processes. As an example, the highest
and second highest priority processes on average wait for about one critical section period. We note that
the two highest priority processes have about the same acquire-lock execution time because they alternate
in acquiring the lock. Only after both of these processes have completed their execution can the third and
fourth highest priority processes obtain the lock. Figure 14 clearly demonstrates that the average acquisition
time for a lock using PR-lock is proportional to the process priorities, whereas the average acquisition time is
proportional to the number of processes in case of the MCS-lock algorithm. This feature makes the PR-lock
algorithm attractive for use in real time systems.
In Figure 15, we show the average time taken for a process to acquire the lock using Markatos' algorithm.
The same prioritized lock-acquisition behavior is shown, but the average time to acquire a lock is 50% greater
than when the PR-lock is used. At first this result is puzzling, because Markatos' lock performs the majority
of its work when the lock is released and the PR-lock performs its work when the lock is acquired. However,
the time to release a lock is part of the time spent in the critical section, and the time to acquire a lock
depends primarily on time spent in the critical section by the preceding lock holders. Thus, the PR-lock
allows much faster access to the critical section. As we will see, the PR-lock also allows more predictable
access to the critical section.
Figure 16 shows the cache-hit ratio at each instance of time on all the processors. Most of the time the
cache-hit ratio is 95% or higher on each of the processors, and we found an average cache-hit rate of 99.72%
to 99.87%. Thus, the PR-lock generates very little network or memory contention in spite of the processes
using busy-waiting.
Finally, we compared the time required to release a lock using both the PR-lock and Markatos' lock. The
results for the PR-lock are shown in Figure 17 and for Markatos' lock in Figure 18. The time to release a lock using PR-lock
is small, and is consistent for all of the processes. Releasing a lock using Markatos' lock requires significantly
more time. Furthermore, in our experiments a high priority process is required to spend significantly more
Figure 13: Lock acquisition time for the MCS-lock Algorithm
time releasing a lock than is required for a low priority process. This behavior is a result of the way that
the simulation was run. When high priority processes are executing, all low priority processes are blocked in
the queue. As a result, many records must be searched when a high priority process releases a lock. Thus,
a high priority process does work on behalf of low priority processes. The time required for a high priority
process to release its lock depends on the number of blocked processes in the queue. The result is a long
and unpredictable amount of time required to release a lock. Since the lock must be released before the next
process can acquire the lock, the time required to acquire a lock is also made long and unpredictable.
6 Conclusion
In this paper, we present a priority spin-lock synchronization algorithm, the PR-lock, which is suitable for
real-time shared-memory multiprocessors. The PR-lock algorithm is characterized by a prioritized lock ac-
quisition, a low release overhead, very little bus-contention, and well-defined semantics. Simulation results
show that the PR-lock algorithm performs well in practice. This priority lock algorithm can be used as
presented for mutually exclusive access to a critical section or can be used to provide higher level synchronization
constructs such as prioritized semaphores and monitors. The PR-lock maintains a pointer to the
record of the lock holder, so the PR-lock can be used to implement priority inheritance protocols. Finally,
the PR-lock algorithm can be adapted for use as a single-dequeuer, multiple-enqueuer parallel priority queue.
Figure 14: Lock acquisition time for the PR-lock Algorithm
Figure 15: Lock acquisition time for Markatos' Algorithm
Figure 16: Cache hit ratio for the PR-lock Algorithm
Figure 17: Lock release time for the PR-Lock Algorithm
Figure 18: Lock release time for Markatos' Algorithm
While several prioritized spin locks have been proposed, the PR-lock has the following advantages:
- The algorithm is contention free.
- A higher priority process does not have to work for a lower priority process while releasing a lock. As
a result, the time required to acquire and release a lock is fast and predictable.
- The PR-lock has a well-defined acquire-lock point.
- The PR-lock maintains a pointer to the process using the lock, which facilitates implementing priority
inheritance protocols.
For future work, we are interested in prioritizing access to other operating system structures to make
them more appropriate for use in a real-time parallel operating system.
--R
The performance of spin lock alternatives for shared memory multiprocessors.
Concurrent Programming Principles and Practice.
Mutual exclusion with linear waiting using binary shared variables.
Distributed snapshots: Determining global states of distributed systems.
Dynamic priority ceiling: A concurrency control protocol for real-time systems
A priority ceiling protocol for multiple-instance resources
Scheduling algorithms for multiprogramming in a hard real-time environment
Queuing spin lock alternatives to support timing predictability.
Characterizing memory hotspots in a shared memory mimd machine.
Two algorithms for mutual exclusion in real-time distributed computer systems
A methodology for implementing highly concurrent data objects.
A correctness condition for concurrent objects.
A performance analysis of minimum laxity and earliest deadline in a real-time system
Efficient synchronization on multiprocessors with shared memory.
Multiprocessor synchronization primitives with priorities.
Algorithms for scalable synchronization on shared-memory multiprocessors
Predictable synchronization mechanisms for real-time systems
Priority inheritance protocols: An approach to real-time synchronization
Concurrent search structure algorithms.
Tutorial Hard Real-Time Systems
Locking without blocking: Making lock based concurrent data structure algorithms nonblocking.
--TR
--CTR
Prasad Jayanti, f-arrays: implementation and applications, Proceedings of the twenty-first annual symposium on Principles of distributed computing, July 21-24, 2002, Monterey, California
James H. Anderson , Yong-Jik Kim , Ted Herman, Shared-memory mutual exclusion: major research trends since 1986, Distributed Computing, v.16 n.2-3, p.75-110, September | spin lock;priority queue;mutual exclusion;parallel processing;real-time system |
264530 | An Optimal Algorithm for the Angle-Restricted All Nearest Neighbor Problem on the Reconfigurable Mesh, with Applications. | AbstractGiven a set S of n points in the plane and two directions $r_1$ and $r_2,$ the Angle-Restricted All Nearest Neighbor problem (ARANN, for short) asks to compute, for every point p in S, the nearest point in S lying in the planar region bounded by two rays in the directions $r_1$ and $r_2$ emanating from p. The ARANN problem generalizes the well-known ANN problem and finds applications to pattern recognition, image processing, and computational morphology. Our main contribution is to present an algorithm that solves an instance of size n of the ARANN problem in O(1) time on a reconfigurable mesh of size nn. Our algorithm is optimal in the sense that $\Omega\;(n^2)$ processors are necessary to solve the ARANN problem in O(1) time. By using our ARANN algorithm, we can provide O(1) time solutions to the tasks of constructing the Geographic Neighborhood Graph and the Relative Neighborhood Graph of n points in the plane on a reconfigurable mesh of size nn. We also show that, on a somewhat stronger reconfigurable mesh of size $n\times n^2,$ the Euclidean Minimum Spanning Tree of n points can be computed in O(1) time. | Introduction
Recently, in an effort to enhance both its power and flexibility, the mesh-connected architecture
has been endowed with various reconfigurable features. Examples include the bus
automaton [21, 22], the reconfigurable mesh [15], the mesh with bypass capability [8], the
content addressable array processor [29], the reconfigurable network [2], the polymorphic processor
array [13, 14], the reconfigurable bus with shift switching [11], the gated-connection
network [23, 24], and the polymorphic torus [9, 10]. Among these, the reconfigurable mesh
has emerged as a very attractive and versatile architecture.
In essence, a reconfigurable mesh (RM) consists of a mesh augmented by the addition
of a dynamic bus system whose configuration changes in response to computational and
communication needs. More precisely, a RM of size n × m consists of nm identical SIMD
processors positioned on a rectangular array with n rows and m columns. As usual, it is
assumed that every processor knows its own coordinates within the mesh: we let P(i, j)
denote the processor placed in row i and column j, with P(1, 1) in the north-west corner of
the mesh.
Each processor P(i, j) is connected to its four neighbors P(i − 1, j), P(i + 1, j), P(i, j − 1),
and P(i, j + 1), provided they exist, and has 4 ports denoted by N, S, E, and W in Figure 1.
Local connections between these ports can be established, under program control, creating
a powerful bus system that changes dynamically to accommodate various computational
needs. We assume that the setting of local connection is destructive in the sense that setting
a new pattern of connections destroys the previous one.
Most of the results in this paper assume a model that allows at most two connections
to be set in each processor at any one time. Furthermore, these two connections must
involve disjoint pairs of ports as illustrated in Figure 2. Some other models proposed in
the literature allow more than two connections to be set in every processor [9, 10]. One of
our results uses such a model. In accord with other workers [9, 10, 13-16, 21] we assume
that communications along buses take O(1) time. Although inexact, recent experiments
with the YUPPIE and the GCN reconfigurable multiprocessor system [16, 23, 24] seem to
indicate that this is a reasonable working hypothesis. It is worth mentioning that at least
Figure 1: A reconfigurable mesh of size 4 × 5
Figure 2: Examples of allowed connections and corresponding buses
two VLSI implementations have been performed to demonstrate the feasibility and benefits
of the two-dimensional reconfigurable mesh: one is the YUPPIE (Yorktown Ultra-Parallel
Polymorphic Image Engine) chip [9, 10, 16] and the other is the GCN (Gated-Connection
Network) chip [23, 24]. These two implementations suggested that the broadcast delay,
although not constant, is very small. For example, only 16 machine cycles are required to
broadcast on a 10 6 -processor YUPPIE. The GCN has further shortened the delay by adopting
pre-charged circuits. Newer developments seem to suggest the feasibility of implementations
involving the emerging optical technology.
One of the fundamental features that contributes to a perceptionally relevant description
useful in shape analysis is the distance properties among points in a planar set. In
this context, nearest- and furthest-neighbor computations are central to pattern recognition
classification techniques, image processing, computer graphics, and computational morphology
[4, 20, 26, 27]. In image processing, for example, proximity is a simple and important
metric for potential similarities of objects in the image space. In pattern recognition, the
same concept appears in clustering, and computing similarities between sets [4]. In mor-
phology, closeness is often a valuable tool in devising efficient algorithms for a number of
seemingly unrelated problems [26].
A classic problem in this domain involves computing for every point in a given set S, a
point that is closest to it: this problem is known as the All-Nearest Neighbor problem (ANN,
for short) and has been well studied in both the sequential and parallel settings [1, 4, 20, 26]. Recently,
Jang and Prasanna [5] provided an O(1) time algorithm for solving the ANN problem for n
points in the plane on a RM of size n × n.
In this paper we address a generalization of the ANN problem, namely the Angle-Restricted
All Nearest Neighbor problem (ARANN, for short). Just as the ANN problem,
the ARANN problem has wide-ranging applications in pattern recognition, image processing,
and morphology.
For points p and q in the plane, we let d(p, q) stand for the Euclidean distance between
p and q. Further, we say that q is (r1, r2)-dominated by p if q lies inside of the closed planar
region determined by two rays in directions r1 and r2 emanating from p. In this terminology,
a point q in S is said to be the (r1, r2)-nearest neighbor of p if q is (r1, r2)-dominated by p
and d(p, q) = min{d(p, s) | s ∈ S is (r1, r2)-dominated by p}. The ARANN problem
involves determining the (r1, r2)-nearest neighbor of every point in S.
Refer to Figure 3 for an illustration. Here, p2 is (0, π/3)-dominated by p1, but is not (0, π/3)-
dominated by p4. The (0, π/3)-nearest neighbor of p1 is p2.
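These definitions translate into a direct O(n²)-work sequential check, which can serve as a specification (this is not the O(1)-time RM algorithm of the paper, and the sample coordinates are hypothetical, only loosely in the spirit of Figure 3):

```python
import math

def dominated(p, q, r1, r2):
    # True if q (distinct from p) lies in the closed planar region bounded by
    # the rays in directions r1 and r2 (0 <= r1 < r2 <= 2*pi) emanating from p.
    if q == p:
        return False
    theta = math.atan2(q[1] - p[1], q[0] - p[0]) % (2 * math.pi)
    return r1 <= theta <= r2 or r1 <= theta + 2 * math.pi <= r2

def arann(points, r1, r2):
    # Directed edge p -> q whenever q is the (r1, r2)-nearest neighbor of p.
    edges = {}
    for p in points:
        cand = [q for q in points if dominated(p, q, r1, r2)]
        if cand:
            edges[p] = min(cand, key=lambda q: math.dist(p, q))
    return edges

# Hypothetical coordinates:
pts = [(0.0, 0.0), (1.0, 1.0), (3.0, 2.0), (2.0, -1.0)]
e = arann(pts, 0.0, math.pi)          # restrict to the upper half-plane
assert e[(0.0, 0.0)] == (1.0, 1.0)    # p1 -> p2
assert e[(1.0, 1.0)] == (3.0, 2.0)    # p2 -> p3
assert (3.0, 2.0) not in e            # p3 is an isolated vertex
```

The returned dictionary is exactly the edge set of the ARANN graph defined below.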
A class of related problems in pattern recognition and morphology involves associating
a certain graph with the set S. This graph is, of course, application-specific. For example,
in pattern recognition one is interested in the Euclidean Minimum Spanning Tree of S, the
Relative Neighborhood Graph of S, the Geographic Neighborhood Graph, the Symmetric
Furthest Neighbor Graph, the Gabriel Graph of S, and the Delaunay Graph of S, to name
a few [20, 25-27].
The Euclidean Minimum Spanning Tree of S, denoted by EMST(S), is the minimum
spanning tree of the weighted graph with vertices S and with the edges weighted by the
corresponding Euclidean distance. In other words, the edge-set is {(p, q, d(p, q)) | p, q ∈ S},
where (p, q, d(p, q)) is the edge connecting points p and q having weight d(p, q).
Figure 3: Illustrating (0, π)-domination and the corresponding ARANN graph
Figure 4: Lune of p and q
The Relative Neighborhood Graph, RNG(S) of a set S of points in the plane has been
introduced by Toussaint [26] in an effort to capture many perceptually relevant features of
the set S. Specifically, given a set S of points in the plane, RNG(S) has for vertices the points
of S together with an edge between p and q whenever d(p, q) ≤ max{d(p, s), d(q, s)} for
every s ∈ S. An
equivalent definition states that two vertices p, q are joined by an edge in RNG(S) if no
other points of S lie inside LUNE(p, q), the lune of p, q defined as the set of points in the
plane enclosed in the region determined by two disks with radius d(p, q) centered at p and
q, respectively. Refer to Figure 4 for an illustration.
Figure 5: Illustrating EMST, RNG, GNG, and GNG_1
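The lune-emptiness characterization yields a brute-force O(n³) sequential check, given here for specification purposes only:

```python
import math
from itertools import combinations

def rng(points):
    # (p, q) is an RNG edge iff no third point s satisfies
    # d(p, s) < d(p, q) and d(q, s) < d(p, q), i.e. LUNE(p, q) is empty.
    edges = []
    for p, q in combinations(points, 2):
        dpq = math.dist(p, q)
        if not any(max(math.dist(p, s), math.dist(q, s)) < dpq
                   for s in points if s != p and s != q):
            edges.append((p, q))
    return edges

# Three collinear points: the long pair's lune contains the middle point.
pts = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
assert sorted(rng(pts)) == [((0.0, 0.0), (1.0, 0.0)), ((1.0, 0.0), (2.0, 0.0))]
```

A point s kills the edge (p, q) exactly when it is closer than d(p, q) to both endpoints, which is the lune-membership test written out with max.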
For each i (1 ≤ i ≤ 6), let Gi = (S, Ei) be the undirected graph with vertex-set S and with edge-set
Ei = {(p, q) | q is the ((i − 1)π/3, iπ/3)-nearest neighbor of p}. The GNG of S, denoted GNG(S),
is the graph with vertex-set S and whose edges are ∪(1 ≤ i ≤ 6) Ei. Refer to Figure 5 for an
example of the concepts defined above.
Given a set S of n points in the plane and two directions r1 and r2, the Angle-Restricted
All Nearest Neighbor graph of S, denoted ARANN(S), is the directed graph whose vertices
are the points in S; the points p and q are linked by a directed edge from p to q whenever
q is the (r1, r2)-nearest neighbor of p. The reader will not fail to note that the problem of
computing the (r1, r2)-closest neighbor of each point in S and the problem of computing the
graph ARANN(S) of S are intimately related, in the sense that the solution to either of
them immediately yields a solution to the other. For this reason, in the remaining part of
this work we shall focus on the problem of computing the graph ARANN(S) and we shall
refer to this task, informally, as solving the ARANN problem.
Referring again to Figure 3, the corresponding ARANN graph contains the directed edges
(p1, p2) and (p2, p3), because p2 is the (0, π)-closest neighbor of p1 and p3 is the (0, π)-closest
neighbor of p2. The points p3 and p4 have no (0, π)-closest neighbor and so, in the ARANN
graph they show up as isolated vertices.
Several sequential algorithms for computing the EMST, the GNG, the ARANN, and
the RNG graphs of a set of points have been proposed in the literature [3, 7, 25, 26]. In
particular, [3] has shown that the ARANN graph of a set of n points in the plane can be
computed sequentially in O(n log n) time.
The main contribution of this work is to present an algorithm to compute the ARANN
graph of n points in O(1) time on a RM of size n × n. We also show that our algorithm is
optimal in the sense that n² processors are necessary to compute the ARANN of n points in
O(1) time. It is not hard to see that the ANN problem is easier than the ARANN, because
the ANN corresponds to the computation of the ARANN for the particular directions 0 and 2π.
Our second main contribution is to extend our ARANN algorithm to solve in O(1) time the
problems of computing the Geographic Neighborhood Graph, the Relative Neighborhood
Graph, and the Euclidean Minimum Spanning Tree of a set S of n points in the plane.
As we already mentioned, Jang and Prasanna [5] have shown that the All Nearest Neighbor
problem for a set S of n points in the plane can be solved in O(1) time on a RM of size
n × n. The key idea of the algorithm in [5] is as follows: In the first stage, the points are
partitioned into n^{1/4} horizontal groups of n^{3/4} points each by n^{1/4} − 1 horizontal lines and
into n^{1/4} vertical groups of n^{3/4} points each by n^{1/4} − 1 vertical lines. For each point, the nearest
neighbor over all the points that are in the same horizontal or vertical group is retained as
a candidate for the nearest neighbor over the whole set of points. Having computed the set
of candidates, the second stage of the algorithm in [5] uses the fact that the candidates of at
most 8√n points are not the correct nearest neighbors over all the points. So, by computing
the nearest neighbor of these exceptional 8√n points, the ANN problem can be solved. If the
angle is restricted, then this algorithm does not work, because it is possible that none of the
candidates retained in stage 1 is the actual angle-restricted nearest neighbor. This situation
is depicted in Figure 6. We will develop new tools for dealing with the ARANN problem.
These tools are interesting in their own right and may be of import in the resolution of other
related problems.
At the same time, it is clear that by using the ARANN algorithm, the ANN problem and
the task of computing the GNG can be solved in O(1) time on an n × n RM. Furthermore, the
RNG can be computed in O(1) time on an n × n RM and the EMST can be computed in O(1)
time on an n × n² RM. These algorithms are based on the fact that the RNG is a subgraph of the
GNG and the EMST is a subgraph of the RNG [25, 26]. In Section 2 we demonstrate a lower bound
on the size of a reconfigurable mesh necessary to compute the ARANN, GNG, RNG, and
Figure 6: None of the candidates of stage 1 are the true (0, π)-nearest neighbors
EMST in O(1) time. Section 3 presents basic algorithms used by our ARANN algorithm.
Section 4 presents our optimal ARANN algorithm and Section 5 presents algorithms for
the GNG, RNG, and EMST. Finally, Section 6 offers concluding remarks and poses open
problems.
2 A Lower Bound
Let us consider the ARANN problem for the directions −π/2 and π/2. Consider a set
S = {(a1, 0), (a2, 0), ..., (an, 0)} of n points on the x-axis such that point (ai, 0)
is assigned to the i-th column of an RM of size m × n. After the computation of the ARANN,
the processors of the i-th column know the (−π/2, π/2)-nearest neighbor of (ai, 0). Assume
that a1 < a(n/2+1) < a2 < a(n/2+2) < ... < a(n/2) < an, so that for each point
(ai, 0) (1 ≤ i ≤ n/2), its (−π/2, π/2)-nearest neighbor is the point (a(i+n/2), 0). Therefore,
information about the n/2 points (a(n/2+1), 0), (a(n/2+2), 0), ..., (an, 0) must be transferred through
the m links that connect the n/2-th column and the (n/2 + 1)-th column of the RM. Hence,
Ω(n/m) time is required to solve the ARANN problem. Therefore, we have the following
result.
Theorem 2.1 Ω(n²) processors are necessary to solve an instance of size n of the ARANN
problem on the RM in O(1) time.
Since the proof above can be applied to the GNG, RNG, and EMST, we have
Corollary 2.2 Ω(n²) processors are necessary to compute the GNG, RNG, and the EMST
of n points in O(1) time on a RM.
3 Basic Algorithms
This section reviews basic computational results on reconfigurable meshes that will be used
in our subsequent algorithms. Recently, Lin et al. [12], Ben-Asher et al. [2], Jang and
Prasanna [6], and Nigam and Sahni [17] have proved variants of the following result.
Lemma 3.1 A set of n items stored one per processor in one row or one column of a
reconfigurable mesh of size n \Theta n can be sorted in O(1) time.
For a sequence a1, a2, ..., an of n numbers, its prefix-maxima is the sequence
b1, b2, ..., bn, where bi = max{a1, a2, ..., ai} for every i (1 ≤ i ≤ n). The prefix-minima can be defined
in the same way. For the prefix-maxima and prefix-minima, we have
Lemma 3.2 Given a sequence of n numbers stored one per processor in one row of a reconfigurable
mesh of size n × n^ε, its prefix-maxima (resp. prefix-minima) can be computed in
O(1) time for every fixed ε > 0.
Proof. Olariu et al. [18] have shown how to compute the maximum of n items in O(1) time on a
reconfigurable mesh of size n × n. Essentially, if we assign n processors to each number, we
can determine if there is a number larger than it. If such a number is not found, it is the
maximum. Since an n × n RM can find the maximum of n elements in O(1) time, an easy
extension shows that an n × n² RM can compute the prefix-maxima of n numbers in O(1)
time.
Based on this idea, we can devise an O(1) time algorithm for an n × n^ε RM. Assume that
each number is assigned to a row of the platform. Partition the sequence a1, a2, ..., an
of n numbers into n^{ε/2} sequences A1, A2, ..., A(n^{ε/2}), each of which contains n^{1−ε/2} numbers.
Next, compute the (local) prefix-maxima of each Ai (1 ≤ i ≤ n^{ε/2}) on an n^{1−ε/2} × n^ε submesh
recursively. Let mi be the maximum within Ai. Further, compute the (global) prefix-maxima
of the sequence m1, m2, ..., m(n^{ε/2}). This can be done in O(1) time
by the O(1) time algorithm discussed above. Finally, for each number aj (∈ Ai), compute the
maximum of its local prefix-maximum and the global prefix-maximum max{m1, ..., m(i−1)},
which corresponds to the prefix-maximum of aj. Since the depth of the recursion is O(1/ε),
the prefix-maxima can be computed in O(1) time. □
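The recursion in this proof can be mimicked sequentially; in the sketch below the group count g stands in for n^{ε/2}, and all names are ours:

```python
# Sequential sketch of the recursion in the proof of Lemma 3.2: split the
# sequence into g groups, take local prefix-maxima per group recursively,
# then combine with the maxima of preceding groups (the "global" stage).

def prefix_maxima(a, g=4):
    n = len(a)
    if n <= g:                               # base case: direct scan
        out, m = [], float("-inf")
        for x in a:
            m = max(m, x)
            out.append(m)
        return out
    size = -(-n // g)                        # ceil(n / g) elements per group
    groups = [a[i:i + size] for i in range(0, n, size)]
    local = [prefix_maxima(grp, g) for grp in groups]   # local stage
    out, best_before = [], float("-inf")     # max over all preceding groups
    for grp_max in local:
        out.extend(max(best_before, x) for x in grp_max)
        best_before = max(best_before, grp_max[-1])
    return out

a = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
assert prefix_maxima(a) == [3, 3, 4, 4, 5, 9, 9, 9, 9, 9]
```

On the RM the local stage runs in parallel on the submeshes and the combine stage is the O(1)-time prefix-maxima of the group maxima; here both are simply simulated in sequence.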
The prefix-sums of n binary values can be computed in a similar fashion. The reader is
referred to [19] for the details.
Lemma 3.3 For every fixed ε > 0, the prefix-sums of a binary sequence
of n values can be computed in O(1) time on a
reconfigurable mesh of size n × n^ε.
Our next result assumes a reconfigurable mesh wherein a processor can connect, or fuse,
an arbitrary number of ports [28]. On this platform, we will show basic graph algorithms that
are essentially the same as those previously presented [28]. However, the number of processors
is reduced by careful implementation of the algorithms on the RM. A graph G = (V, E)
consists of a set V of n vertices and an edge-set E = {(u1, v1), (u2, v2), ..., (ue, ve)} of
e edges, where ui, vi ∈ V. A graph G = (V, E) is said to be numbered if the
vertex set is V = {1, 2, ..., n}. A graph G = (V, E) is weighted if each edge (ui, vi) has a
weight wi, a positive number. The reachability
problem [28] for a vertex u of G involves determining all the vertices of G that can be reached
by a path from u. Here, u is referred to as the source.
Lemma 3.4 Given a numbered graph G = (V, E) and a node u ∈ V, the single source
reachability problem can be solved in O(1) time on an n × e RM, if each edge is assigned to
a column of the RM.
Proof. Let (ui, vi), (1 ≤ i ≤ e), be the edge assigned to the i-th column of a RM of
size n × e. For each i, P(ui, i) and P(vi, i) connect their four ports into one. All the other
Figure 7: Bus configuration for the reachability problem
processors connect their E and W ports as well as their N and S ports in pairs.
We note that the bus configuration thus obtained corresponds to the graph in the sense
that the horizontal buses in row ui and row vi are connected through the vertical bus in
column i, as illustrated in Figure 7.
Next, processor P(u, 1) sends a signal from its E port and every processor P(v, 1) reads
its E port. It is not hard to see that vertex v is reachable from u if and only if processor
P(v, 1) has received the signal sent by P(u, 1). Therefore, the reachability problem can be
solved in O(1) time on an n × e reconfigurable mesh that allows all ports to be fused together. □
By using the single source reachability algorithm, we have the following lemma.
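The bus configuration of this proof can be simulated sequentially: fusing the row-u and row-v buses through column i makes the connected bus components coincide with the connected components of the graph, so "the signal reaches v" is just connectivity. A union-find sketch (illustrative, not the RM implementation):

```python
# Sequential simulation of the Lemma 3.4 bus configuration: each edge (u, v)
# fuses the row-u and row-v buses through its column, so a signal from the
# source reaches exactly its connected component. Union-find stands in for
# the physical buses.

def reachable_from(n, edges, source):
    parent = list(range(n + 1))              # vertices are numbered 1..n

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]    # path halving
            x = parent[x]
        return x

    for u, v in edges:                       # fuse row buses through columns
        parent[find(u)] = find(v)
    return {v for v in range(1, n + 1) if find(v) == find(source)}

assert reachable_from(5, [(1, 2), (2, 3), (4, 5)], 1) == {1, 2, 3}
```

On the RM the whole computation is a single broadcast on the fused buses; the union-find loop merely reproduces its outcome.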
Lemma 3.5 Given a numbered weighted graph G with n vertices and e edges, its Minimum
Spanning Tree can be computed in O(1) time on a reconfigurable mesh of size e × ne, provided
that every processor can fuse all its ports together.
Proof. For each i, (1 ≤ i ≤ e), let Gi = (V, Ei) be the graph such that Ei = {(uj, vj) |
(wi, ui, vi) is lexicographically larger
than (wj, uj, vj)}. Then, for each i, it is determined whether ui and vi are reachable in the graph
Gi. If so, then (ui, vi) is not an edge of the MST, otherwise it is an MST edge. By Lemma
3.4 the reachability can be determined in O(1) time on an e × n RM, so the MST edges can
be determined in O(1) time on an e × ne RM, as claimed. □
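The rule in this proof (an edge is excluded exactly when its endpoints are already connected by lexicographically smaller edges) can be checked directly in sequence; the helper names are ours:

```python
# Sequential check of the Lemma 3.5 rule: (u_i, v_i) is an MST edge iff
# u_i and v_i are NOT connected in the graph G_i of edges whose
# (weight, u, v) triple is lexicographically smaller.

def reach(n, edges, source):
    parent = list(range(n + 1))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for a, b in edges:
        parent[find(a)] = find(b)
    return {v for v in range(1, n + 1) if find(v) == find(source)}

def mst_edges(n, edges):
    # edges: list of (w, u, v) with vertices 1..n; all triples distinct.
    result = []
    for (w, u, v) in edges:
        smaller = [(a, b) for (w2, a, b) in edges if (w2, a, b) < (w, u, v)]
        if v not in reach(n, smaller, u):
            result.append((w, u, v))
    return result

# Triangle 1-2-3: the heaviest cycle edge is the one excluded.
es = [(1.0, 1, 2), (2.0, 2, 3), (3.0, 1, 3)]
assert mst_edges(3, es) == [(1.0, 1, 2), (2.0, 2, 3)]
```

The lexicographic order on (w, u, v) breaks weight ties consistently, which is what makes the per-edge reachability tests independent and hence executable in parallel on the RM.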
4 An Optimal Algorithm for the ARANN Problem
Consider a collection S of n points in the plane and directions r1 and r2 with
0 ≤ r1 < r2 ≤ 2π. The main goal of this section is to present an optimal O(1) time algorithm for computing
the corresponding graph ARANN(S).
We begin our discussion by pointing out a trivial suboptimal solution to the problem at
hand.
Lemma 4.1 For every fixed ε > 0, the task of solving an arbitrary instance of size n of the
ARANN problem can be performed in O(1) time on a RM of size n × n^{1+ε}.
Proof. Partition the RM into n submeshes of size n × n^ε and assign each submesh to a
point. Each point p finds, in its own submesh, the nearest neighbor q over all the points
(r1, r2)-dominated by p, and reports the edge (p, q) as an edge of ARANN(S). We note that
the task of finding the nearest neighbor of every point in S can be seen as an instance of the
(prefix) minimum problem and can be solved in O(1) time by the algorithm of Lemma 3.2. □
In the remainder of this section we will show how to improve this naive algorithm to
run in O(1) time on a RM of size n × n. First, assume that the given directions are 0
and r, with 0 < r < π/2; that is, the angle of the closed region is acute. Consider a set
S = {p1, p2, ..., pn} of n points in the plane stored one per processor in the first row of a RM
of size n × n such that for all i, (1 ≤ i ≤ n), processor P(1, i) stores point pi. The details of
our algorithm are spelled out as follows.
Step 1. Sort the points in S by y-coordinate and partition them into n^{1/3} subsets Y1, Y2, ..., Y(n^{1/3})
of n^{2/3} points each, such that the y-coordinate of all points in Yi is smaller than the
y-coordinate of all points in Y(i+1);
Step 2. For each point p in S compute x′, the x-intercept of the line in direction r through p
(that is, x′ = x − y/tan r, where x and y
are the x- and y-coordinates of p), and sort the points by x′. Next, partition the points into
n^{1/3} subsets X1, X2, ..., X(n^{1/3}) of n^{2/3} points each, such that for every choice of points p in
Xi and q in X(i+1), x′(p) ≤ x′(q);
Step 3. For each point p in Xi, (1 ≤ i ≤ n^{1/3}), find its (0, r)-nearest neighbor X(p) over all
points in Xi;
Step 4. For each point p in Yi, (1 ≤ i ≤ n^{1/3}), find its (0, r)-nearest neighbor Y(p) over all
points in Yi;
Step 5. For each i and j, (1 ≤ i, j ≤ n^{1/3}), let Z(i,j) = (X(i+1) ∪ ... ∪ X(n^{1/3})) ∩ (Y(j+1) ∪ ... ∪ Y(n^{1/3})). For each point
p in Xi ∩ Yj, find its (0, r)-nearest neighbor Z(p) over all points in Z(i,j);
Step 6. For each point p, find the closest of the three points X(p), Y(p), and Z(p), and
return it as the (0, r)-nearest neighbor of p.
Figure 8: Partitioning into X's and Y's
We refer the reader to Figure 8 for an example, where n^{1/3} = 3. (Actually, 4, but this inconsistency is nonessential.)
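Steps 1-6 can be simulated sequentially. The sketch below follows our reading of the garbled original: the X-groups are formed by the intercept x′ = x − y/tan r, and Z(i,j) collects the groups strictly beyond both Xi and Yj; the parameter g plays the role of n^{1/3}, and all names are ours:

```python
# Sequential simulation of the six-step decomposition for directions 0 and r
# (0 < r < pi/2). Every (0, r)-dominated point has both a larger y and a
# larger x' than the dominating point, so X(p), Y(p), Z(p) together cover
# all candidates.

import math

def dominated(p, q, r):
    if q == p:
        return False
    theta = math.atan2(q[1] - p[1], q[0] - p[0])
    return 0.0 <= theta <= r

def nearest(p, cands, r):
    cands = [q for q in cands if dominated(p, q, r)]
    return min(cands, key=lambda q: math.dist(p, q)) if cands else None

def arann_decomposed(pts, r, g):
    size = -(-len(pts) // g)                      # points per group
    yrank = {p: i for i, p in enumerate(sorted(pts, key=lambda p: p[1]))}
    xprime = lambda p: p[0] - p[1] / math.tan(r)  # x-intercept (Step 2)
    xrank = {p: i for i, p in enumerate(sorted(pts, key=xprime))}
    xi = lambda p: xrank[p] // size               # index of p's X-group
    yi = lambda p: yrank[p] // size               # index of p's Y-group
    out = {}
    for p in pts:
        cands = [nearest(p, [q for q in pts if xi(q) == xi(p)], r),  # Step 3
                 nearest(p, [q for q in pts if yi(q) == yi(p)], r),  # Step 4
                 nearest(p, [q for q in pts                          # Step 5
                             if xi(q) > xi(p) and yi(q) > yi(p)], r)]
        cands = [q for q in cands if q is not None]
        if cands:
            out[p] = min(cands, key=lambda q: math.dist(p, q))       # Step 6
    return out

# Agreement with the brute-force definition on random points:
import random
random.seed(1)
pts = [(random.random(), random.random()) for _ in range(30)]
r = math.pi / 3
brute = {p: nearest(p, pts, r) for p in pts}
brute = {p: q for p, q in brute.items() if q is not None}
assert arann_decomposed(pts, r, 3) == brute
```

The final assertion checks the correctness of the decomposition itself: every dominated point falls in p's X-group, p's Y-group, or a group strictly beyond both, so the minimum of the three candidates is the true (0, r)-nearest neighbor.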
Our next goal is to show that the algorithm we just presented can be implemented to
run in O(1) time on a RM of size n × n. By virtue of Lemma 3.1, Steps 1 and 2 can be
completed in O(1) time on a RM of size n × n. In Step 3, a submesh of size n × n^{2/3} can
be assigned to each Xi. Consequently, Step 3 can be completed in O(1) time by the naive
algorithm of Lemma 4.1. In the same way, Step 4 can be implemented to run in O(1) time.
Step 6 involves only local computation and can be performed, in the obvious way, in O(1)
time.
Figure 9: Illustrating Maxima(S) and Minima(S) for directions 0 and π/3
The remainder of this section is devoted to showing that with a careful implementation
Step 5 will run in O(1) time. We shall begin by presenting a few technical results that are
key in understanding why our implementation works.
Consider, as before, a set S = {p1, p2, ..., pn} of n points. A point pi in S is a (0, r)-maximal
(resp. minimal) point of S if pi is not (0, r)-dominated by (resp. does not (0, r)-dominate)
any other point in S. We shall use Maxima(S) (resp. Minima(S)) to denote the chain of
all maximal points in S specified in counter-clockwise order (resp. all minimal points in
clockwise order). These concepts are illustrated in Figure 9 for the directions 0 and π/3.
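These definitions admit a brute-force sequential check (the paper computes the chains in O(1) time on the RM; the function names below are ours and the ordering of the chains is ignored):

```python
# Brute-force Maxima(S) and Minima(S) under (0, r)-domination, written
# directly from the definitions in the text.

import math

def dominated(p, q, r):
    # q lies in the closed wedge between directions 0 and r emanating from p.
    if q == p:
        return False
    theta = math.atan2(q[1] - p[1], q[0] - p[0])
    return 0.0 <= theta <= r

def maxima(S, r):
    # p is (0, r)-maximal if no other point of S dominates it.
    return [p for p in S if not any(dominated(q, p, r) for q in S)]

def minima(S, r):
    # p is (0, r)-minimal if it dominates no other point of S.
    return [p for p in S if not any(dominated(p, q, r) for q in S)]

S = [(0.0, 0.0), (1.0, 0.5), (2.0, 0.2)]
r = math.pi / 3
assert maxima(S, r) == [(0.0, 0.0)]
assert minima(S, r) == [(1.0, 0.5), (2.0, 0.2)]
```

A point can be both maximal and minimal when its wedge and everyone else's wedges are all empty of it; the two chains are in general not disjoint.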
Next, we propose to show that only points in Maxima(Xi ∩ Yj) may have their (0, r)-nearest
neighbor in Z(i,j). Moreover, if the (0, r)-nearest neighbor of a point in Maxima(Xi ∩ Yj)
lies in Z(i,j), then it can only be in Minima(Z(i,j)).
Lemma 4.2 For three points p1, p2, p3 and a direction r (0 < r < π/2), if p2 is (0, r)-dominated
by p1 and p3 is (0, r)-dominated by p2, then d(p1, p3) > d(p1, p2).
Proof. Consider the triangle p1p2p3 and let u be the point where the edge p1p3 of this
triangle cuts one of the rays in the directions 0 or r emanating from p2. We shall assume,
without loss of generality, that the point u lies on the ray in the direction 0.
Since p2 is (0, r)-dominated by p1, the angle p1p2u must be larger than
π/2. This implies that the angle p1p2p3 (which is larger than the angle
p1p2u) is larger than π/2. Hence, d(p1, p3) > d(p1, p2). □
Note that Lemma 4.2 does not hold for a direction r larger than π/2. This is the reason we
restricted the angle r to be less than π/2.
Consider two sets of points P and Q such that all points in P (0, r)-dominate all the
points in Q. Let ARANN_r(P, Q) be the set of edges (p, q) such that p belongs to P, q belongs
to Q, and q is the (0, r)-nearest neighbor of p. We have the following lemma.
Lemma 4.3 No two edges in ARANN_r(P, Q) intersect.
Proof. Suppose not: some two edges (p, q) and (p′, q′) intersect at a
point u. From the triangle inequality applied to the triangles puq′ and p′uq we have
d(p, q′) + d(p′, q) < d(p, q) + d(p′, q′).   (1)
But now, (1) implies that either d(p, q′) < d(p, q) or d(p′, q) < d(p′, q′) must hold. However,
this contradicts the assumption that both (p, q) and (p′, q′) are edges in ARANN_r(P, Q). □
Hence, the graph ARANN_r(P, Q) is planar.
Lemma 4.4 If (p;
Proof. If p does not belong to Minima(P ), then there exists a point p 0 in Minima(P ) that
dominates q. But now, Lemma 4.2 guarantees that a contradiction. In
case q does not belong to Maxima(Q) we can show a contradiction in an essentially similar
fashion. 2
Lemma 4.4 has the following important consequence.
Corollary 4.5 ARANN_r(P, Q) = ARANN_r(Minima(P), Maxima(Q)). □
Corollary 4.5 guarantees that in order to compute ARANN_r(P, Q), examining all the
pairs of points p in P and q in Q is not necessary: all that is needed is to compute
ARANN_r(Minima(P), Maxima(Q)).
To pursue this idea further, write Minima(P) = {p_1, p_2, ..., p_n}, and fix some ε (0 < ε < 1).
This motivates the following approach to compute ARANN_r(Minima(P), Maxima(Q)).
Let sample(Minima(P)) be a subset of n^{ε/2} points {p_{n^{ε/2}}, p_{2n^{ε/2}}, ..., p_n} of Minima(P),
and partition Minima(P) into n^{ε/2} chains P_1, P_2, ..., P_{n^{ε/2}} in such a way that
• Minima(P) = P_1 ∪ P_2 ∪ ... ∪ P_{n^{ε/2}}, and
• for every k, P_k = {p_{(k-1)n^{ε/2}+1}, ..., p_{kn^{ε/2}}}.
Refer to Figure 10 for an illustration. For each k (1 ≤ k ≤ n^{ε/2}), let q_{j_k} (∈ Maxima(Q)) be the
(0, r)-nearest neighbor of p_{kn^{ε/2}} over all Q.
Observe that the points q_{j_k} thus defined induce a partition of the set Q into n^{ε/2} chains
Q_1, Q_2, ..., Q_{n^{ε/2}} such that
• Maxima(Q) = Q_1 ∪ Q_2 ∪ ... ∪ Q_{n^{ε/2}}, and
• for every k, Q_k = {q_{j_{k-1}}, ..., q_{j_k}}.
We note that q_{j_k} belongs to both Q_k and Q_{k+1}. Lemma 4.3 guarantees that in order to compute the (0, r)-nearest
neighbor of a point p in P_k with respect to Maxima(Q), we can restrict ourselves
to computing the (0, r)-nearest neighbor of p over Q_k. In other words, ARANN_r(P_k, Maxima(Q)) =
ARANN_r(P_k, Q_k). Therefore, the task of computing ARANN_r(Minima(P), Maxima(Q))
reduces to that of computing ARANN_r(P_k, Q_k) for every k.
The sampling strategy outlined above leads to the following algorithm for computing
ARANN_r(Minima(P), Maxima(Q)) in O(1) time on a RM of size n × n^ε, for some fixed
ε (0 < ε < 1). We assume that each point in Minima(P) has been assigned to one column and
each point in Maxima(Q) has been assigned to one row of the RM.
Step 1 Partition columnwise the original RM of size n × n^ε into n^{ε/2} submeshes of size
n × n^{ε/2} each. In the k-th such submesh, 1 ≤ k ≤ n^{ε/2}, compute d(p_{kn^{ε/2}}, q) for all
points q in Maxima(Q);
Figure 10: Partitioning Minima(P) and Maxima(Q)

Step 2 For every k (1 ≤ k ≤ n^{ε/2}), use the k-th submesh to compute
• the (0, r)-nearest neighbor q_{j_k} of p_{kn^{ε/2}} by finding the smallest of the distances
computed above;
• using the point q_{j_k}, determine Q_k.
If |Q_k| = 1, then the (0, r)-nearest neighbor of every point p in P_k is precisely q_{j_k}, and
thus ARANN_r(P_k, Q_k) is immediately available. We shall, therefore, assume that |Q_k| ≥ 2
for all k;
Step 3 Partition the given RM of size n × n^ε rowwise into n^{ε/2} submeshes as follows. For each
k, the k-th submesh has size |Q_k| × n^ε and consists of rows j_{k-1}
through j_k of the original mesh. In the k-th submesh compute ARANN_r(P_k, Q_k) as
follows:
Step 3.1 Partition the k-th submesh of size |Q_k| × n^ε columnwise into n^{ε/2} submeshes each
of size |Q_k| × n^{ε/2}. Assign each point p in P_k to one such submesh, and
compute d(p, q) for each point q in Q_k.
Step 3.2 By using the algorithm of Lemma 3.2, compute the minimum of all d(p, q)
over all q in Q_k in each submesh assigned to p and return the (0, r)-nearest
neighbor of p.
The reader should have no difficulty confirming that Steps 1 and 2 can be performed in
constant time. By using the prefix-sums algorithm of Lemma 3.3, the partitioning in Step 3
can be performed in constant time. Steps 3.1 and 3.2 can also be implemented to run in
constant time. To summarize, we have proved the following result.
Lemma 4.6 If Minima(P) and Maxima(Q) have been assigned to the columns and the rows,
respectively, of a RM of size n × n^ε, then ARANN_r(Minima(P), Maxima(Q)) can be computed
in O(1) time for every fixed ε (0 < ε < 1).
We are now in a position to discuss an O(1) time implementation of Step 5 that computes
Z(p) for all p. For simplicity, assume that the RM has n × 2n processors.
Step 5.1 Partition the n × 2n RM columnwise into n^{2/3} submeshes. For each i and j, let the
submesh R(i, j) be of size n × |X_i ∩ Y_j|.
Step 5.2 In each submesh R(i, j), compute Maxima(X_i ∩ Y_j).
Step 5.3 In each submesh R(i, j), compute Minima(Z′_{i,j}).
Step 5.4 In each submesh R(i, j), compute ARANN_r(Maxima(X_i ∩ Y_j), Minima(Z′_{i,j})) by
the algorithm of Lemma 4.6.
Step 5.1 is complicated, because the number of columns of each submesh is different.
The partitioning specified in Step 5.1 can be completed in O(1) time by using the sorting
algorithm of Lemma 3.1: sort the n points in lexicographical order of (x′(p), y(p)). Clearly,
for each i and j, all points in X_i ∩ Y_j are consecutive in the sorted order. If the smallest
point in X_i ∩ Y_j and the largest one are s-th and t-th in the sorted order, then the submesh
R(i, j) is assigned columns s through t. Thus,
Step 5.1 can be completed in O(1) time. Step 5.2 can be completed as follows: let
p_1, p_2, ..., p_m denote the points of X_i ∩ Y_j in sorted order. Compute the postfix-maxima of
{y(p_1), y(p_2), ..., y(p_m)} by the algorithm of Lemma 3.2. Then, p_k belongs to Maxima(X_i ∩ Y_j) if
and only if max{y(p_{k+1}), y(p_{k+2}), ..., y(p_m)} < y(p_k), so Maxima(X_i ∩ Y_j) can be
computed in O(1) time. Step 5.3 can be completed in the same way. To apply the algorithm
of Lemma 4.6 to Step 5.4, a serial number must be assigned to the points in Maxima(X_i ∩ Y_j)
and to those in Minima(Z′_{i,j}). These numberings can be obtained in the obvious way by using
the prefix-sums algorithm of Lemma 3.3. Then, by executing the algorithm of Lemma 4.6,
Step 5.4 can be completed in O(1) time. Therefore, ARANN_r(S) can be computed in O(1)
time on an n × 2n RM.
Since the algorithm above, which uses n × 2n processors, can be implemented on an n × n
RM by a simple scheduling technique, ARANN_r(S) can also be computed in O(1) time on
an n × n RM. Furthermore, ARANN_r(S) for r > π/2 can be computed by partitioning
the angle into several acute angles. For example, ARANN_{2π/3}(S) can be computed as
follows:
1. compute ARANN_{π/2}(S),
2. rotate all the points in S by an angle of π/2 clockwise about the origin,
3. compute ARANN_{π/6}(S), and
4. for each point, determine the nearer of the two points computed in steps 1 and 3; it
corresponds to the nearest point for ARANN_{2π/3}(S).
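This decomposition can be checked with a small brute-force sketch. The helper names are ours, and rather than rotating the point set, the sketch evaluates the second sector directly, which is equivalent:

```python
from math import atan2, pi, dist

def sector_nn(p, S, lo, hi):
    """Nearest neighbor of p among points of S whose direction from p
    lies in the angular sector [lo, hi] (angles from the positive x-axis)."""
    cand = [q for q in S if q != p
            and lo <= atan2(q[1] - p[1], q[0] - p[0]) % (2 * pi) <= hi]
    return min(cand, key=lambda q: dist(p, q), default=None)

def nn_two_thirds_pi(p, S):
    # nearest neighbor over [0, 2pi/3] as the nearer of the answers over
    # [0, pi/2] and [pi/2, 2pi/3], mirroring steps 1-4 above
    a = sector_nn(p, S, 0.0, pi / 2)
    b = sector_nn(p, S, pi / 2, 2 * pi / 3)
    cands = [q for q in (a, b) if q is not None]
    return min(cands, key=lambda q: dist(p, q), default=None)
```

The combined answer always agrees with a direct search over the full sector [0, 2π/3].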
Therefore, we have proved the following result.
Theorem 4.7 Given an arbitrary set of n points in the plane and a direction r (0 < r ≤ 2π),
the corresponding instance of the ARANN problem can be solved in O(1) time on a
reconfigurable mesh of size n × n.
5 Application to Proximity Problems
The goal of this section is to show that the result of Theorem 4.7 leads to O(1) time algorithms
for the GNG, the RNG, and the EMST.
To begin, it follows from Theorem 4.7 that each GNG_i can be computed in O(1) time on
an n × n RM. Therefore, we have
Corollary 5.1 Given n points in the plane, its GNG can be computed in O(1) time on a
reconfigurable mesh of size n × n.
Since each GNG_i is planar, GNG_i has at most 3n - 6 edges and thus the GNG has at most
18n - 36 edges. Furthermore, the RNG is a subgraph of the GNG [7]. Therefore, we have
Theorem 5.2 Given n points in the plane, the corresponding RNG can be computed in O(1)
time on an n × n RM.
Proof. For each edge in the GNG, check whether there is a point in its lune. If no such point
exists, this edge is an RNG edge, and vice versa. This checking can be done in O(1)
time by n processors for each edge. Since the GNG of n points has at most 18n - 36 edges,
the RNG can be computed in O(1) time on an n × n RM. □
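The lune test used in the proof can be sketched sequentially: a point lies in the lune of an edge exactly when it is closer to both endpoints than the endpoints are to each other. The function name is ours:

```python
from math import dist
from itertools import combinations

def rng_edges(S):
    """Brute-force RNG: keep the edge (p, q) iff its lune is empty."""
    edges = []
    for p, q in combinations(S, 2):
        d = dist(p, q)
        lune_empty = all(v in (p, q) or max(dist(v, p), dist(v, q)) >= d
                         for v in S)
        if lune_empty:
            edges.append((p, q))
    return edges
```

For three collinear points, only the two short edges survive: the long edge's lune contains the middle point.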
Furthermore, by using the MST algorithm of Lemma 3.5, we have the following theorem.
Theorem 5.3 Given n points in the plane, its EMST can be computed in O(1) time on a
reconfigurable mesh of size n × n².
Proof. By applying the MST algorithm for a graph to the RNG, the EMST of the n points
can be computed, because the EMST is a subgraph of the RNG. Since the RNG has at most 3n - 6
edges, an n × n² RM is sufficient to compute the EMST in O(1) time. □
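The MST step can be illustrated with a self-contained Kruskal sketch. For simplicity it runs on all pairwise edges, whereas the proof above would feed it only the at most 3n - 6 RNG edges:

```python
from math import dist
from itertools import combinations

def emst(S):
    """Kruskal's algorithm with a simple union-find; returns EMST edges."""
    parent = {p: p for p in S}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    tree = []
    for p, q in sorted(combinations(S, 2), key=lambda e: dist(*e)):
        rp, rq = find(p), find(q)
        if rp != rq:                         # joins two components
            parent[rp] = rq
            tree.append((p, q))
    return tree
```

An EMST on m points always has m - 1 edges, and shortcut edges made redundant by shorter ones are rejected as cycles.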
6 Concluding Remarks
We have shown an optimal algorithm on a reconfigurable mesh for computing the Angle
Restricted All Nearest Neighbor problem. By using this algorithm, we have also shown
optimal algorithms on a reconfigurable mesh for computing the Geographical Neighborhood
Graph and the Relative Neighborhood Graph. These algorithms are optimal in the sense
that there is no O(1)-time algorithm that solves instances of size n of these problems on a
smaller reconfigurable mesh. Furthermore, we have also shown that the Euclidean
Minimum Spanning Tree of a set of n points in the plane can be computed in O(1) time on
an n × n² reconfigurable mesh, provided that every processor can fuse its ports. It remains
open to find an O(1)-time EMST algorithm on an n × n reconfigurable mesh that matches
the lower bound.
--R
Parallel Computational Geometry
The power of reconfiguration
Voronoi diagrams based on convex functions
Pattern Classification and Scene Analysis
Parallel geometric problems on the reconfigurable mesh
An optimal sorting algorithm on reconfigurable meshes
Computing relative neighborhood graphs in the plane
IEEE Transactions on Computers
Reconfigurable buses with shift switching - concepts and applications
Sorting in O(1) time on a reconfigurable mesh of size N
IEEE Transactions on Parallel and Distributed Systems
Hardware support for fast reconfigurability in processor arrays
Parallel Computations on Reconfigurable Meshes
Connection autonomy in SIMD computers: a VLSI implementation
Sorting n numbers on n
Fundamental Data Movement International Journal of High Speed Computing
Fundamental Algorithms on Reconfigurable Meshes
Computational Geometry - An Introduction
On the ultimate limitations of parallel processing
Bus automata
bit serial associate processor
The gated interconnection network for dynamic programming
the relative neighborhood graph with an application to minimum spanning trees
The relative neighborhood graph of a finite planar set
The symmetric all-furthest neighbor problem
Constant time algorithms for the transitive closure problem and its applications IEEE Transactions on Parallel and Distributed Systems
The image understanding architecture
264559

Optimal Registration of Object Views Using Range Data

Abstract: This paper deals with robust registration of object views in the presence of uncertainties and noise in depth data. Errors in registration of multiple views of a 3D object severely affect view integration during automatic construction of object models. We derive a minimum variance estimator (MVE) for computing the view transformation parameters accurately from range data of two views of a 3D object. The results of our experiments show that view transformation estimates obtained using MVE are significantly more accurate than those computed with an unweighted error criterion for registration.

1 Introduction
An important issue in the design of 3D object recognition systems is building models of physical
objects. Object models are extensively used for synthesizing and predicting object appearances from
desired viewpoints and also for recognizing them in many applications such as robot navigation and
industrial inspection. It becomes necessary on many occasions to construct models from multiple
measurements of 3D objects, especially when a precise geometric model such as a CAD description
is not available and cannot be easily obtained. This need is felt particularly with 3D free-form
objects, such as sculptures and human faces that may not possess simple analytical shapes for
representation. With growing interest in creating virtual museums and virtual reality functions
such as walk throughs, creating computer images corresponding to arbitrary views of 3D scenes
and objects remains a challenge.
Automatic construction of 3D object models typically involves three steps: (i) data acquisition
from multiple viewpoints, (ii) registration of views, and (iii) integration. Data acquisition involves
obtaining either intensity or depth data of multiple views of an object. Integration of multiple views
is dependent on the representation chosen for the model and requires knowledge of the transformation
relating the data obtained from different viewpoints. The intermediate step, registration,
is also known as the correspondence problem [1] and its goal is to find the transformations that
relate the views. Inaccurate registration leads to greater difficulty in seamlessly integrating the
data. It ultimately affects surface classification since surface patches from different views may be
erroneously merged, resulting in holes and discontinuities in the merged surface. For smooth merging
of data, accurate estimates of transformations are vital. In this paper we focus on the issue of
pairwise registration of noisy range images of an object obtained from multiple viewpoints using a
laser range scanner.
We derive a minimum variance estimator to compute the transformation parameters accurately
from range data. We investigate the effect of surface measurement noise on the registration of a pair
of views and propose a new method that improves upon the approach of Chen and Medioni [1]. We
have not seen any work that reports to date, establishing the dependencies between the orientation
of a surface, noise in the sensed surface data, and the accuracy of surface normal estimation and
how these dependencies can affect the estimation of 3D transformation parameters that relate a
pair of object views. We present a detailed analysis of this "orientation effect" with geometrical
arguments and experimental results.
2 Previous Work
There have been several research efforts directed at solving the registration problem. While the
first category of approaches relies on a precisely calibrated data acquisition device to determine the
transformations that relate the views, the second kind involves techniques to estimate the transformations
from the data directly. The calibration-based techniques are inadequate for constructing
a complete description of complex shaped objects as views are restricted to rotations or to some
known viewpoints only and therefore, the object surface geometry cannot be exploited in the selection
of vantage views to obtain measurements.
With the second kind, inter-image correspondence has been established by matching the data
or the surface features derived from the data [2]. The accuracy of the feature detection method
employed determines the accuracy of feature correspondences. Potmesil [3] matched multiple range
views using a heuristic search in the view transformation space. Though quite general, this technique
involves searching a huge parameter space, and even with good heuristics, it may be computationally
very expensive. Chen and Medioni avoid the search by assuming an initial approximate
transformation for the registration, which is improved with an iterative algorithm [1] that minimizes
the distance from points in a view to tangential planes at corresponding points in other views. Besl
and McKay [4], Turk and Levoy [5], and Zhang [6] employ variants of the iterated closest-point
algorithm. Blais and Levine [7] propose a reverse calibration of the range-finder to determine the point
correspondences between the views directly and use stochastic search to estimate the
transformation. These approaches, however, do not take into account the presence of noise or inaccuracies in
the data and its effect on the estimated view-transformation. Our registration technique also uses a
distance minimization algorithm to register a pair of views, but we do not impose the requirement
that one surface has to be strictly a subset of the other. While our approach studies in detail
the effect of noise on the objective function [1] that is being minimized and proposes an improved
function to register a pair of views, Bergevin et al. [8, 9] propose to register all views simultaneously
to avoid error accumulation due to sequential registration. Haralick et al. [10] have also shown
that a weighted least-squares technique is robust under noisy conditions in various scenarios
such as 2D-2D and 3D-3D image registration.
3 A Non-Optimal Algorithm for Registration
Two views P and Q of a surface are said to be registered when any pair of points p and q from
the two views representing the same object surface point can be related to each other by a single
rigid 3D spatial transformation T, so that for every p ∈ P the corresponding point q ∈ Q satisfies
q = T p; that is, q is obtained by applying the transformation T to p. T is expressed in homogeneous coordinates
as a function of three rotation angles, α, β and γ, about the x, y and z axes respectively, and three
translation parameters, t_x, t_y and t_z. The terms "view" and "image" are used interchangeably in
this paper. The approach of [1] is based on the assumption that an approximate transformation
between two views is already known and the goal is to refine the initial estimate to obtain more
accurate global registration. The following objective function was used to minimize the distances
from surface points in one view to another iteratively:

    e^k = Σ_i d_s²(T^k p_i, S_i^k),    (1)

where T^k is the 3D transformation applied to a control point p_i at the kth iteration,
l_i = {p_i + t n_i, t ≥ 0} is the line normal to P at p_i, q_i^k is the intersection point of
surface Q with the transformed line T^k l_i, n_{q_i}^k is the normal to Q at q_i^k, S_i^k
is the tangent plane to Q at q_i^k, and d_s is the signed distance from a point to a plane as given
in Eq. (2). Note that '·' stands for the scalar product and '×' for the vector product. Figure 1
illustrates the distance measure d_s between surfaces P and Q.
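For concreteness, the transformation T can be sketched as follows. The rotation order R_z R_y R_x is one common convention; the paper fixes only the parameter set α, β, γ, t_x, t_y, t_z, not the composition order, so this is an illustrative choice:

```python
import numpy as np

def make_transform(alpha, beta, gamma, t):
    """4x4 homogeneous rigid transform from rotation angles about the
    x, y, z axes and a translation vector t = (tx, ty, tz)."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    Rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = t
    return T

def apply(T, p):
    # apply T to a 3D point via homogeneous coordinates
    return (T @ np.append(p, 1.0))[:3]
```

A quarter turn about z maps (1, 0, 0) to (0, 1, 0) before the translation is added.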
This registration algorithm thus finds a T that minimizes e^k, using a least squares method
iteratively. The tangent plane S_i^k serves as a local linear approximation to the surface Q at a point.
The intersection point q_i^k is an approximation to the actual corresponding point q_i that is unknown
at each iteration k. An initial T^0 that approximately registers the two views is used to start the
iterative process.

Figure 1: Point-to-plane distance: (a) surfaces P and Q before the transformation T^k at iteration
k is applied; (b) distance from the point p_i to the tangent plane S_i^k of Q.

The signed distance d_s from a transformed point T^k p = (x, y, z)^T to a tangential plane with
parameters (A, B, C, D)^T, where (A, B, C)^T is the unit normal, is

    d_s = Ax + By + Cz + D,    (2)

where (x, y, z)^T and (A, B, C, D)^T define the transformed point and the tangential
plane, respectively. Note that (x, y, z)^T is the transpose of the vector (x, y, z). By minimizing
the distance from a point to a plane, only the direction in which the distance can be reduced is
constrained. The convergence of the process can be tested by verifying that the difference between
the errors e^k at any two consecutive iterations is less than a pre-specified threshold. The line-surface
intersection given by the intersection of the normal line l_i and Q is found using an iterative search
in the neighborhood of prospective intersection points.
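A minimal sketch of the signed point-to-plane distance of Eq. (2) and the unweighted objective of Section 3, assuming the plane parameters (A, B, C, D) are normalized so that (A, B, C) is a unit vector; the function names are ours:

```python
def signed_distance(p, plane):
    # Eq. (2): d_s = Ax + By + Cz + D, with (A, B, C) a unit normal
    A, B, C, D = plane
    x, y, z = p
    return A * x + B * y + C * z + D

def objective(points, planes):
    # sum of squared point-to-plane distances over all control points
    # (the points are assumed to be already transformed by T^k)
    return sum(signed_distance(p, s) ** 2 for p, s in zip(points, planes))
```

For the plane z = 0, the point (0, 0, 2) is at distance +2 and (1, 1, -1) at distance -1, so the objective over both is 5.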
4 Registration and Surface Error Modeling
Range data are often corrupted by measurement errors and sometimes lack of data. The errors in
surface measurements of an object include scanner errors, camera distortion, and spatial
quantization, and the missing data can be due to self-occlusion or sensor shadows. Due to noise, it is
generally impossible to obtain a solution for a rigid transformation that fits two sets of noisy three-dimensional
points exactly. The least-squares solution in [1] is non-optimal as it does not handle the
errors in z measurements and it treats all surface measurements with different reliabilities equally.
Our objective is to derive a transformation that globally registers the noisy data in some optimal
sense. With range sensors that provide measurements in the form of a graph surface z = f(x, y), it
is assumed that the error is present along the z axis only, as the x and y measurements are usually
laid out in a grid. There are different uncertainties along different surface orientations and they
need to be handled appropriately during view registration. Furthermore, the measurement error is
not uniformly distributed over the entire image. The error may depend on the position of a point,
relative to the object surface. A measurement error model dealing with the sensor's viewpoint has
been previously proposed [11] for surface reconstruction where the emphasis was to recover straight
line segments from noisy single scan 3D surface profiles.
In this paper, we show that the noise in z values affects the estimation of the tangential plane
parameters differently depending on how the surface is oriented. Since the estimated tangential
plane parameters play a crucial role in determining the distance d s which is being minimized to
estimate T , we study the effect of noise on the estimation of the parameters of the plane fitted
and on the minimization of d s . The error in the iterative estimation of T is a combined result
of errors in each control point (x, y, z)^T from view 1 and errors in fitting tangential planes at the
corresponding control points in view 2.
4.1 Fitting Planes to Surface Data with Noise
Figure 2: Effect of noise in z measurements on the fitted normal: (a) when the plane is horizontal;
(b) when it is inclined. The double-headed arrows indicate the uncertainty in depth measurements.
Figure 2 illustrates the effect of noise in the values of z on the estimated plane parameters. For
the horizontal plane shown in Figure 2(a), an error in z (the uncertainty region around z) directly
affects the estimated surface normal. In the case of an inclined plane, the effect of errors in z on
the surface normal to the plane is much less pronounced as shown in Figure 2(b). Here, even if
the error in z is large, only its projected error along the normal to the plane affects the normal
estimation. This projected error becomes smaller than the actual error in z as the normal becomes
more and more inclined with respect to the vertical axis. Therefore, our hypothesis is that as the
angle between the vertical (Z) axis and the normal to the plane increases, the difference between
the fitted plane parameters and the actual plane parameters should decrease.
We carried out simulations to study the actual effect of the noise in the z measurements on
estimating the plane parameters and to verify our hypothesis. The conventional method for fitting
planes to a set of 3D points uses a linear least squares algorithm. This linear regression method
implicitly assumes that two of the three coordinates are measured without errors. However, it
is possible that in general, surface points can have errors in all three coordinates, and surfaces
can be in any orientation. Hence, we used a classical eigenvector method (principal components
analysis) [12] that allows us to extract all linear dependencies.
Let the plane equation be Ax + By + Cz + D = 0, and let {(x_i, y_i, z_i), i = 1, ..., n} be the
set of surface measurements used in fitting a plane at a point on a surface. Let A be the n × 4
matrix whose ith row is (x_i, y_i, z_i, 1), and let h = (A, B, C, D)^T be the vector containing the
plane parameters. We solve for the vector h such
that ||Ah|| is minimized. The solution for h is a unit eigenvector of A^T A associated with the smallest
eigenvalue. We renormalize h such that (A, B, C)^T is the unit normal to the fitted plane and D is
the distance of the plane from the origin of the coordinate system. This planar fit minimizes the
sum of the squared perpendicular distances between the data points and the fitted plane and is
independent of the choice of the coordinate frame.
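A numerical sketch of this eigenvector fit; it uses the SVD of the design matrix, which yields the same eigenvector of A^T A as the smallest-eigenvalue computation but is numerically stabler. The function name is ours:

```python
import numpy as np

def fit_plane(points):
    """Total least-squares plane fit: h = (A, B, C, D) is the eigenvector
    of A^T A for the smallest eigenvalue, where each row of the design
    matrix is (x, y, z, 1); renormalized so (A, B, C) is a unit normal."""
    pts = np.asarray(points, dtype=float)
    M = np.hstack([pts, np.ones((len(pts), 1))])
    # last right singular vector = eigenvector of M^T M for the smallest eigenvalue
    h = np.linalg.svd(M)[2][-1]
    return h / np.linalg.norm(h[:3])
```

For exactly coplanar points, every point satisfies Ax + By + Cz + D = 0 up to floating-point error.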
In our computer simulations, we used synthetic planar patches as test surfaces. The simulation
data consisted of surface measurements from planar surfaces at various orientations with respect to
the vertical axis. Independent and identically distributed (i.i.d.) Gaussian and uniform noise with
zero mean and different variances were added to the z values of the synthetic planar data. The
standard deviation of the noise used was in the range 0.001-0.005 in. as this realistically models
the error in z introduced by a Technical Arts 100X range scanner [13] that was employed to obtain
the range data for our experiments. The planar parameters were estimated using the eigenvector
method at different surface points with a neighborhood of size 5 × 5. The error E_fit in fitting the
plane was defined as the norm of the difference between the actual normal to the plane and the
normal of the fitted plane estimated with the eigenvector method. Figure 3(a) shows the plot of
E_fit versus the orientation (with respect to the vertical axis) of the normal to the simulated plane
at different noise variances. The plot shows E_fit averaged over 1,000 trials at each orientation.
It can be seen from Figure 3(a) that in accordance with our hypothesis, the error in fitting a
plane decreases with an increase in the angle between the vertical axis and the normal to the plane.
When the plane is nearly horizontal (i.e., the angle is small), the error in z entirely contributes to
E_fit, as indicated by Figure 2(a). The error plots for varying amounts of variance were observed
[Plots: squared difference between the estimated and actual normals vs. the angle between the normal to the plane and the vertical axis, for noise standard deviations 0.0-0.005.]
Figure 3: Effect of i.i.d. Gaussian noise in z measurements on the plane: (a) estimated using the
eigenvector approach; (b) estimated using linear regression.
to have the same behavior with orientation as shown in Figure 3(a). Similar curves were obtained
with a uniform noise model also [14]. These simulations confirm our hypothesis about the effect of
noise in z on the fitted plane parameters as the surface orientation changes.
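The orientation effect can also be reproduced deterministically: perturb one z value of a small planar patch and compare the tilt of the fitted normal for a horizontal patch against an inclined one. The patch size, perturbation, and fitting routine below are illustrative choices, not the paper's exact simulation setup:

```python
import numpy as np

def fitted_normal(points):
    # total least-squares plane fit; returns the unit normal (A, B, C)
    M = np.hstack([np.asarray(points, float), np.ones((len(points), 1))])
    h = np.linalg.svd(M)[2][-1]
    return h[:3] / np.linalg.norm(h[:3])

def normal_error(slope, dz=0.1):
    """Angle (radians) between the true and fitted normals after adding
    a z-perturbation dz to one corner of a 5x5 patch z = slope * x."""
    xs, ys = np.meshgrid(np.arange(5.0), np.arange(5.0))
    pts = np.column_stack([xs.ravel(), ys.ravel(), slope * xs.ravel()])
    pts[0, 2] += dz                      # perturb one measurement
    n_true = np.array([-slope, 0.0, 1.0]) / np.hypot(slope, 1.0)
    c = abs(np.dot(fitted_normal(pts), n_true))
    return np.arccos(np.clip(c, -1.0, 1.0))
```

Only the component of dz along the plane normal (dz·cosθ) pushes the point off the plane, so the same z-error tilts the fitted normal of a steeply inclined patch less than that of a horizontal one.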
We repeated the simulations using the linear regression method to fit planes to surface data.
We refer the reader to [14] for details. Figure 3(b) shows the error E_fit between the fitted and
actual normals to the plane at various surface orientations when i.i.d. Gaussian noise was added
to the z values. Our hypothesis is well supported by this error plot also.
4.2 Proposed Optimal Registration Algorithm
Since the estimated tangential plane parameters are affected by the noise in z measurements, any
inaccuracies in the estimates, in turn, influence the accuracy of the estimates of d_s, thus affecting
the error function being minimized during the registration. Further, errors in z themselves affect the d_s
estimates (see Eq. (2)). Therefore, we characterize the error in the estimates of d_s by modeling the
uncertainties associated with them using weights. Our approach is inspired by the Gauss-Markov
theorem [15], which states that an unbiased linear minimum variance estimator of a parameter
vector m, when y = Am + n, is the one that minimizes (y - Am)^T Γ_y^{-1} (y - Am), where n
is a random noise vector with zero mean and covariance matrix Γ_y. Based on this theorem, we
formulate an optimal error function for registration of two object views as

    e^k = Σ_i d_s²(T^k p_i, S_i^k) / σ²_{d_s,i},    (4)

where σ²_{d_s,i} is the estimated variance of the distance d_s at the ith control point. When the reliability of a z value is low,
the variance σ²_{d_s} of the distance is large and the contribution of d_s to the error function is small,
and when the reliability of the z measurement is high, σ²_{d_s} is small and the contribution of d_s is
large; a d_s with a minimum variance affects the error function more. One of the advantages of this
minimum variance criterion is that we do not need the exact noise distribution. What we require
only is that the noise distribution be well-behaved and have short tails. In our simulations, we
employ both Gaussian and uniform noise distributions to illustrate the effectiveness of our method.
We need to know only the second-order statistics of the noise distribution, which in practice can
often be estimated.
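A minimal sketch of the weighted objective of Eq. (4), assuming the variances σ²_{d_s} are supplied by the estimation procedure of Section 4.3; the function name is ours:

```python
def weighted_registration_error(distances, variances):
    """Eq. (4): each squared point-to-plane distance is weighted by the
    inverse of its estimated variance, so unreliable measurements
    contribute less to the objective."""
    return sum(d * d / v for d, v in zip(distances, variances))
```

A distance with variance 4 contributes only a quarter of what an equally large distance with unit variance does.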
4.3 Estimation of the Variance σ_{d_s}
We need to estimate σ²_{d_s} to model the reliability of the computed d_s at each control point, which
can then be used in our optimal error function in Eq. (4). Let the set of all the surface points be
denoted by P and the errors in the measurements of these points be denoted by a random vector ε.
The error e_{d_s} in the computed distance is due to the error in the estimated plane parameters and
the error in the z measurement, and therefore is a function of P and ε. Since we do not know ε, if
we can estimate the standard deviation of e_{d_s} (with ε as a random vector) from the noise-corrupted
surface measurements P, we can use it in Eq. (4).
4.3.1 Estimation of σ²_{d_s} Based on Perturbation Analysis
Perturbation analysis is a general method for analyzing the effect of noise in data on the eigenvectors
obtained from the data. It is general in the sense that errors in x, y and z can all be handled. This
analysis is also related to the general eigenvector method that we studied for plane estimation. The
analysis for estimating oe 2
ds is simpler if we use linear regression method to do plane fitting [14].
Since we fit a plane with the eigenvector method, which uses the symmetric matrix C = A^T A
computed from the (x, y, z) measurements in the neighborhood of a surface point, we need to
analyze how a small perturbation in the matrix C caused by the noise in the measurements can
affect the eigenvectors. Recall that these eigenvectors determine the plane parameters (A, B, C, D)^T
which in turn determine the distance d s . We assume that the noise in the measurements has zero
mean and some variance and that the latter can be estimated empirically. The correlation in noise
at different points is assumed to be negligible. Estimation of correlation in noise is very difficult
but even if we estimate it, its impact may turn out to be insignificant. We estimate the standard
deviation of errors in the plane parameters and in d s on the basis of the first-order perturbations,
i.e., we estimate the "linear terms" of the errors.
Before we proceed, we discuss some of the notational conventions that are used: I_m is an m × m
identity matrix; diag(a, b) is a 2 × 2 diagonal matrix with a and b as its diagonal elements. Given
a noise-free matrix A, its noise matrix is denoted by Δ_A and the noise-corrupted version of A is
denoted by A + Δ_A. The vector δ is used to indicate a noise vector. We
use Γ with a corresponding subscript to specify the covariance matrix of a noise vector/matrix.
For a given matrix A, a vector A can be associated with it by stacking its columns;
A thus consists of the column vectors of A lined up together.
As proved in [16], if C is a symmetric matrix formed from the measurements and h
is the parameter vector (A, B, C, D)^T given by the eigenvector of C associated with the smallest
eigenvalue, say λ, then the first-order perturbation in the parameter vector h is given by

    δ_h = G_h δ_{A^T A},    (5)

where δ_{A^T A} is the vector associated with the perturbation matrix Δ_{A^T A}, and G_h is a matrix
determined by C, λ, and an orthonormal matrix H that diagonalizes C. Δ_{A^T A} is the 4 × 4 noise
or perturbation matrix associated with A^T A. If Δ_{A^T A} can be estimated, then the perturbation
δ_h in h can be estimated by a first-order approximation as in Eq. (5).
We estimate Δ_{A^T A} from the perturbation in the surface measurements. We assume, for the sake
of simplicity of analysis, that only the z component of a surface measurement (x_i, y_i, z_i)^T has
errors. This analysis is easily and directly extended to include errors in x
and y if their noise variances are known.
Let z_i have additive errors δ_{z_i}, so that only the third column of the perturbation matrix Δ_A
is nonzero. If the errors in z at different points on the surface have the same variance σ², we get the covariance
matrix Γ_{A^T} of the noise vector δ_{A^T}.
Now, consider the error in h. As stated before, we have

    δ_h = G_h δ_{A^T A}.

In the above equation, we have rewritten the matrix Δ_{A^T A} as the vector δ_{A^T A} and moved the
perturbation to the extreme right of the expression. Then the perturbation of the eigenvector is the
linear transformation (by the matrix G_h) of the perturbation vector δ_{A^T A}. Since we have Γ_{A^T},
we need to relate δ_{A^T A} to δ_{A^T}. Using a first-order approximation [16], we get

    Δ_{A^T A} ≈ Δ_{A^T} A + A^T Δ_A.    (12)

Letting A = (A_1, A_2, ..., A_n), this can be rewritten as

    δ_{A^T A} = G_{A^T A} δ_{A^T},

where G_{A^T A} is easily determined from the equation G_{A^T A} = F + G; here F and G are
matrices with 4 × n submatrices F_{ij} and G_{ij}, where F_{ij} = a_{ji} I_4, and G_{ij} is a 4 × 4 matrix
with the ith column being the column vector A_j and all other columns being zero. Thus, we get
the covariance matrix of h:

    Γ_h = (G_h G_{A^T A}) Γ_{A^T} (G_h G_{A^T A})^T.
The distance d_s is affected by the errors in the estimation of the plane parameters
and the z measurement. Therefore, the error variance in d_s is

    σ²_ds = [∂d_s/∂A  ∂d_s/∂B  ∂d_s/∂C  ∂d_s/∂D  ∂d_s/∂z] Γ_hz [∂d_s/∂A  ∂d_s/∂B  ∂d_s/∂C  ∂d_s/∂D  ∂d_s/∂z]^T   (16)

The covariance matrix Γ_hz is given by
Once the variance of d_s, σ²_ds, is estimated, we employ it in our optimal error function:

    ε = Σ_i d_s,i² / σ²_ds,i
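The two quantities above, the propagated variance and the variance-weighted error, can be sketched as follows. The 1/σ²_ds weighting is our reading of the minimum variance criterion, and all numeric values are illustrative.

```python
def variance_of_distance(grad, cov):
    """First-order propagation: sigma_ds^2 = J * Gamma_hz * J^T, where J is
    the gradient of d_s with respect to (A, B, C, D, z) and cov is the joint
    covariance of the plane parameters and the z measurement."""
    n = len(grad)
    return sum(grad[i] * cov[i][j] * grad[j] for i in range(n) for j in range(n))

def weighted_error(distances, variances):
    """Minimum-variance-style objective: sum of d_s^2 / sigma_ds^2 over
    control points (the weighting is our assumption)."""
    return sum(d * d / v for d, v in zip(distances, variances))
```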
4.3.2 Simulation Results
Figure 4(a) shows the plot of the actual standard deviation of the distance d_s versus the orientation
of the plane with respect to the vertical axis. Note that the mean of d_s is zero when the points are in complete registration and when there is no noise.
Figure 4: Standard deviation of d_s versus the planar orientation with a Gaussian noise model:
(a) actual σ_ds; (b) estimated σ_ds using the perturbation analysis. Curves are plotted for noise standard deviations from 0.0 to 0.005.
We generated two views of synthetic
planar surfaces with the view transformation between them being an identity transformation. We
experimented with the planar patches at various orientations. We added uncorrelated Gaussian
noise independently to the two views. Then we estimated the distance d s at different control points
using Eq. (2) and computed its standard deviation. The plot shows the values averaged over 1,000
trials. As indicated by our hypothesis, the actual standard deviation of d s decreases as the planar
orientation goes from being horizontal to vertical. As the variance of the added Gaussian noise
to the z measurements increases, σ_ds also increases. Similar results were obtained when we added
uniform noise to the data [14].
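A Monte Carlo sketch in the spirit of this experiment follows; the least-squares line fit, point counts, and noise levels are our stand-ins, not the paper's exact setup.

```python
import math
import random

def distance_std(tilt_deg, noise_std, n_pts=100, n_trials=200):
    """Monte Carlo estimate of std(d_s): fit a plane (here a line z = a*x + c)
    to one noisy view, then measure perpendicular distances of the other
    noisy view's points to it."""
    rnd = random.Random(0)
    slope = math.tan(math.radians(tilt_deg))  # tilt away from horizontal
    ds = []
    for _ in range(n_trials):
        xs = [rnd.uniform(-1.0, 1.0) for _ in range(n_pts)]
        z1 = [slope * x + rnd.gauss(0.0, noise_std) for x in xs]  # view 1
        z2 = [slope * x + rnd.gauss(0.0, noise_std) for x in xs]  # view 2
        # least-squares fit z = a*x + c to view 1
        n, sx, sz = n_pts, sum(xs), sum(z1)
        sxx = sum(x * x for x in xs)
        sxz = sum(x * z for x, z in zip(xs, z1))
        a = (n * sxz - sx * sz) / (n * sxx - sx * sx)
        c = (sz - a * sx) / n
        # perpendicular point-to-plane distance of view-2 points
        ds += [(z - (a * x + c)) / math.sqrt(1.0 + a * a) for x, z in zip(xs, z2)]
    mean = sum(ds) / len(ds)
    return math.sqrt(sum((d - mean) ** 2 for d in ds) / len(ds))
```

Consistent with Figure 4(a), the returned value is zero without noise and shrinks as the plane turns away from horizontal for a fixed z-noise level.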
We compared the actual variance with the estimated variance of the distance (Eq. (16)) in
order to verify whether our modeling of errors in z values at various surface orientations is correct.
We computed the estimated variance of the distance d_s from our error model (Eq. (16))
with the same experimental setup as described above. Figure 4(b) illustrates the behavior of
the estimated standard deviation of d s as the inclination of the plane (the surface orientation)
changes. A comparison of Figures 4(a) and 4(b) shows that both the actual and the estimated
standard deviation plots have similar behavior with varying planar orientation and their values are
proportional to the amount of noise added. This proves the correctness of our error model of z and
its effect on the distance d s . Simulation results, when we repeated the experiments to compute both
the actual and the estimated σ_ds using the planar parameters estimated with the linear regression
method, were similar to those shown in Figure 4. This also demonstrates the important fact that
the method used for planar fitting does not bias our results.
5 View Registration Experiments
In this section we demonstrate the improvements in the estimation of view transformation parameters
on real range images using our MVE. We will henceforth refer to Chen and Medioni's
technique [1] as C-M method. We obtained range images of complex objects using a Technical Arts
laser range scanner. We performed uniform subsampling of the depth data to locate the control
points in view 1 that were to be used in the registration. From these subsampled points we chose a
fixed number of points that were present on smooth surface patches. The local smoothness of the
surface was verified using the value of residual standard deviation resulting from the least-squares
fitting of a plane in the neighborhood of a point. A good initial guess for the view transformation
was determined automatically when the range images contained the entire object surface and the
rotations of the object in the views were primarily in the plane. Our method is based on estimating
an approximate rotation and translation by aligning the major (principal) axes of the object
views [14].
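A sketch of such a principal-axes initial guess follows; using the eigenvectors of the centered point covariance and the handedness fix are our assumptions about the details.

```python
import numpy as np

def principal_axes(pts):
    """Columns are the principal (major) axes: eigenvectors of the centered
    point covariance, largest eigenvalue first, forced right-handed."""
    centered = pts - pts.mean(axis=0)
    _, vecs = np.linalg.eigh(centered.T @ centered)  # ascending eigenvalues
    vecs = vecs[:, ::-1]                             # largest first
    if np.linalg.det(vecs) < 0:                      # keep det = +1
        vecs[:, -1] *= -1
    return vecs

def initial_guess(view1, view2):
    """Rotation and translation roughly mapping view2 onto view1."""
    R = principal_axes(view1) @ principal_axes(view2).T
    t = view1.mean(axis=0) - R @ view2.mean(axis=0)
    return R, t
```

Because each axis is only defined up to sign, a practical implementation may need to try sign-flipped variants and keep the best-scoring alignment.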
Figures
5(a) and 5(c) depict the two major axes of the objects. We used this estimated
transformation as an initial guess for the iterative procedure in our experiments, so that no prior
knowledge of the sensor placement was needed. Experimental results show the effectiveness of our
method in refining such rough estimates. The same initial guess was used with the C-M method and
the proposed MVE. We employed Newton's method for minimizing the error function iteratively.
In order to measure the error in the estimated rotation parameters, we utilize an error measure
that does not depend on the actual rotation parameters. The relative error of rotation matrix R, E_R,
is defined [16] to be E_R = ||R̂ - R|| / ||R||, where R̂ is an estimate of R. The geometric
sense of E_R is the square root of the mean squared distance between the three unit vectors of the
rotated orthonormal frames. Since the frames are orthonormal, ||R|| = √3.
The error in translation, E_t, is defined as the square root of the sum of the squared differences
between the estimated and actual t x , t y and t z values.
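These two error measures can be written directly; the Frobenius-norm form of E_R is our reading of the definition in [16].

```python
import math

def rotation_error(R_est, R_true):
    """E_R = ||R_est - R_true||_F / ||R_true||_F; for an orthonormal rotation
    matrix, ||R_true||_F = sqrt(3)."""
    num = math.sqrt(sum((a - b) ** 2
                        for ra, rb in zip(R_est, R_true)
                        for a, b in zip(ra, rb)))
    den = math.sqrt(sum(a * a for row in R_true for a in row))
    return num / den

def translation_error(t_est, t_true):
    """Square root of the sum of squared differences of t_x, t_y, t_z."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(t_est, t_true)))
```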
5.1 Results
Figure
5 shows the range data of a cobra head and a Big-Y pipe. The figure renders depth as pseudo
intensity and points almost vertically oriented are shown darker. View 2 of the cobra head was
obtained by rotating the surface by 5° about the X axis and 10° about the Z axis. Table 1 shows
(a) (b) (c) (d)
Figure
5: Range images and their principal axes: (a) View 1 of a Cobra head; (b) View 2 of the
cobra head; (c) Big-Y pipe data generated from its CAD model; (d) View 2 of Big-Y pipe.
the values of E_R and E_t for the cobra views estimated using as few as 25 control points. It can
be seen that the transformation parameters obtained with the MVE are closer to the ground truth
than those estimated using the unweighted objective function of C-M method. Even when more
control points (about 156) were used, the estimates using our method were closer to the ground
truth than those obtained with the C-M method [14].
We also show the performance of our method when the two viewpoints are substantially different
and the depth values are very noisy. Figure 5 shows two views of the Big-Y pipe generated from
the CAD model. The second view was generated by rotating the object about the Z axis by 45°.
We also added Gaussian noise with mean zero and standard deviation of 0.5 mm to the z values of
the surfaces in view 2. Table 2 shows E_R and E_t computed with 154 control points. It can be seen
from these tables that the transformation matrix, especially the rotation matrix obtained with the
MVE is closer to the ground truth than that obtained using C-M method. The errors in translation
components of the final transformation estimates are mainly due to the approximate initial guess.
[Table 1: Estimated transformation for the cobra views. Table 2: Estimated transformation for the Big-Y views. Columns: parameter, actual value, C-M method estimate, MVE estimate.]
Our method refined these initial values to provide a final solution very close to the actual values.
Our method also handled large transformations between views robustly. With experiments on
range images of facial masks, we found that even when the depth data were quite noisy owing to
the roughness of the surface texture of the objects and also due to self-occlusion, more accurate
estimates of the transformation were obtained with the MVE. When the overlapping object surface
between the views is quite small, the number of control points available for registration tends to
be small and in such situations also the MVE has been found to have substantial improvement in
the accuracy of the transformation estimate. Note that measurement errors are random and we
minimize the expected error in the estimated solution. However, our method does not guarantee
that every component in the solution will have a smaller error in every single case.
We also used the MVE for refining the pose estimated using cosmos-based recognition system
for free-form objects [14]. The rotational component of the transformation of a test view
of Vase2 (View 1 shown in Figure 6(a)) relative to its best-matched model view (View 2 shown
in Figure 6(b)) was estimated using surface normals of corresponding surface patch-groups determined
by the recognition system. A total of 10 pairs of corresponding surface patch-groups was
used to estimate the average rotation axis and the angle of rotation. These rotation parameters
were used to compute the 3 × 3
rotation matrix which was then used as an initial guess to register the model view (View 2) with
the test view (View 1) of Vase2 using the MVE. We note here that the computational procedure
for MVE was augmented using a verification mechanism for checking the validity of the control
points during its implementation [17]. We derived the results presented in this section using this
augmented procedure. Figures 6(c)-(g) show the iterative registration of the model view with the
scene view. It can be seen that the views are in complete registration with one another at the end
of seven iterations.
Figure
7 shows the registration of a model view with a scene view of Phone through several
iterations of the algorithm. The registration scheme converged with the lowest error value at the
(a) (b)
(c) (d) (e) (f) (g)
Figure 6: Pose estimation: (a) View 1 of a vase; (b) View 2; (c)-(f) model view registered with the
test view of Vase2 at the end of the first, third, fourth and fifth iterations; (g) registered views at
the convergence of the algorithm.
(a) (b)
(c) (d) (e) (f) (g)
Figure
7: Registration of views of Phone: (a) View 1; (b) view 2; (c)-(f) model view registered with
the test view of Phone at the end of first, second, third and fourth iterations; (g) registered views
at the convergence of the algorithm.
sixth iteration. It can be seen that even with a coarse initial estimate of the rotation, the registration
technique can align the two views successfully within a few iterations. Given a coarse but correct initial
guess, registration, on the average, takes about seconds to register two range images whose sizes
are 640 × 480 on a SPARCstation 10 with 32 MB RAM.
5.2 Discussion
In general, all the orientation parameters of an object will be improved by the proposed MVE
method if the object surface covers a wide variety of orientations which is true with many natural
objects. This is because each locally flat surface patch constrains the global orientation estimate
of the object via its surface normal direction. For example, if the object is a flat surface, then only
the global orientation component that corresponds to the surface normal can be improved, but not
the other two components that are orthogonal to it. For the same reason, the surface normal of
a cylindrical surface (without end surfaces) covers only a great circle of the Gaussian sphere, and
thus, only two components of its global orientation can be improved. The more surface orientations
that an object covers, the more complete the improvement in its global orientation can be, by the
proposed MVE method. An analysis of the performance of the MVE and unweighted registration
algorithms with surfaces of various geometries can be found in [14].
When more than two views have to be registered, our algorithm for registering a pair of object
views can be used either sequentially (with the risk of error accumulation) or in parallel, e.g., with
the star-network scheme [9]. Note however, that we have not extended our weighted approach
to the problem of computing the transformation between n views simultaneously. When there
is a significant change in the object depth the errors in z at different points on the surface may
no longer have the same variance; the variance typically increases with greater depth. In such
situations our perturbation analysis still holds, except for the covariance matrix Γ_{A^T} in Eq. (9). The
diagonal elements of this matrix will no longer be identical as we assumed. Each element, which is a
summary of the noise variance at the corresponding point in the image, must reflect the combined
effect of variation due to depth, measurement unreliability due to surface inclination, etc., and
therefore a suitable noise model must be assumed or experimentally created.
6 Summary
Noise in surface data is a serious problem in registering object views. The transformation that
relates two views should be estimated robustly in the presence of errors in surface measurements
for seamless view integration. We established the dependency between the surface orientation and
the accuracy of surface normal estimation in the presence of error in range data, and its effect on
the estimation of transformation parameters with geometrical analysis and experimental results.
We proposed a new error model to handle uncertainties in z measurements at different orientations
of the surface being registered. We presented a first-order perturbation analysis of the estimation
of planar parameters from surface data. We derived the variance of the point-to-plane distance to
be minimized to update the view transformation during registration. We employed this variance
as a measure of the uncertainty in the distance resulting from noise in the z value and proposed
a minimum variance estimator to estimate transformation parameters reliably. The results of our
experiments on real range images have shown that the estimates obtained using our MVE generally
are significantly more reliable than those computed with an unweighted distance criterion.
Acknowledgments
This work was supported by a grant from Northrop Corporation. We thank the reviewers for their
helpful suggestions for improvement.
--R
"Object modelling by registration of multiple range images,"
"Integrating information from multiple views,"
"Generating models for solid objects by matching 3D surface segments,"
"A method for registration of 3-D shapes,"
"Zippered polygon meshes from range images,"
"Iterative point matching for registration of free-form curves and surfaces,"
"Registering multiview range data to create 3D computer graphics,"
"Registering range views of multipart objects,"
"Towards a general multi-view registration technique,"
"Pose estimation from corresponding point data,"
"Scene reconstruction and description: Geometric primitive extraction from multiple view scattered data,"
"Surface classification: Hypothesis testing and parameter estima- tion,"
Experiments in 3D CAD-based Inpection using Range Images
cosmos: A Framework for Representation and Recognition of 3D Free-Form Objects
"Motion and structure estimation from stereo image sequences,"
"Motion and structure from two perspective views: Algorithms, error analysis, and error estimation,"
"From images to models: Automatic 3D object model construction from multiple views,"
| 3D free-form objects;automatic object modeling;view transformation estimation;image registration;range data;view integration |
264997 | Datapath scheduling with multiple supply voltages and level converters. | We present an algorithm called MOVER (Multiple Operating Voltage Energy Reduction) to minimize datapath energy dissipation through use of multiple supply voltages. In a single voltage design, the critical path length, clock period, and number of control steps limit minimization of voltage and power. Multiple supply voltages permit localized voltage reductions to take up remaining schedule slack. MOVER initially finds one minimum voltage for an entire datapath. It then determines a second voltage for operations where there is still schedule slack. New voltages can be introduced and minimized until no schedule slack remains. MOVER was exercised for a variety of DSP datapath examples. Energy savings ranged from 0% to 50% when comparing dual to single voltage results. The benefit of going from two to three voltages never exceeded 15%. Power supply costs are not reflected in these savings, but a simple analysis shows that energy savings can be achieved even with relatively inefficient DC-DC converters. Datapath resource requirements were found to vary greatly with respect to number of supplies. Area penalties ranged from 0% to 170%. Implications of multiple voltage design for IC layout and power supply requirements are discussed. | INTRODUCTION
A great deal of current research is motivated by the need for decreased power dissipation
while satisfying requirements for increased computing capacity. In portable
An earlier abbreviated version of this work was reported in the Proceedings of the 1997 IEEE
International Symposium on Circuits and Systems, Hong Kong.
This research was supported in part by ARPA (F33615-95-C-1625), NSF CAREER award
(9501869-MIP), ASSERT program (DAAH04-96-1-0222), IBM, AT&T/Lucent, and Rockwell.
Authors' address: School of Electrical and Computer Engineering Purdue University, West
Lafayette, Indiana, 47907-1285, USA
Permission to make digital or hard copies of part or all of this work for personal or classroom use is
granted without fee provided that copies are not made or distributed for profit or direct commercial
advantage and that copies show this notice on the first page or initial screen of a display along
with the full citation. Copyrights for components of this work owned by others than ACM must
be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on
servers, to redistribute to lists, or to use any component of this work in other works, requires prior
specific permission and/or a fee. Permissions may be requested from Publications Dept, ACM
Inc., 1515 Broadway, New York, NY 10036 USA, fax +1 (212) 869-0481, or permissions@acm.org.
c
1997 by the Association for Computing Machinery, Inc.
systems, battery life is a primary constraint on power. However, even in non-portable
systems such as scientific workstations, power is still a serious constraint
due to limits on heat dissipation.
One design technique that promises substantial power reduction is voltage scaling.
The term "voltage scaling" refers to the trade-off of supply voltage against circuit
area and other CMOS device parameters to achieve reduced power dissipation while
maintaining circuit performance. The dominant source of power dissipation in a
conventional CMOS circuit is due to the charging and discharging of circuit
capacitances during switching. For static CMOS, the switching power is proportional
to V²_dd [Rabaey 1996]. This relationship provides a strong incentive to lower
supply voltage, especially since changes to any other design parameter can only
achieve linear savings with respect to the parameter change. The penalty of voltage
reduction is a loss of circuit performance. The propagation delay of CMOS is
approximately proportional to V_dd/(V_dd - V_T)² [Rabaey 1996], where V_T is the transistor
threshold voltage.
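These first-order scaling relations can be sketched numerically; V_T = 0.7 V and the 5 V to 3.3 V step are illustrative values, not from the paper.

```python
def relative_energy(v, v_ref):
    """Switching energy scales as Vdd^2 (first-order static CMOS model)."""
    return (v / v_ref) ** 2

def relative_delay(v, v_ref, vt=0.7):
    """Gate delay scales roughly as Vdd / (Vdd - VT)^2."""
    def d(x):
        return x / (x - vt) ** 2
    return d(v) / d(v_ref)

energy_ratio = relative_energy(3.3, 5.0)  # about 0.44: under half the energy
delay_ratio = relative_delay(3.3, 5.0)    # about 1.8: gates get slower
```

Dropping from 5.0 V to 3.3 V cuts switching energy by more than half while making gates roughly 1.8x slower under this model, which is the trade-off architecture-driven voltage scaling tries to buy back with parallelism and pipelining.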
A variety of techniques are applied to compensate for the loss of performance
with respect to V dd including reduction of threshold voltages, increasing transistor
widths, optimizing the device technology for a lower supply voltage, and shortening
critical paths in the data path by means of parallel architectures and pipelining.
Data path designs can benefit from voltage scaling even without changes in device
technologies. Algorithm transformations and scheduling techniques can be used to
increase the latency available for some or all data path operations. The increased
latency allows an operation to execute at a lower supply voltage without violating
schedule constraints. "Architecture-Driven Voltage Scaling" is a name applied to
this approach.
A number of researchers have developed systems or proposed methods that incorporate
architecture driven voltage scaling [Chandrakasan et al. 1995; Raghunathan
and Jha 1994; Raghunathan and Jha 1995; Goodby et al. 1994; Kumar et al. 1995;
SanMartin and Knight 1995; Raje and Sarrafzadeh 1995; Gebotys 1995]. HYPER-
LP [Chandrakasan et al. 1995] is a system that applies transformations to the data
flow graph of an algorithm to optimize it for low power. Other systems accept the
algorithm as given and apply a variety of techniques during scheduling, module
selection, resource binding, etc. to minimize power dissipation. All of the systems
mentioned above try to exploit parallelism in the algorithm to shorten critical paths
so that reduced supply voltages can be used. Most systems [Chandrakasan et al.
1995; Raghunathan and Jha 1994; Raghunathan and Jha 1995; Goodby et al. 1994;
Kumar et al. 1995; Gebotys 1995] also minimize switched capacitance in the data
path.
Most voltage scaling approaches require that the IC operate at a single supply
voltage. Although substantial energy savings can be realized with a single minimum
supply voltage, one cannot always take full advantage of available schedule slack
to reduce the voltage. Non-uniform path lengths, a fixed clock period, and a fixed
number of control steps can all result in schedule slack that is not fully exploited.
Figure 1 provides examples of each type of bottleneck. When there are non-uniform
path lengths, the critical (longest) path determines the minimum supply voltage
even though the shorter path could execute at a still lower voltage and meet timing
constraints. When the clock period is a bottleneck, some operations only use part
of a clock period. The slack within these clock periods goes to waste. Additional
voltages would permit such operations to use the entire clock period. Finally, a fixed
number of control steps (resulting from a fixed clock period and latency constraint)
may lead to unused clock cycles if the sequence of operations does not match the
number of available clock cycles.
Fig. 1. Examples of scheduling bottlenecks: unused slack arising from non-uniform path lengths, the clock period, and the number of control steps
Literature on multiple voltage synthesis is limited, but this is changing. Publications
that address the topic include [Raje and Sarrafzadeh 1995], [Gebotys
1995], and [Johnson and Roy 1996]. Raje and Sarrafzadeh [Raje and Sarrafzadeh
1995] schedule the data path and assign voltages to data path operators so as to
minimize power given a predetermined set of supply voltages. Logic level conversions
are not explicitly modeled in their formulation. Gebotys [Gebotys 1995] used
an integer programming approach to scheduling and partitioning a VLSI system
across multiple chips operating at different supply voltages. Johnson [Johnson and
Roy 1996] used an integer program to choose voltages from a list of candidates,
schedule datapath operations, model logic level conversions, and assign voltages
to each operation. Chang and Pedram [Chang and Pedram 1996] address nearly
the same problem, applying a dynamic programming approach to optimize non-pipelined
datapaths and a modified list scheduler to handle functionally pipelined
datapaths.
2. DATAPATH SPECIFICATIONS
A datapath is specified in the form of a data flow graph (DFG) where each vertex
represents an operation and each arc represents a data flow or latency constraint.
This DFG representation is similar to the "sequencing graph" representation described
by DeMicheli [DeMicheli 1994] except that hierarchical and conditional
graph entities are not supported.
The DFG is a directed acyclic graph, G(V, E), with vertex set V and edge set E.
Each vertex corresponds one-to-one with an operator in the data path. Each
edge corresponds one-to-one with a dependency between two operators: a data
flow, a latency constraint, or both. Associated with each vertex is an attribute
that specifies the operator type such as adder, multiplier, or null operation (NO-
OP). Associated with each edge is an attribute that indicates a latency constraint
between the start times of the source and destination operations. A positive value
indicates a minimum delay between operation start times. The magnitude of a
negative value specifies a maximum allowable delay from the destination to the
source. Figure 2 provides a simple example of a datapath specification and defines
elements of the DFG notation.
[Figure 2 legend: data-flow edges labeled with minimum latency in clock cycles; adder and 2's-complement-multiplier operators; source and sink vertices; a constraint edge imposing a maximum latency of one sample period.]
Fig. 2. Sample datapath specification and key to notation
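Both edge semantics reduce to one linear constraint on start times, start[v] - start[u] >= w; a sketch (operation names and weights are illustrative, not from Figure 2):

```python
def check_schedule(start, edges):
    """Verify start times against DFG latency edges (u, v, w):
    start[v] - start[u] >= w.  Positive w is a minimum delay from u to v;
    negative w caps the delay from the destination v back to the source u
    at |w| clock cycles."""
    return all(start[v] - start[u] >= w for u, v, w in edges)

# A multiply feeds an add with a 2-cycle minimum latency; a reverse edge
# bounds the add's start to at most 3 cycles after the multiply's.
edges = [("mul", "add", 2), ("add", "mul", -3)]
assert check_schedule({"mul": 0, "add": 2}, edges)
```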
Two types of NO-OPs are used, which we will refer to as "transitive" and "non-transitive" NO-OPs. The term "transitive" indicates that such a NO-OP
propagates signals without any delay or cost. Neither type of NO-OP introduces
delay or power dissipation. Both serve as vertices in the DFG to which latency
constraints can be attached. The transitive NO-OP is treated as if signals and
their logic levels are propagated through the NO-OP.
3. MOVER SCHEDULING ALGORITHM
MOVER will generate a schedule, select a user specified number of supply voltage
levels, and assign voltages to each operation. MOVER uses an ILP method to evaluate
the feasibility of candidate supply voltage selections, to partition operations
among different power supplies, and to produce a minimum area schedule under
latency constraints once voltages have been selected. The algorithm proceeds in
several phases. First, MOVER determines maximum and minimum bounds on the
time window in which each operation must execute. It then searches for a minimum
single supply voltage. Next, MOVER partitions datapath operations into two
groups: those which will be assigned to a higher supply voltage and those which
will be assigned to a lower supply voltage. The high voltage group is initially fixed
to a voltage somewhat above the minimum single voltage. MOVER then searches
for a minimum voltage for the lower group. The voltage of the lower group is fixed.
A new minimum voltage for the upper group is sought. To find a three supply
schedule, partition the lower voltage group and search for new minimum voltages
for bottom, middle, and upper groups.
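The first phase, bounding each operation's start-time window, can be sketched as an ASAP/ALAP computation. The repeated-relaxation loop below is a simple stand-in for a topological-order sweep and assumes non-negative minimum-latency edges and a deadline no smaller than the critical path.

```python
def time_windows(n_ops, edges, deadline):
    """ASAP/ALAP start-time windows under minimum-latency edges (u, v, w).
    Vertices are 0..n_ops-1; returns a (earliest, latest) pair per vertex."""
    asap = [0] * n_ops          # earliest start: longest path from sources
    alap = [deadline] * n_ops   # latest start: deadline minus path to sinks
    for _ in range(n_ops):      # enough passes to propagate along any path
        for u, v, w in edges:
            asap[v] = max(asap[v], asap[u] + w)
            alap[u] = min(alap[u], alap[v] - w)
    return list(zip(asap, alap))
```

For a three-operation chain with latencies 1 and 2 and a deadline of 5 control steps, the windows are (0, 2), (1, 3), and (3, 5); the width of each window is the schedule slack MOVER can trade for a lower supply voltage.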
3.1 ILP Formulation
At the core of MOVER is an integer linear program (ILP) that is used repeatedly
to evaluate possible supply voltages, partition operations between different power
supplies, and produce a schedule that minimizes resource usage. In each case,
MOVER analyzes the DFG and generates a collection of linear inequalities that
represent precedence constraints, timing constraints, and resource constraints for
the datapath to be scheduled. A weighted sum of the energy dissipation for each
operation is used as the optimization objective when partitioning operations or
evaluating the feasibility of a supply voltage. A weighted sum of resource usage
serves as the optimization objective when minimizing resources. The inequalities
and the objective function are packed into a matrix of coefficients that are fed into
an ILP program solver (CPLEX). MOVER interprets the results from CPLEX and
annotates the DFG to indicate schedule times and voltage assignments.
The architectural model assumed by MOVER is depicted in Figure 3. All operator
outputs have registers. Each operator output feeds only one register. That
register operates at the same voltage as the operator supplying its input. All level
conversions, when needed, are performed at operator inputs.
operator
operator
operator register level
converter
Fig. 3. MOVER architectural model
MOVER's ILP formulation works on a DFG where voltage assignments for some
operations may already be fixed. For operations not already fixed to a voltage, the
formulation chooses between two closely spaced voltages so as to minimize energy.
The voltages are chosen to be close enough together that level conversions from
one to the other can be ignored. Consequently, level conversions only need to be
accounted between operations fixed to different voltages and on interfaces between
fixed and unfixed operations.
3.2 ILP Decision Variables
Three categories of decision variables are used in the MOVER ILP formulation.
One set of variables of the form x_{i,l,s} indicates the start time and supply voltage
assignment for each operator that has not already been fixed to a particular supply
voltage. x_{i,l,s} = 1 indicates that operation i begins execution on clock cycle l
using supply voltage s. Under any other condition, x_{i,l,s} will equal zero. The
supply voltage selection is limited to two values, where s = 1 selects the lower and
s = 2 selects the higher candidate voltage. Another set of variables, x_{i,l}, indicates
the start time of operations for which the supply voltage has been fixed. x_{i,l} = 1
indicates that operation i starts at clock cycle l. Under any other condition, x_{i,l} will
equal zero. The last group of variables, a_{m,s}, indicates the allocation of operator
resources to each possible supply voltage. a_{m,s} will be greater than or equal to the
number of resources of type m that are allocated to supply voltage s. In this case,
s can be an integer in the range (1, #fixed supplies + 2); s ≤ 2 corresponds to
the new candidate supply voltages, and s > 2 corresponds to supply voltages that have
already been fixed.
3.3 Objective Functions
The objective function (equation 1) estimates the energy required for one execution
of the data path as a function of the voltage assigned to each operation. Consider
the energy expression split into two parts. The first nested summation counts the
total energy contribution associated with operations not already fixed to a supply
voltage. The second nested summation counts the total energy contribution of
operations that are already fixed to a particular supply voltage.
For each operation j that has not been fixed to a supply voltage (e.g., j ∈ V_free),
the first nested summation accumulates the energy of operation j (onrg(j, s_j)),
the register at the output of operation j (rnrg(s_j, fanout_j)), and any level conversions
required at the input to operation j. s_j is the index of the supply
voltage assigned to operation j. fanout_j is the fanout capacitive load on operation
j. c_reg is the input capacitance of a register to which the operation output
is connected. The decision variables x j;l;s are used to select which lookup table
values for operator, register, and level conversion energy are added into the total
energy. We must sum over both candidate supply voltages s j and all clock cycles l
in the possible execution time window R j of operation j. E conv is the set of DFG
arcs that may require a level conversion, depending on voltage assignments. V oper
is the set of DFG vertices that are not NO-OPs. V fix is the set of DFG vertices
(operations) that have been fixed to a particular voltage. V free is the set of vertices
that have not previously been fixed to a voltage.
For each operation j that has been fixed to a supply voltage, we again accumulate
the energy of each operation, register, and level conversion. The only difference
from the expression for free operations is that now all voltages in the expression
are constants determined prior to solving the ILP formulation. Consequently, the
index s j can be removed from the summation and the decision variable x.
Energy = SUM_{j in V_free ∩ V_oper} SUM_{s_j} SUM_{l in R_j} x_{j,l,s_j} [ onrg(j, s_j) + rnrg(s_j, fanout_j) + cnrg_free(j, s_j) ]
       + SUM_{j in V_fix ∩ V_oper} SUM_{l in R_j} x_{j,l} [ onrg(j, s_j) + rnrg(s_j, fanout_j) + cnrg_fix(j) ]      (1)

cnrg_free and cnrg_fix (equations 2 and 3) give the level conversion energy at the input
of free and fixed operations respectively; each sums the per-arc conversion energies over
the incoming arcs (i, j) in E_conv, with separate terms for source operations i in V_fix
and source operations i in V_free. c_in_i is the input capacitance of operation i.
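For intuition, the objective can be evaluated for a fixed assignment by simple table lookup. The lookup values and helper names below (onrg, rnrg, cnrg) are placeholders chosen for illustration, not MOVER's calibrated models:

```python
# Hypothetical energy tables in pJ, indexed by voltage level s (1=low, 2=high).
onrg = {("mul0", 1): 1200.0, ("mul0", 2): 2966.0,    # operator energy
        ("add0", 1): 34.0, ("add0", 2): 84.0}
rnrg = {1: 126.0, 2: 312.0}                          # output-register energy
cnrg = {("mul0", "add0"): 10.0}                      # conversion energy per E_conv arc

# Assignment under evaluation: operation -> (start cycle, voltage level).
assign = {"mul0": (0, 1), "add0": (2, 2)}

energy = 0.0
for j, (l, s) in assign.items():
    energy += onrg[(j, s)] + rnrg[s]     # operator + register terms of equation 1
for (i, j), e in cnrg.items():
    if assign[i][1] < assign[j][1]:      # step-up interface needs a level converter
        energy += e                      # 1200 + 126 + 84 + 312 + 10 pJ in total

print(energy)
```

The ILP achieves the same effect linearly: the x variables select which precomputed lookup-table entries contribute to the sum.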
Equation 4 is the objective function used when minimizing resource usage. Here,
a_{m,s} indicates the minimum number of operators of type m with supply voltage s
needed to implement a datapath. Each operation of type m is considered to have an
area of area_m. M_oper represents the set of all operation types excluding NO-OPs.
The summation accumulates an estimate of the total circuit resources required to
implement a datapath.
Area = SUM_{m in M_oper} SUM_{s} area_m x_{m,s}... Area = SUM_{m in M_oper} SUM_{s} area_m a_{m,s}      (4)
3.4 ILP Constraint Inequalities
Equation 5 guarantees that only one start time l is assigned to each operation i
for which the supply voltage is already fixed. Equation 6 guarantees that only one
start time l and supply voltage s can be assigned to each operation i that does not
have a supply voltage assignment.

SUM_{l in R_i} x_{i,l} = 1      for all i in V_fix      (5)

SUM_{s} SUM_{l in R_i} x_{i,l,s} = 1      for all i in V_free      (6)
Equation 7 guarantees that the voltage of a transitive NO-OP j matches the
voltage of all operations supplying an input to the transitive NO-OP. V_trnoop is the
set of vertices in the DFG corresponding to transitive NO-OPs. E is the set of all
arcs in the DFG.

SUM_{l in R_j} x_{j,l,s} = SUM_{l in R_i} x_{i,l,s}      for all s, for all j in V_trnoop, for all (i, j) in E      (7)
Equation 8 enforces precedence constraints specified in the DFG. Simplified versions
of the constraint can be used if the source or destination operations are fixed to
a voltage. This constraint is an adaptation of the structured precedence constraint
shown by Gebotys [Gebotys 1992] to produce facets of the scheduling polytope.
Each arc (i, j) with a latency lat_{i,j} >= 0 specifies a minimum latency from the start
of operation i to the start of operation j. Equation 8 defines the set of precedence
constraint inequalities corresponding to DFG arcs where the source and destination
operations are both free (not fixed to a voltage). Simplified versions of this
constraint are used when source or destination operations are fixed to a voltage.

SUM_{s_i} SUM_{l1 in R_i, l1 >= l} x_{i,l1,s_i} + SUM_{s_j} SUM_{l2 in R_j, l2 <= l+lat_{i,j}-1} x_{j,l2,s_j} <= 1
      for all (i, j) in E with lat_{i,j} >= 0, for all l      (8)
Equation 9 enforces maximum latency constraints specified in the DFG. Each
arc (i, j) with a latency lat_{i,j} < 0 specifies a maximum delay from operation j to
operation i. Equation 9 defines the set of maximum latency constraint inequalities
corresponding to arcs where the source and destination operations are both
free (not fixed to a voltage). Simplified versions of this constraint are used when
source or destination operations are fixed to a voltage.

SUM_{s_i} SUM_{l1 in R_i, l1 <= l} x_{i,l1,s_i} + SUM_{s_j} SUM_{l2 in R_j, l2 >= l-lat_{i,j}+1} x_{j,l2,s_j} <= 1
      for all (i, j) in E with lat_{i,j} < 0, for all l      (9)
Equations 10 and 11 ensure that resource usage during each time step does not
exceed the resource allocation given by a_{m,s}. The expressions on the left compute
the number of operations of type m with supply voltage s that are executing concurrently
during clock cycle l. a_{m,s} indicates the number of type m resources that have
been allocated to supply voltage s. Equation 10 enforces the resource constraint
for free operations; equation 11 enforces the constraint for fixed operations. Free
operations are allowed to take on one of two candidate voltages. These resource
constraints can be easily modified to support functional pipelining with a sample
period of l_samp by combining the left hand sides for l, l + l_samp, l + 2 l_samp,
and so on.

SUM_{i in V_free, type(i)=m} SUM_{l1=l-del_{i,s}+1}^{l} x_{i,l1,s} <= a_{m,s}      for all l, m, s      (10)

SUM_{i in V_fix, type(i)=m, s_i=s} SUM_{l1=l-del_{i,s}+1}^{l} x_{i,l1} <= a_{m,s}      for all l, m, s      (11)
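The left-hand sides of equations 10 and 11 simply count concurrent executions. A sketch of that count, with an invented three-operation schedule:

```python
# Hypothetical schedule: op -> (resource type, start cycle, supply level, delay).
sched = {
    "mul0": ("mult", 0, 1, 2),
    "mul1": ("mult", 1, 1, 2),
    "add0": ("add", 0, 2, 1),
}

def usage(sched, m, s, l):
    """Operations of type m at supply s executing during cycle l: an operation
    started at l1 with delay d occupies cycles l1 .. l1 + d - 1."""
    return sum(1 for (ty, l1, sv, d) in sched.values()
               if ty == m and sv == s and l1 <= l <= l1 + d - 1)

# mul0 occupies cycles 0-1 and mul1 occupies 1-2; they overlap at cycle 1,
# so the allocation a[("mult", 1)] must be at least 2 to satisfy equation 10.
peak = max(usage(sched, "mult", 1, l) for l in range(4))
assert peak == 2
```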
Table I. Voltage search algorithm
1. Choose starting voltages V1 and V2.
2. Create matrix of ILP constraint inequalities.
3. Obtain minimum energy solution to inequalities.
The solution will provide a schedule, a mapping of V1 or V2 to each
operator, an energy estimate, and an area estimate for the datapath.
4a. If a solution was found, then
If most operations were assigned to V1 , then
Choose new candidate voltages midway between V1 and V lo .
Go to step 2.
else
There must be little or no benefit to assigning operations to V1
Fix all operations to V2
4b. else (if the problem was infeasible)
Choose new candidate voltages midway between V2 and Vhi .
Go to step 2.
Equation 12 enforces the user specified resource constraints. maxres(m) represents
the total number of resources of type m (regardless of voltage) that can be
permitted. The left side expression accumulates the number of resources of type m
that have been allocated to all supply voltages. The total is not allowed to exceed
the user specified number of resources.
SUM_{s} a_{m,s} <= maxres(m)      for all m in M_oper      (12)
3.5 Voltage search
MOVER searches a continuous range of voltages when seeking a minimum voltage
one, two, or three power supply design. The user must specify a convergence
threshold V conv that is used to determine when a voltage selection is acceptably
close to minimum. Let V hi and V lo represent the current upper and lower bound
on the supply voltage.
When searching for a minimum single supply voltage, all operations are initially
considered to be free (not fixed to a voltage). When searching for a minimum set of
two or three supply voltages, MOVER considers one power supply at a time. The
voltage will be fixed for any operations not allocated to the supply voltage under
consideration. Table I outlines the voltage search algorithm.
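The search of Table I is essentially a bisection over the voltage range, with the ILP acting as a feasibility oracle. A minimal sketch, with a stand-in oracle in place of the solver:

```python
def min_single_voltage(feasible, v_lo=1.5, v_hi=5.0, v_conv=0.05):
    """Bisect [v_lo, v_hi] for the lowest voltage at which `feasible` succeeds.
    `feasible(v)` stands in for building and solving the ILP at voltage v."""
    best = None
    while v_hi - v_lo > v_conv:          # stop within the convergence threshold
        v = (v_lo + v_hi) / 2.0
        if feasible(v):
            best, v_hi = v, v            # solution found: try lower voltages
        else:
            v_lo = v                     # infeasible: move back up toward v_hi
    return best

# Stand-in oracle: pretend the schedule is feasible at 2.3V and above.
v_min = min_single_voltage(lambda v: v >= 2.3)
```

With this oracle the search converges to within v_conv of 2.3V. MOVER's actual loop (Table I) also reacts to how many operations land on the lower candidate voltage, which the sketch omits.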
3.6 Partitioning
Partitioning is the process by which MOVER takes all free operations in the DFG
and allocates each to one of two possible power supplies. Partitioning is not performed
until a single minimum supply voltage is known for the group of operations. Let
V_1 be that minimum supply voltage for the free operations. Choose two candidate
supply voltages (V_a and V_b), one slightly above V_1 and the other slightly
below.
Set up the ILP constraint inequalities. Obtain a minimum energy schedule.
Operations will only be assigned to V a if there is schedule slack available. There
may be several ways that the operations can be partitioned. In such a case, the
optimal ILP solution will maximize the energy dissipation of the lower voltage group
(i.e., put the most energy hungry operations in the lower voltage group). This will
tend to maximize the benefit from reducing the voltage of the lower group.
Given a successful partition, operations assigned to V a will be put into the lower
supply voltage group and operations assigned to V b will be put into the higher
supply voltage group.
The partition will fail if all operations are allocated to the lower supply voltage,
all operations are allocated to the higher supply voltage, or the ILP solver exceeds
some resource limit. The first situation indicates that the minimum single voltage
could be a bit lower. In this event, MOVER lowers the values of V_a and V_b by V_conv and tries the partition again. Lowering V_a and V_b too far leads to a completely
infeasible ILP problem. The second situation indicates that there is not enough
schedule slack available for any operations to bear a further reduction in voltage.
In this case, MOVER terminates. The only remedies for the third situation are
to either increase resource and time limits on the ILP solver or make the problem
smaller.
4. CHARACTERIZATION OF DATAPATH RESOURCES
The results presented in this paper make use of four types of circuit resources: an
adder, multiplier, register, and level converter. MOVER requires models of the
energy and delay of each type of resource as a function of supply voltage, load
capacitance, and average switching activity. Each type of resource was simulated
in HSPICE using 0.8 micron MOSIS library models with the level 3 MOS model.
Energy dissipation, worst case delay, and input capacitances were measured from
the simulation. All resources were 16 bits wide. Load capacitance on each output
was 0.1pF. Input vectors were generated to provide 50% switching activities.
4.1 Datapath operators and registers
During optimization, operation energies and delays are scaled as a function of the
voltage assignment being evaluated. Energy dissipation (E) for each operator and
register scales with respect to supply voltage as E = E_0 (V / V_0)^2.
Table II. Nominal energy and delay values used by MOVER

Resource Type   Energy [pJ]   dEnergy/dC [pJ/pF]   Delay [ns]   dDelay/dC [ns/pF]   Cin [pF]
ADDER                84             200               12.0            3.5             0.021
MULTIPLIER         2966             200               18.5            3.33            0.095
REGISTER            312             200                0.48           2.25            0.045
E_0 is the energy dissipation of the operator or register measured at the
nominal supply voltage V_0.
The delay of each operator and register scales with respect to supply voltage as
t_p = t_p0 (V / V_0) ((V_0 - V_T) / (V - V_T))^2,
where t_p0 is the propagation delay measured at the nominal supply voltage V_0.
The energy and delay scaling factors were derived directly from the CMOS energy
and delay equations described by Rabaey [Rabaey 1996]. Energy and delay are also
scaled linearly with respect to the estimated load capacitance on output signals.
Table II gives the model parameters used by MOVER for each type of resource.
Note that the register delay given here is just the propagation time relative to a
clock edge. Register setup time is treated as part of the datapath operator delays.
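As an illustration of these scaling rules applied to the Table II numbers, the sketch below assumes a nominal supply V_0 = 5V and threshold V_T = 0.8V; both are plausible for the 0.8 micron library but are not stated in the text:

```python
def scale_energy(e0, v, v0=5.0):
    """Switching energy scales with the square of the supply voltage."""
    return e0 * (v / v0) ** 2

def scale_delay(t0, v, v0=5.0, vt=0.8):
    """First-order CMOS delay scaling, t proportional to V / (V - VT)^2."""
    return t0 * (v / (v - vt) ** 2) / (v0 / (v0 - vt) ** 2)

# Table II adder (84 pJ, 12.0 ns at the nominal supply), rescaled to 3.3V:
adder_e = scale_energy(84.0, 3.3)   # about 36.6 pJ
adder_t = scale_delay(12.0, 3.3)    # about 22.4 ns
```

Load-capacitance effects are handled separately: the dEnergy/dC and dDelay/dC columns of Table II scale linearly with the estimated load.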
4.2 Level conversion
Whenever one resource has to drive an input of another resource operating at a
higher voltage, a level conversion is needed at the interface. Four alternatives were
considered to accomplish this: omit the level converter, use a chain of inverters
at successively higher voltages, use an active or passive pullup, or use a differential
cascode voltage switch (DCVS) circuit as a level converter [Chandrakasan
et al. 1994; Usami and Horowitz 1995]. We omit the level converter for step-down
conversions and use the DCVS circuit for step-up conversions. Given appropriate
transistor sizes, this circuit exhibits no static current paths and it can operate over
a full 1.5V to 5.0V range of input and output supply voltages.
A model was needed that could accurately indicate the power dissipation and
propagation delay of the DCVS level converter as a function of the input logic
supply voltage V 1 , output logic supply voltage V 2 , and load capacitance. The circuit
was studied both analytically and from HSPICE simulation results to determine
a suitable form for the model equations. Coefficients of the equations were then
calibrated so that the model equations would produce families of curves closely
matching simulation results for V_1 ranging from 1.5V to 5V and V_2 above V_1.
These are the ranges of supply voltages for which a level converter is needed. Typical
energy dissipation of the level converter was found to be on the order of 5 to 15pJ
per switching event per bit, given a 0.1pF load. Typical propagation delays range
were approximately 1ns for level conversions such as 3.3V to 5V or 2.4V to 3.3V.
Propagation delays become large as the input voltage of the level converter falls
towards 2V T . A 2.5V to 5V conversion had a delay of about 2.5ns. A 2V to 5V
conversion had a delay of nearly 5ns.
Fig. 4. DCVS Level Converter (all transistors 0.8u length and 4.0u width except where noted)
5. RESULTS
5.1 Datapath examples
ILP schedule optimization results are presented for six example data paths: a four
point FFT (FFT4), the 5th order elliptic wave filter benchmark (ELLIP) [Rao
1992], a 6th order Auto-Regressive Lattice filter (LATTICE), a frequency sampled
filter (FSAMP) with three 2nd order stages and one 1st order stage, a direct form
9 tap linear phase FIR filter (LFIR9), and a 5th order state-space realization of an
IIR filter (SSIIR). In the FFT data path, complex signal paths are split into real
and imaginary data flows. For all other data paths, the signals are modeled as non-complex
integer values. All data flows were taken to be 16 bits wide. Switching
activities at all nodes were assumed to be 50%, i.e., the probability of a transition
on any selected 1 bit signal is 50% in any one sample interval.
Each example was modeled for one sample period with data flow and latency
constraints specified for any feedback signals. Any loops that start and finish
within the same sample period were completely unrolled. Any loops spanning
multiple sample periods were broken. A data flow passing from one sample period
to the next was represented by input and output nodes in the DFG connected by
a backward arc to specify a maximum latency constraint from the input to the
output. A 20ns clock was specified for all examples. Latency constraints were
specified so that the data introduction interval equals the maximum delay from the
input to the output of the data path.
5.2 MOVER Results
Figure 5 presents energy reduction results. The left-most column identifies the
particular datapath topology and indicates the number of operations (additions,
Fig. 5. Multi-voltage Energy Savings (table of per-testcase latency and resource constraints, selected supply voltages, execution times, and normalized energy estimates for FFT4, ELLIP, LATTICE, FSAMP, LFIR9, and SSIIR)
multiplications, and sample period delays) performed in one iteration of the data-
path. "Max Lat/Clks" specifies the maximum latency (equal to the data sample
rate) and the maximum number of control steps (Clks), both given in terms of the
number of clock cycles. "Max +/-" specifies the maximum numbers of adder and
multiplier circuits permitted in the design. Values of "-" indicate that unlimited
resources were permitted. The columns headed by "Voltages 1 2 3" indicate
the supply voltages selected by MOVER. A "-" is used to fill voltage columns "2"
or "3" in those cases where a one or two supply voltage result is presented. The
string "NR" in voltage columns "1" and "2" indicates that a solution with two
supply voltages could not be obtained. "NR" in all three columns indicates that a
solution with three supply voltages could not be obtained. The "Exec" column reports
the minutes of execution time (Real, not CPU) required to obtain the result.
The number in parenthesis identifies the type of machine used to obtain the result.
"(1)" indicates a SPARCserver 1000 with 4 processors and 320MB of RAM. "(2)"
indicates a Sparc 5 with 64MB of RAM.
The bar graph down the center represents the normalized energy consumption of
each test case. Each energy result is divided by the single supply voltage, unlimited
resource, minimum latency result to obtain a normalized value. Single supply
voltage results are shown with black bars. All other results are shown in gray. This
style of presentation is intended to visually emphasize the effect of different latency,
resource, and supply voltage constraints on the energy estimate. The right-most
column presents the absolute energy estimate in picojoules.
Figure 6 presents area penalty results. All but two columns have the same meaning
as the corresponding columns in figure 5. The only exceptions are the bar graph
and the "area" column on the right. The "area" value is a weighted sum of the minimum
circuit resources required to implement the datapath schedule. The resources
(all 16 bits wide) were weighted as follows: adder=1, multiplier=16, register=0.75,
and level converter=0.15. These weights are proportional to the transistor count
of each resource. Each area value was divided by the area estimate for the corresponding
single voltage result. Each single voltage result is shown as a black bar.
Two and three voltage results are shown in gray.
5.3 Observations
The preceding results permit several observations to be made regarding the effect
of latency, circuit resource, and supply voltage constraints on energy savings, area
costs, and execution time. Because our primary objective has been to minimize
energy dissipation through use of multiple voltages, we are especially interested in
the comparison of multiple supply voltage results to minimum single supply voltage
results. Energy savings ranging from 0% to 50% were observed when comparing
multiple to single voltage results. Estimated area penalties ranged from a slight
improvement to a 170% increase in area. Actual area penalties could be higher,
since our estimate only considers the number of circuit resources used. There is
not a clear correlation between energy savings and area penalty when looking at
the complete set of results. Sometimes a substantial energy savings was achieved
with minimal increased circuit resources, other times even a small energy savings
incurred a large area cost.
Fig. 6. Multi-voltage Area Penalties (table of per-testcase latency and resource constraints, selected supply voltages, and area estimates normalized to the corresponding single voltage result)
If we consider the impact of latency constraints alone, effects on area and energy
are easier to observe. In most cases, multiple voltage area penalties were greatest
for the minimum latency unlimited resource test cases. We can also observe that
increasing latency constraints always led to the same or lower energy for a given
number of supply voltages. However, the effect of latency constraints on the single
vs. multiple voltage trade-off varied greatly from one example to another. Results
for multiple voltages are most favorable in situations where the single supply voltage
solution did not benefit from increased latency, perhaps due to a control step
bottleneck such as illustrated earlier in figure 1.
The effect of resource constraints on energy savings are also relatively easy to
observe. Not surprisingly, resource constraints tended to produce the lowest area
penalties. The only reason for any area penalty at all in the resource constrained
case is that sometimes the minimum single supply solution does not require all of
the resources that were permitted. Energy estimates based on resource constrained
schedules were consistently the same or higher than estimates based on unlimited
resource schedules.
The results presented previously do not include energy or area costs associated
with multiplexers that would be required to support sharing of functional units
and registers. However, an analysis of multiplexer requirements for most of these
schedules indicated that multiplexers would not have changed the relative trade-off
between number of voltages, energy dissipation, and circuit area. In a few cases
the energy and area costs were increased substantially (up to 50% for energy and
108% for area), but the comparison between one, two, and three voltages was always
either similar to the earlier results or shifted somewhat in favor of multiple voltages.
The maximum energy savings was 54%, and the average was 32% when comparing
two supply voltages to one. The maximum area penalty was 132% and the average
was 42%. Results for three supply voltages, at best, were only slightly better than
the two supply results.
Multiplexer costs were estimated in the following manner. A simple greedy algorithm
was used to assign a functional unit to each operation and a register to
each data value. Given this resource binding, we determined the fan-in to each
functional unit and register. Assuming a pass-gate multiplexer implementation, we
estimated worst case capacitance on signal paths, total gate capacitance switched
by control lines, and relative circuit area as a function of the fan-in and data bus
width. A single pass gate, turned on, was estimated to add a 5fF load to each data
input bit and 5fF to the control inputs of a multiplexer. The circuit area for a pass
gate was taken to be 0.07 times the area of one bit slice of a full adder. Multiplexer
capacitances and area were added to the costs already used by MOVER. MOVER
was then used to generate a new datapath schedule that accounts for these costs. In
some cases, supply voltages had to be elevated slightly relative to previous results
in order to compensate for increased propagation delays.
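A sketch of that multiplexer cost estimate, using the 5fF pass-gate loads and the 0.07 adder-bit-slice area weight from the text; the function itself and its exact accounting are illustrative assumptions:

```python
ADDER_BIT_SLICE = 1.0 / 16.0   # area of one bit slice, with a 16-bit adder = 1.0

def mux_cost(fanin, width=16):
    """Rough pass-gate multiplexer cost for a given fan-in and bus width."""
    if fanin <= 1:                 # a single source needs no multiplexer
        return {"data_fF": 0.0, "ctrl_fF": 0.0, "area": 0.0}
    gates = fanin * width          # one pass gate per input bit
    return {
        "data_fF": 5.0 * fanin,                  # load seen on each data bit
        "ctrl_fF": 5.0 * gates,                  # gate load on the control lines
        "area": 0.07 * ADDER_BIT_SLICE * gates,  # 0.07 adder slices per gate
    }

cost = mux_cost(3)    # a 3-input, 16-bit multiplexer
```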
6. DESIGN ISSUES
There are several design issues that a designer will need to take into consideration
when a multiple voltage design is targeted for fabrication. In particular, the effects
of multiple voltage operation on IC layout and power supply requirements should
be considered. In this section, we will discuss the issues and identify improvements
that would allow MOVER to more completely take them into account.
6.1 Layout
Following are some ways that multiple voltage design may affect IC layout.
(1) If the multiple supplies are generated off-chip, additional power and ground
pins will be required.
(2) It may be necessary to partition the chip into separate regions, where all operations
in a region operate at the same supply voltage.
(3) Some kind of isolation will be needed between regions operated at different
voltages.
(4) There may be some limit on the voltage difference that can be tolerated between
regions.
(5) Protection against latch-up may be needed at the logic interfaces between regions
of different voltage.
(6) Design rules for routing may be needed to deal with signals at one voltage
passing through a region at another voltage.
Isolation requirements between different voltage regions can probably be adequately
addressed by increased use of substrate contacts, separate routing of power
and ground, increased minimum spacing between routes (for example, between one
signal having a 2V swing and another with a 5V swing), and slightly increased
spacing between wells. While these practices will increase circuit area somewhat,
the effect should be small in comparison to increased circuitry (adders, multipliers,
registers, etc.) needed to support parallel operations at reduced supply voltages.
Area for isolation will be further mitigated by grouping together resources at a particular
voltage into a common region. Isolation is then only needed at the periphery
of the region.
Some of these layout issues can be incorporated into multiple voltage scheduling.
Perhaps the greatest impact will be related to grouping operations of a particular
supply voltage into a common region. Closely intermingled operations at different
voltages could lead to complex routing between regions, increased need for level
conversions, and increased risk of latch-up. Assigning highly connected operations
to the same voltage could not only improve routing, but should also lead to fewer
voltage regions on the chip, less space lost to isolation between voltage regions, and
fewer signals passing between regions operating at different voltages.
6.2 Circuit Design
There are some circuit design issues that still need to be addressed by MOVER
including alternative level converter designs and control logic design.
Alternative level converter designs such as the combined register and level converter
should be considered. The DCVS converter design considered in this paper
does not exhibit static power consumption, but short circuit energy is a problem.
Delays and energy also increase greatly as the input voltage to the level converter
becomes small.
MOVER makes assumptions about datapath control and clocking that are convenient
for scheduling and energy estimation, but will require support from the
control logic. It is assumed that the entire control of the datapath is accomplished
through selective clocking of registers and switching of multiplexers. This will require
specially gated clocks for each register.
6.3 Power Supplies
Before implementing a multiple voltage datapath, some decisions must be made
regarding the voltages that can be selected and the type of power supply to be
used. Regarding voltage selection, we must decide how many supplies to use and
determine whether or not non-standard voltages are acceptable. Regarding the type
of power supply, we will only consider the choice between generating the voltage
on-chip or off-chip. All of these choices will depend largely on the application. If
on chip heat dissipation is a primary constraint, voltages would be generated off
chip and DC-DC conversion efficiency would be a low priority. If battery life is the
bottleneck, DC-DC conversion efficiency will determine whether or not multiple
voltages will reap an energy savings.
A simple analysis provides some insight into the conditions under which a new
supply voltage could be justified. In a battery powered system, we would need a
DC to DC converter to obtain the new voltage. Let eta represent the efficiency of
the DC to DC converter. The efficiency can be most easily described as the power
output to the datapath divided by the power input to the DC-DC converter.
This model does not explicitly represent the effect of the amount of loading or
choice of voltages on converter efficiency. For now, we are only trying to determine
the degree of converter efficiency needed in order to make a new supply voltage
viable. Conversely, given a DC-DC converter of known efficiency, we want to know
how much voltage reduction is needed to justify use of the converter.
Let alpha represent the fraction of switched capacitance in the datapath that will
be allocated to the new supply voltage. V_1 represents the primary supply voltage.
V_2 represents the new reduced supply voltage under consideration. E_1 represents
the energy dissipation of the datapath operating with the single supply voltage V_1.
The energy E_1 can be split into a portion, alpha E_1, representing the circuitry that
will run at voltage V_2, and a remaining portion, (1 - alpha) E_1, that will continue
to run at V_1.
When the new supply voltage V_2 is introduced, the first term in the expression above
is scaled by the factor (V_2 / V_1)^2. The new datapath energy dissipation (ignoring
DC-DC conversion losses) becomes:

E_new = alpha E_1 (V_2 / V_1)^2 + (1 - alpha) E_1

However, the energy lost in the DC-DC converter equals the energy of the circuitry
operating at V_2 divided by the efficiency of the converter, less that energy itself:

E_lost = alpha E_1 (V_2 / V_1)^2 (1 / eta - 1)

A bit of algebraic manipulation will reveal the system energy savings (including
converter losses) as a function of alpha, eta, V_1, and V_2:

savings = (E_1 - E_new - E_lost) / E_1 = alpha (1 - (1 / eta)(V_2 / V_1)^2)
Consider a simple example in which 60% of the circuit can operate at voltage V_2.
Given an ideal DC-DC converter, the energy savings would be 36%. However, when
the converter efficiency is considered, the savings drops by more than half to 17%.
The break-even point occurs when eta = (V_2 / V_1)^2. For the last example, the
converter efficiency has to be at least 41% to avoid losing energy. In practice,
the break-even point will be somewhat
higher due to logic level conversions that will be required within the datapath.
The preceding analysis suggests that a DC to DC converter does not have to be
exceedingly efficient in order to achieve energy savings. Had the voltage reduction
been merely from 3.3V to 3.0V, DC-DC converter efficiency would have to be at least
83%. Converter designs are available that easily exceed this efficiency requirement.
Stratakos et al. [Stratakos et al. 1994] designed a DC-DC converter that achieves
better than 90% efficiency for a 6V to 1.5V voltage reduction.
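The savings analysis above is easy to evaluate numerically. The sketch below reproduces the worked example; alpha = 0.6 comes from the text, while V_1 = 3.3V and V_2 = 2.1V are assumed values consistent with the quoted 36% and 41% figures:

```python
def system_savings(alpha, eta, v1, v2):
    """Fractional energy savings when a fraction `alpha` of the switched
    capacitance moves to supply v2 behind a DC-DC converter of efficiency eta."""
    return alpha * (1.0 - (v2 / v1) ** 2 / eta)

def break_even_eta(v1, v2):
    """Converter efficiency below which the new supply loses energy."""
    return (v2 / v1) ** 2

alpha, v1, v2 = 0.6, 3.3, 2.1
ideal = system_savings(alpha, 1.0, v1, v2)   # about 0.36 with an ideal converter
floor = break_even_eta(v1, v2)               # about 0.40, the text's ~41% floor
```

The 3.3V-to-3.0V case from the text follows from the same formula: break_even_eta(3.3, 3.0) is about 0.83.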
7. CONCLUSIONS
In this paper we have presented MOVER, a tool which reduces the energy dissipation
of a datapath design through use of multiple supply voltages. An area
estimate is produced based on the minimum number of circuit resources required
to implement the design. One, two, and three supply voltage designs are generated
for consideration by the circuit designer. The user has control over latency
constraints, resource constraints, total number of control steps, clock period, voltage
range, and number of power supplies. MOVER can be used to examine and
trade-off the effects of each constraint on the energy and area estimates.
MOVER iteratively searches the voltage range for minimum voltages that will be
feasible in a one, two, and three supply solution. An exact ILP formulation is used
to evaluate schedule feasibility for each voltage selection. The same ILP formulation
is used to determine which operations are assigned to each power supply.
MOVER was exercised for six different datapath specifications, each subjected
to a variety of latency, resource, and power supply constraints for a total of 70
test cases. The test cases were modest in size, ranging from 13 to 26 datapath
operations and 2 to 24 control steps. The results indicate that some but not all
datapath specifications can benefit significantly from use of multiple voltages. In
many cases, energy was reduced substantially going from one to two supply voltages.
Improvements as much as 50% were observed, but 20-30% savings were more typical.
Adding a third supply produced relatively little improvement over two supplies, 15%
improvement at most. Results from MOVER are comparable and in many cases
better than results obtained using the MESVS (Minimum Energy Scheduling with
Voltage Selection) ILP formulation presented in [Johnson and Roy 1996]. Behavior
with respect to latency, resource, and supply voltage constraints is similar between
MOVER and MESVS. The improvement relative to a pure ILP formulation is due
to the fact that ILP formulation could only select from a discrete set of voltages,
whereas MOVER can select from a continuous range of voltages.
ACKNOWLEDGMENTS
We would like to thank James Cutler for his programming work, the low power
research group at Purdue, and the anonymous reviewers for their critiques.
REFERENCES
Optimizing power using transformations.
Design of portable systems.
Energy minimization using multiple supply voltages.
Synthesis and Optimization of Digital Circuits.
Optimal VLSI Architectural Synthesis: Area
An ILP model for simultaneous scheduling and partitioning for low power system mapping.
Microarchitectural synthesis of performance-constrained
Optimal selection of supply voltages and level conversions during data path scheduling under resource constraints.
Digital integrated circuits
Behavioral synthesis for low power.
An iterative improvement algorithm for low power data path synthesis.
Variable voltage scheduling.
The fifth order elliptic wave filter benchmark.
Clustered voltage scaling technique for low-power design
| DSP;low power design;multiple voltage;datapath scheduling;power optimization;scheduling;high-level synthesis;level conversion |
265177 | On Parallelization of Static Scheduling Algorithms. | Abstract: Most static algorithms that schedule parallel programs represented by macro dataflow graphs are sequential. This paper discusses the essential issues pertaining to parallelization of static scheduling and presents two efficient parallel scheduling algorithms. The proposed algorithms have been implemented on an Intel Paragon machine and their performances have been evaluated. These algorithms produce high-quality scheduling and are much faster than existing sequential and parallel algorithms. | 1 Introduction
Static scheduling utilizes the knowledge of problem characteristics to reach a global optimal, or
near optimal, solution. Although many people have conducted their research in various manners,
they all share a similar underlying idea: take a directed acyclic graph representing the parallel
program as input and schedule it onto processors of a target machine to minimize the completion
time. This is an NP-complete problem in its general form [7]. Therefore, many heuristic algorithms that produce satisfactory performance have been proposed [11, 13, 5, 14, 12, 4, 9].
Although these scheduling algorithms apply to parallel programs, the algorithms themselves
are sequential, and are executed on a single processor system. A sequential algorithm is slow.
Scalability of static scheduling is restricted since a large memory space is required to store the
task graph. A natural solution to this problem is using multiprocessors to schedule tasks to
multiprocessors. In fact, without parallelizing the scheduling algorithm and running it on a
parallel computer, a scalable scheduler is not feasible.
A parallel scheduling algorithm should have the following features:
ffl High quality - it is able to minimize the completion time of a parallel program.
ffl Low complexity - it is able to minimize the time for scheduling a parallel program.
These two requirements contradict each other in general. Usually, a high-quality scheduling algorithm
is of high complexity. The Modified Critical-Path (MCP) algorithm [13] offers good scheduling quality with relatively low complexity. In this paper, we propose two parallelized versions of MCP. We will describe the MCP algorithm in the next section. Then, we
will discuss different approaches for parallel scheduling, as well as existing parallel algorithms in
section 3. In sections 4 and 5, we will present the VPMCP and HPMCP algorithms, respectively.
A comparison of the two algorithms will be presented in section 6.
2 The MCP Algorithm
A macro dataflow graph is a directed acyclic graph with a starting point and an end point [13]. A
macro dataflow graph consists of a set of nodes {n_1, n_2, ..., n_k} connected by a set of edges, each of which is denoted by e(n_i, n_j). Each node represents a task, and the weight of a node is the
execution time of the task. Each edge represents a message transferred from one node to another
node, and the weight of the edge is equal to the transmission time of the message. When two
nodes are scheduled to the same processing element (PE), the weight of the edge connecting them
becomes zero.
To define this scheduling algorithm succinctly, we will first define the as-late-as-possible
time of a node. The ALAP time is defined as ALAP(n_i) = T_critical - level(n_i), where T_critical is the length of the critical path, and level(n_i) is the length of the longest path from node n_i to the end point, including node n_i [6]. In fact, high-quality scheduling algorithms more or less
rely on the ALAP time or level.
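The level and ALAP computation can be sketched in a few lines (an illustrative helper, not the authors' code; we assume edge weights count toward path lengths and that the node list is given in topological order):

```python
from collections import defaultdict

def alap_times(nodes, weight, edges):
    """Compute level(n) and ALAP(n) = T_critical - level(n) for a DAG.

    nodes  : node ids in topological order
    weight : dict node -> computation time
    edges  : dict (u, v) -> communication time of edge e(u, v)
    """
    succ = defaultdict(list)
    for (u, v) in edges:
        succ[u].append(v)
    level = {}
    # reverse topological order: successors are finished first
    for n in reversed(nodes):
        longest = max((edges[(n, v)] + level[v] for v in succ[n]), default=0)
        level[n] = weight[n] + longest      # path length includes n itself
    t_critical = max(level.values())        # length of the critical path
    alap = {n: t_critical - level[n] for n in nodes}
    return level, alap, t_critical
```

For a simple diamond graph, the node on the longer branch gets the smaller ALAP time and is therefore considered earlier by a list scheduler.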
The Modified Critical-Path (MCP) algorithm was designed to schedule a macro dataflow graph
on a bounded number of PEs.
The MCP Algorithm
1. Calculate the ALAP time of each node.
2. Sort the node list in an increasing ALAP order. Ties are broken by using the smallest
ALAP time of the successor nodes, the successors of the successor nodes, and so on.
3. Schedule the first node in the list to the PE that allows the earliest start time, considering idle time slots. Delete the node from the list and repeat Step 3 until the list is empty.
In step 3, when determining the start time, idle time slots created by communication delays
are also considered. A node can be inserted to the first feasible idle time slot. This method is
called an insertion algorithm. The MCP algorithm has been compared to four other well-known scheduling algorithms under the same assumptions, namely ISH [10], ETF [8], DLS [12], and LAST [3]. It has been shown that MCP performed the best [2].
The complexity of the MCP algorithm is O(n^2 log n), where n is the number of nodes in a graph. In the second step, the ties can be broken randomly to obtain a simplified version of MCP. The scheduling quality varies only a little, but the complexity is reduced to O(n^2). In the following, we will use this simplified version of MCP.
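A minimal sketch of this simplified MCP (illustrative names and data layout, not the authors' implementation; the idle-slot search realizes the insertion method of step 3, and an edge weight becomes zero when a parent sits on the same PE):

```python
from collections import defaultdict

def earliest_slot(busy, ready, dur):
    """First idle slot of length dur starting at or after time ready.
    busy is a sorted list of (start, end, node) triples on one PE."""
    t = ready
    for (s, e, _) in busy:
        if t + dur <= s:        # the node fits in the idle gap before this task
            return t
        t = max(t, e)
    return t

def mcp_schedule(nodes, weight, edges, alap, num_pes):
    """Simplified MCP: schedule nodes in increasing ALAP order, each to the
    PE (and idle slot) that allows the earliest start time."""
    pred = defaultdict(list)
    for (u, v) in edges:
        pred[v].append(u)
    slots = {pe: [] for pe in range(num_pes)}
    placed = {}                              # node -> (pe, start time)
    for n in sorted(nodes, key=lambda x: alap[x]):
        best = None
        for pe in range(num_pes):
            ready = 0
            for u in pred[n]:                # data-ready time on this PE
                ppe, ps = placed[u]
                comm = 0 if ppe == pe else edges[(u, n)]
                ready = max(ready, ps + weight[u] + comm)
            start = earliest_slot(slots[pe], ready, weight[n])
            if best is None or start < best[1]:
                best = (pe, start)
        pe, start = best
        slots[pe].append((start, start + weight[n], n))
        slots[pe].sort()
        placed[n] = (pe, start)
    return placed
```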
3 Approaches for Parallelization of Scheduling Algorithms
The basic idea behind parallel scheduling algorithms is that instead of identifying one node to be
scheduled each time, we identify a set of nodes that can be scheduled in parallel. In the following,
the PEs that execute a parallel scheduling algorithm are called the physical PEs (PPEs) in order
to distinguish them from the target PEs (TPEs) to which the macro dataflow graph is to be
scheduled. The quality and speed of a parallel scheduler depend on data partitioning. There are
two major data domains in a scheduling algorithm, the source domain and the target domain.
The source domain is the macro-dataflow graph and the target domain is the schedule for target
processors. We consider two approaches for parallel scheduling. The first one is called the vertical
scheme. Each PPE is assigned a set of graph nodes using space-domain partitioning. Also, each PPE maintains schedules for one or more TPEs. The second one is called the horizontal scheme.
Each PPE is assigned a set of graph nodes using time domain partitioning. The resultant schedule
is also partitioned so that each PPE maintains a portion of the schedule of every TPE. Each PPE
schedules its own portion of the graph before all PPEs exchange information with each other to
determine the final schedule. The vertical and horizontal schemes are illustrated in Figure 1. The
task graph is mapped to the time-space domain. Here, we assume that three PPEs schedule the
graph to six TPEs. Thus, in the vertical scheme, each PPE holds schedules of two TPEs. In the
horizontal scheme, each PPE holds a portion of schedules of six TPEs.
The vertical scheme and the horizontal scheme are outlined in Figures 2 and 3, respectively.
In the vertical scheme, PPEs exchange information and schedule graph nodes to TPEs. Frequent
information exchange results in large communication overhead. With horizontal partitioning, each
PPE can schedule its graph partition without exchanging information with another PPE. In the
last step, PPEs exchange information of their sub-schedules and concatenate them to obtain the
final schedule. The problem with this method is that the start times of all partitions other than
the first one are unknown. The time needs to be estimated, and scheduling quality depends on
the estimation.
There has been almost no work on designing parallel algorithms for scheduling. In fact, there is no algorithm in the vertical scheme yet. The only algorithm in this area is in the horizontal
[Figure: the task graph and task schedule mapped to the time-space domain, with panels (a) vertical scheme and (b) horizontal scheme.]
Figure 1: Vertical and Horizontal Schemes.
1. Partition the graph into P equal sized sets using space domain partitioning.
2. Every PPE cooperates together to generate a schedule and each PPE maintains schedules
for one or more TPEs.
Figure 2: The Vertical Scheme for Parallel Scheduling.
1. Partition the graph into P equal sized sets using time domain partitioning.
2. Each PPE schedules its graph partition to generate a sub-schedule.
3. PPEs exchange information to concatenate sub-schedules.
Figure 3: The Horizontal Scheme for Parallel Scheduling.
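The two schemes differ only in how the ALAP-sorted node list is cut. A small illustration of step 1 of each scheme (hypothetical helpers, not code from the paper):

```python
def vertical_partition(sorted_nodes, P):
    """Space-domain (cyclic) partitioning: PPE i receives the nodes at
    positions i, i+P, i+2P, ... of the ALAP-sorted list."""
    return [sorted_nodes[i::P] for i in range(P)]

def horizontal_partition(sorted_nodes, P):
    """Time-domain (block) partitioning: the ALAP-sorted list is cut into
    P contiguous, roughly equal-sized blocks."""
    n = len(sorted_nodes)
    bounds = [i * n // P for i in range(P + 1)]
    return [sorted_nodes[bounds[i]:bounds[i + 1]] for i in range(P)]
```

Cyclic partitioning spreads nodes of similar priority across PPEs, while block partitioning keeps each PPE's nodes contiguous in time, which is what lets the horizontal scheme schedule partitions independently.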
scheme, which is the parallel BSA (PBSA) algorithm [1]. The BSA algorithm takes into account
link contention and communication routing strategy. A PE list is constructed in a breadth-first
order from the PE having the highest degree (pivot PE). This algorithm constructs a schedule
incrementally by first injecting all the nodes to the pivot PE. Then, it tries to improve the start
time of each node by migrating it to the adjacent PEs of the pivot PE only if the migration will
improve the start time of the node. After a node is migrated to another PE, its successors are
also moved with it. Then, the next PE in the PE list is selected to be the new pivot PE. This
process is repeated until all the PEs in the PE list have been considered. The complexity of the
BSA algorithm is O(p^2 en), where p is the number of TPEs, n the number of nodes, and e the
number of edges in a graph.
The PBSA algorithm parallelizes the BSA algorithm in the horizontal scheme. The nodes
in the graph are sorted in a topological order and partitioned into P equal sized blocks. Each
partition of the graph is then scheduled to the target system independently. The PBSA algorithm
resolves the dependencies between the nodes of partitions by calculating an estimated start time
of each parent node belonging to another partition, called the remote parent node (RPN). This
time is estimated to be between the earliest possible start time and the latest possible start time.
After all the partitions are scheduled, the independently developed schedules are concatenated.
The complexity of the PBSA algorithm is O(p^2 en / P^2), where P is the number of PPEs.
4 The VPMCP Algorithm
MCP is a list scheduling algorithm. In a list scheduling, nodes are ordered in a list according to
priority. A node at the front of the list is always scheduled first. Scheduling a node depends on the
nodes that were scheduled before this node. Therefore, it is basically a sequential algorithm. Its
heavy dependences make parallelization of MCP very difficult. In MCP, nodes must be scheduled
one by one. However, when scheduling a node, the start times of the node on different TPEs can
be calculated simultaneously. This parallelism can be exploited in the vertical scheme.
If multiple nodes are scheduled simultaneously, the resultant schedule may not be the same as
the one produced by MCP. The scheduling length will vary, and, in general, will be longer than
that produced by the sequential MCP. Exploiting more parallelism may lower scheduling quality.
We will study the degree of quality degradation when increasing parallelism.
In the vertical scheme, multiple nodes may be selected to be scheduled at the same time.
This way, parallelism can be increased and overhead reduced. In the horizontal scheme, different
partitions must be scheduled simultaneously for a parallel execution. Therefore, the resultant
schedule cannot be the same as the sequential one. We call the vertical version of parallel MCP
the VPMCP algorithm and the horizontal version the HPMCP algorithm. The VPMCP algorithm
is described in this section and the HPMCP algorithm will be presented in the next section.
Before describing the VPMCP algorithm, we present a simple parallel version of the MCP
algorithm. This version schedules one node each time so that it produces the same schedule as
the sequential MCP algorithm. Each PPE maintains schedules for one or more TPEs. Therefore,
it is a vertical scheme. We call this algorithm VPMCP1, which is shown in Figure 4. The nodes
are first sorted by the ALAP time and cyclically divided into P partitions. That is, the nodes in places i, i + P, i + 2P, ... of the sorted list are assigned to PPE i. The nodes are scheduled one by
one. Each node is broadcast to all PPEs along with its parent information including the scheduled
TPE number and time. Then, the start times of each node on different TPEs can be calculated
in parallel. The node is scheduled to the TPE that allows the earliest start time. Consequently,
if a PPE has any node that is a child of the newly scheduled node, the corresponding parent
information of the node is updated.
1. (a) Compute the ALAP time of each node and sort the node list in an increasing ALAP
order. Ties are broken randomly.
(b) Divide the node list with cyclic partitioning into equal sized partitions, and each partition
is assigned to a PPE.
2. (a) The PPE that has the first node in the list broadcasts the node, along with its parent
information, to all PPEs.
(b) Each PPE obtains a start time for the node on each of its TPEs. The earliest start time is obtained by a parallel minimum reduction.
(c) The node is scheduled to the TPE that allows the earliest start time.
(d) The parent information of children of the scheduled node is updated. Delete the node
from the list and repeat this step until the list is empty.
Figure 4: The VPMCP1 Algorithm.
The VPMCP1 algorithm parallelizes the MCP algorithm directly. It produces exactly the
same schedules as MCP. However, since each time only one node is scheduled, parallelism is
limited and granularity is too fine. To solve this problem, a number of nodes could be scheduled
simultaneously to increase granularity and to reduce communication. When some nodes are
scheduled simultaneously, they may conflict with each other. Conflict may result in degradation
of scheduling quality. The following lemma states the condition that allows some nodes to be
scheduled in parallel without reducing scheduling quality.
Lemma 1: When a node is scheduled to its earliest start time without conflicting with its former
nodes, it is scheduled to the same place as it would be scheduled by sequential MCP.
Proof: In the MCP scheduling sequence, a node obtains its earliest start time after all of
its former nodes in the list have been scheduled. When a set of nodes are scheduled in parallel,
each node obtains its earliest start time independently. A node may obtain an earliest start time that is earlier than the one in MCP scheduling when some of its former nodes in the set have not been scheduled. In this case, it must conflict with one of its former nodes. Therefore, if a
node is scheduled to a place that does not conflict with its former nodes, it obtains the same earliest start time and is scheduled to the same place as it would be in the MCP scheduling sequence.
With this lemma, a set of nodes can obtain their earliest start time simultaneously and be
scheduled accordingly. When a node conflicts with its former nodes, then this node and the rest
of the nodes will not be scheduled before they obtain their new earliest start times. In this way,
more than one node can be scheduled each time. However, many nodes may have the same earliest
start time in the same TPE. Therefore, there could be many conflicts. In most cases, only one
or two nodes can be scheduled each time.
To increase the number of nodes to be scheduled in parallel, we may allow a conflicting node to be scheduled to a sub-optimal place. Therefore, when a node is found to be in conflict with its former nodes, it will be scheduled to the next non-conflicting place. With this strategy, p nodes
can be scheduled each time, where p is the number of TPEs. We use this strategy in our vertical
version of parallel MCP, which is called the VPMCP algorithm. The details of this algorithm is
shown in Figure 5. Besides the sorted node list, a ready list is constructed and sorted by ALAP
times. A set of nodes selected from the ready list are broadcast to all PPEs. The start time of
each node is calculated independently. Then, the start times are made available to every PPE
by another parallel concatenation. Some nodes may compete for the same time slot in a TPE.
This conflict is resolved by the smallest-ALAP-time-first rule. A node that does not get the best
time slot will try its second best place, and so on, until it is scheduled to a non-conflict place.
The time for calculation of the ALAP time and sorting is O(e + n log n). The parallel scheduling step is of O(n^2 / P). Therefore, the complexity of the VPMCP algorithm is O(e + n log n + n^2 / P), where n is the number of nodes, e the number of edges, and P the number of PPEs. The number of communications is 2n for VPMCP1 and 2n/p for VPMCP, where p is the number of TPEs.
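The smallest-ALAP-time-first conflict resolution just described can be sketched as follows (a simplification in which each candidate time slot is identified by its (TPE, start time) pair; names are ours):

```python
def resolve_conflicts(options, alap):
    """options maps each ready node to its candidate (start, tpe) pairs,
    sorted by increasing start time. Nodes are processed smallest ALAP
    time first; a node whose best slot was already claimed falls back to
    its next candidate, and so on."""
    taken = set()
    placement = {}
    for n in sorted(options, key=lambda n: alap[n]):
        for (start, tpe) in options[n]:
            if (tpe, start) not in taken:
                taken.add((tpe, start))
                placement[n] = (tpe, start)
                break
    return placement
```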
The VPMCP algorithm is compared to VPMCP1 in Table I. The workload for testing consists
of random graphs of various sizes. We use the same random graph generator as in [1]. Three values of the communication-to-computation ratio (CCR) were selected: 0.1, 1, and 10. The weights on the nodes and edges were generated randomly such that the average value of CCR corresponded to 0.1, 1, or 10. This set of graphs will be used in the subsequent experiments. In this section, we
use a set of graphs, each of which has 2,000 nodes. Each graph is scheduled to four TPEs.
In the vertical scheme, the number of PPEs cannot be larger than the number of TPEs because
1. (a) Compute the ALAP time of each node and sort the node list in an increasing ALAP
order. Ties are broken randomly.
(b) Divide the node list with cyclic partitioning into equal sized partitions and each partition
is assigned to a PPE. Initialize a ready list consisting of all nodes with no parent.
Sort the list in an increasing ALAP order.
2. (a) The first p nodes in the ready list are broadcast to all PPEs by using the parallel
concatenation operation, along with their parent information. If there are less than p
nodes in the ready list, broadcast the entire ready list.
(b) Each PPE obtains a start time for each node on each of its TPEs. The start times of
each node are made available to every PPE by parallel concatenation.
(c) A node is scheduled to the TPE that allows the earliest start time. If more than one
node competes for the same time slot in a TPE, the node with smaller ALAP time
gets the time slot. The node that does not get the time slot is then scheduled to the
time slot that allows the second earliest start time, and so on.
(d) The parent information is updated for the children of the scheduled node. Delete these
nodes from the ready list and update the ready list by adding nodes freed by these
nodes. Repeat this step until the ready list is empty.
Figure 5: The VPMCP Algorithm.
Table I: Comparison of Vertical Strategies (columns: CCR, number of PPEs, scheduling length and running time in seconds for VPMCP1 and VPMCP; table data not preserved)
the schedule of each TPE must be maintained by a single PPE. It can be seen from the table that VPMCP1
produces a better scheduling quality. However, its heavy communication results in low speedup
or no speedup. VPMCP reduces running times and still provides an acceptable scheduling quality.
The scheduling lengths are between 0.3% and 1.2% longer than that produced by VPMCP1.
5 The HPMCP Algorithm
In a horizontal scheme, different partitions must be scheduled simultaneously for a parallel execution. We call a horizontal version of parallel MCP the HPMCP algorithm, which is shown in Figure 6.
1. (a) Compute the ALAP time of each node and sort the node list in an increasing ALAP
order. Ties are broken randomly.
(b) Partition the node list into equal sized blocks and each partition is assigned to a PPE.
2. Each PPE applies the MCP algorithm to its partition to produce a sub-schedule. Edges
between a node and its RPNs are ignored.
3. Concatenate each pair of adjacent sub-schedules. Walk through the schedule to determine
the actual start time of each node.
Figure 6: The HPMCP Algorithm.
In the HPMCP algorithm, the nodes are first sorted by the ALAP time. Therefore, the node
list is in a topological order and is then partitioned into P equal sized blocks to be assigned to P
PPEs. In this way, the graph is partitioned horizontally.
When the graph is partitioned, each PPE will schedule its partition to produce its sub-schedule.
Then, these sub-schedules will be concatenated to form the final schedule. Three problems are to
be addressed for scheduling and concatenation: information estimation, concatenation permuta-
tion, and post-insertion.
Information estimation
The major problem in the horizontal scheme is how to resolve the dependences between par-
titions. In general, a latter partition depends on its former partitions. To schedule partitions in
parallel, each PPE needs schedule information of its former partitions. Since it is impossible to
obtain such information before the schedules of former partitions have been produced, an estimation
is necessary. Although the latter PPE does not know the exact schedules of its former
partitions, an estimation can help a node to determine its earliest start time in the latter partition.
In the PBSA algorithm [1], the start time of each RPN (remote parent node) is estimated.
It is done by calculating two parameters: the earliest possible start time (EPST), and the latest
possible start time (LPST). The EPST of a node is the largest sum of computation times from the
start point to the node, excluding the node itself. The LPST of a node is the sum of computation
times of all nodes scheduled before the node, excluding the node itself. The estimated start time
(EST) of an RPN is defined as EST = α·EPST + (1 − α)·LPST, where α is equal to 1 if the RPN is on the critical path. Otherwise, it is equal to the length of the longest path from the start point through the RPN to the end point, divided by the length of the critical path.
start time of an RPN, it is still necessary to estimate to which TPE the RPN is scheduled. If the
RPN is a critical path node, then it is assumed that it will be scheduled to the same TPE as the
highest level critical path node in the local partition. Otherwise, a TPE is randomly picked to
be the one to which the RPN is scheduled. We call this estimation the PBSA estimation. This
estimation is not necessarily accurate or better than a simpler estimation used in HPMCP.
In HPMCP, we simply ignore all dependences between partitions. Therefore, all entry nodes of a partition can start at the same time. Furthermore, we assume all sub-schedules of the former PPEs end at the same time. We call this estimation the HPMCP estimation.
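The PBSA-style estimate described above can be sketched as follows (a sketch under our reading of the description; computation-only path lengths and the linear combination of EPST and LPST are assumptions of this illustration):

```python
from collections import defaultdict

def estimated_start(nodes, weight, edges, sorted_list, rpn):
    """EPST: largest sum of computation times on any path from a start
    node to rpn, excluding rpn. LPST: sum of computation times of all
    nodes before rpn in the sorted node list. EST mixes the two with a
    weight alpha that is 1 for critical-path nodes and otherwise the
    node's longest path length over the critical-path length."""
    pred = defaultdict(list)
    succ = defaultdict(list)
    for (u, v) in edges:
        pred[v].append(u)
        succ[u].append(v)
    top = {}                        # longest path into n, excluding n
    for n in nodes:                 # nodes is a topological order
        top[n] = max((top[u] + weight[u] for u in pred[n]), default=0)
    bot = {}                        # longest path from n, including n
    for n in reversed(nodes):
        bot[n] = weight[n] + max((bot[v] for v in succ[n]), default=0)
    critical = max(top[n] + bot[n] for n in nodes)
    epst = top[rpn]
    lpst = sum(weight[m] for m in sorted_list[:sorted_list.index(rpn)])
    alpha = (top[rpn] + bot[rpn]) / critical
    return alpha * epst + (1 - alpha) * lpst
```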
Table II: Comparison of Estimation Algorithms (columns: CCR, number of PPEs, scheduling length and running time in seconds for the HPMCP estimation and the PBSA estimation; table data not preserved)
Now, we will compare the two approaches of estimation. In this section, the number of nodes
in a graph is 2,000 and each graph is scheduled to four TPEs. The comparison is shown in Table II.
The column "HPMCP est." shows the performance of HPMCP. The column "PBSA est." shows
the performance of HPMCP with PBSA estimation. The scheduling lengths and running times
are compared. The running time of PBSA estimation is longer. Notice that more PPEs produce
longer scheduling lengths. That shows the trade-off between scheduling quality and parallelism.
On the other hand, superlinear speedup was observed due to graph partitioning.
The scheduling length produced by PBSA estimation is always longer than that produced by
HPMCP estimation. It implies that a more complex estimation algorithm cannot promise good
scheduling and a simpler algorithm may be better. However, that is not to say that we should use
the simplest one in the future. It is still possible to find a good estimation to improve performance.
The simple estimation used in HPMCP sets a baseline for future estimation algorithms.
Concatenation permutation
After each PPE produces its sub-schedule, the final schedule is constructed by concatenating
these sub-schedules. Because there is no accurate information of former sub-schedules, it is not
easy to determine the optimal permutation of TPEs between adjacent sub-schedules, that is,
to determine which latter TPE should be concatenated to which former TPE. A heuristic is necessary. In the PBSA algorithm, a TPE with the earliest node is concatenated to the TPE of
the former sub-schedule that allows the earliest execution. Then, other TPEs are concatenated
to the TPEs in the former sub-schedule in a breadth-first order [1]. In the HPMCP algorithm,
we assume that the start time of each node within its partition is the same. Therefore, the
above algorithm cannot be applied. We simply do not perform permutation of TPEs in HPMCP.
That is, a TPE in the latter sub-schedule is concatenated to the same TPE in the former sub-
schedule. An alternative hueristics can be described as follows: each PPE finds out within its
sub-schedule which TPE has most critical-path nodes and permutes this TPE with TPE 0. With
this permutation, as many critical path nodes as possible are scheduled to the same TPE, and
the critical path length could be reduced. This permutation algorithm is compared to the non-
permutation algorithm in Table III. The time spent on the permutation step causes this algorithm
to be slower since extra time is spent on determine weather a node is on the critical path. In
terms of the scheduling length, the permutation algorithm makes four of the test cases better
than the non-permutation algorithm, and two cases worse. This permutation algorithm does not
improve performance much. Therefore, no permutation is performed in the HPMCP algorithm.
Post-insertion
Finally, we walk through the entire concatenated schedule to determine the actual start time of
each node. Some refinement can be performed in this step. In a horizontal scheme, the latter PPE is not able to insert nodes into former sub-schedules due to lack of information about them. This leads to
Table III: Comparison of Permutation Algorithms
CCR | Number of PPEs | Scheduling length (no permut., permut.) | Running time in seconds (no permut., permut.)
some performance loss. It can be partially corrected at the concatenation time by inserting the
nodes of a latter sub-schedule into its former sub-schedules. Improvement of this post-insertion
algorithm is shown in Table IV. Compared to non-insertion, the post-insertion algorithm reduces
scheduling length in eight test cases and increases it in four cases. Overall, this post-insertion
algorithm can improve scheduling quality. However, it spends much more time on post-insertion.
In the following, we do not perform this post-insertion.
The time for calculation of the ALAP time and sorting is O(e + n log n). The second step of
parallel scheduling is of O(n^2/P^2), and the third step spends O(e + n^2/p^2) time for post-insertion
and O(e) time for non-insertion. Therefore, the complexity of the HPMCP algorithm without
post-insertion is O(e + n log n + n^2/P^2), where n is the number of nodes and e the number of
edges in a graph, p is the number of TPEs and P the number of PPEs.
6 Performance
The VPMCP and HPMCP algorithms were implemented on the Intel Paragon. We present their
performance with three measures: scheduling length, running time, and speedup.
Table IV: Comparison of Post-insertion Algorithms
CCR | Number of PPEs | Scheduling length (no insert., insert.) | Running time in seconds (no insert., insert.)
First, we present the performance of the VPMCP algorithm. In Tables V and VI, graphs of
1,000, 2,000, 3,000, and 4,000 nodes are scheduled to four TPEs. The scheduling length provides
a measure of scheduling quality. The results shown in Table V are the scheduling lengths and
the ratios of the scheduling lengths produced by the VPMCP algorithm to the scheduling lengths
produced by the MCP algorithm. The ratio was obtained by running the VPMCP algorithm on
2 and 4 PPEs on Paragon and taking the ratios of the scheduling lengths produced by it to those
of MCP running on one PPE. As one can see from this table, there was almost no effect of graph
size on scheduling quality. In most cases, the scheduling lengths of VPMCP are not more than
1% longer than that produced by MCP.
Running time and speedup of the VPMCP algorithm are shown in Table VI. Speedup is defined
by S = T_s/T_P, where T_s is the sequential execution time of the optimal sequential algorithm and
T_P is the parallel execution time. The running times of VPMCP on more than one PPE are
compared with MCP running time on one PPE. The MCP running time on a single processor is
a sequential version without parallelization overhead. The low speedup of VPMCP is caused by
its large number of communications.
The next experiment is to study VPMCP performance for different numbers of TPEs. Tables
VII and VIII show the scheduling lengths and running times of graphs of 4,000 nodes for 2, 4,
Table V: The scheduling lengths produced by VPMCP and their ratios to those of MCP
CCR | Number of PPEs | length and ratio for graph sizes of 1,000, 2,000, 3,000, and 4,000 nodes
Table VI: Running time (in seconds) and speedup of VPMCP
CCR | Number of PPEs | time and speedup S for graph sizes of 1,000, 2,000, 3,000, and 4,000 nodes
8, and 16 TPEs. The differences between the scheduling lengths of VPMCP and MCP are within
1% in most cases. As can be noticed from this table, when the number of TPEs increases, the
scheduling lengths decrease and the ratio only increases slightly. Therefore, scheduling quality
scales up quite well. Higher speedups are obtained with more PPEs.
Next, we study performance of the HPMCP algorithm. The number of TPEs is four in
Tables IX and X. In Table IX, the ratio was obtained by running the HPMCP algorithm on 2,
4, 8, and 16 PPEs on Paragon and taking the ratios of the scheduling lengths produced by it
to those of MCP running on one PPE. The deterioration in performance of HPMCP is due to
estimation of the start time of RPNs and concatenation. Out of the 48 test cases shown in the
table, there is only one case in which HPMCP performed more than 10% worse than MCP, 31
Table VII: The scheduling lengths and ratios for different numbers of TPEs produced by VPMCP
CCR | Number of PPEs | length and ratio for 2, 4, 8, and 16 TPEs
Table VIII: VPMCP running time (in seconds) and speedup for different numbers of TPEs
CCR | Number of PPEs | time and speedup S for 2, 4, 8, and 16 TPEs
of them are within 1%, and 16 of them are between 1% to 10%. It is of interest that in two
cases, HPMCP produced even better results than MCP. Since MCP is a heuristic algorithm, this
is possible. Sometimes, a parallel version could produce a better result than its corresponding
sequential one.
Running time and speedup of the HPMCP algorithm are shown in Table X. There are some
superlinear speedup cases in the table. That is because HPMCP has lower complexity than MCP.
The complexity of MCP is O(n^2). The complexity of HPMCP on P PPEs is O(e + n log n + n^2/P^2).
Therefore, speedup is bounded by P 2 instead of P . Speedup on 16 PPEs is not as good as expected,
because the graph size is not large enough and the relative overhead is large.
Tables XI and XII show the HPMCP scheduling lengths and running times of graphs of 4,000
nodes for 2, 4, 8, and 16 TPEs. The speedups decrease with the number of TPEs. That is caused
by increasing dependences between TPEs.
Now, we compare performance of VPMCP and HPMCP. Performance shown in Figures 7
and 8 is for graphs of 4,000 nodes. The number of TPEs is the same as the number of PPEs.
Figure 7 shows the percentage difference in scheduling length relative to that produced by MCP.
Negative numbers indicate scheduling lengths shorter than those produced by MCP. For 2 or 4
PPEs, HPMCP produces shorter scheduling lengths. However, for more
PPEs, VPMCP produces better scheduling quality than that produced by HPMCP. In general,
VPMCP provides a more stable scheduling quality. Figure 8 compares the speedups of VPMCP
and HPMCP algorithms. HPMCP is faster than VPMCP with a higher speedup.
After scheduling, the nodes are not in the PPEs where they are to be executed in the horizontal
scheme. A major communication step is necessary to move nodes. However, when the number
of PPEs is equal to the number of TPEs, the vertical scheme can avoid this communication step
because the nodes reside in the PPEs where they are to be executed. It becomes more important
when the scheduling algorithms are used at runtime.
Next, we compare two algorithms in the horizontal scheme, HPMCP and PBSA. The PBSA algorithm
takes into account link contention and communication routing strategy, but HPMCP does
not consider these factors. Therefore, the edge weights in PBSA vary with different topologies,
whereas in HPMCP they are constant. For comparison purposes, we have implemented a simplified
version of PBSA, which assumes that the edge weights are constant. PBSA is much slower
than HPMCP, because its complexity is much higher. The complexity of PBSA is O(p^2 en/P^2)
and that of HPMCP is O(e + n log n + n^2/P^2). HPMCP is about 50 to 120 times faster than
PBSA for this set of graphs. Then, we compare the scheduling lengths produced by HPMCP
and PBSA. The results are shown in Table XIII. In this table, the scheduling lengths produced
by the sequential MCP and BSA algorithms running on a single processor are also compared.
Table IX: The scheduling lengths produced by HPMCP and their ratios to those of MCP
CCR | Number of PPEs | length and ratio for graph sizes of 1,000, 2,000, 3,000, and 4,000 nodes
Table X: Running time (in seconds) and speedup of HPMCP
Graph sizes of 1,000, 2,000, 3,000, and 4,000 nodes
Table XI: The scheduling lengths and ratios for different numbers of TPEs produced by HPMCP
CCR | Number of PPEs | length and ratio for 2, 4, 8, and 16 TPEs
Table XII: HPMCP running time (in seconds) and speedup for different numbers of TPEs
Number of TPEs: 2, 4, 8, and 16
Figure 7: Comparison of scheduling quality produced by VPMCP and HPMCP
Figure 8: Comparison of speedups of VPMCP and HPMCP
When CCR is 0.1 or 1, the scheduling lengths produced by MCP are slightly shorter than those
produced by BSA. When CCR is 10, MCP is much better than BSA. The parallel versions of
the two algorithms perform very differently. When the number of PPEs increases, the scheduling
lengths produced by HPMCP increase only slightly, but those of PBSA increase significantly.
Figure 9 compares HPMCP and PBSA by the sum of the scheduling lengths of four graphs of
1,000, 2,000, 3,000, and 4,000 nodes for different CCRs and different numbers of PPEs.
Table XIII: The scheduling lengths produced by HPMCP and PBSA
CCR | Number of PPEs | HPMCP and PBSA lengths for graph sizes of 1,000, 2,000, 3,000, and 4,000 nodes
7 Concluding Remarks
Parallel scheduling is faster and is able to schedule large macro dataflow graphs. Parallel scheduling
is a new approach and is still under development. Many open problems need to be solved.
High-quality parallel scheduling algorithms with low complexity remain to be developed. This can be
achieved by parallelizing the existing sequential scheduling algorithms or by designing new parallel
scheduling algorithms. We have developed the VPMCP and HPMCP algorithms by parallelizing
the sequential MCP algorithm. Performance of this approach has been studied. Both VPMCP
and HPMCP algorithms are much faster than PBSA. They produce high-quality scheduling in
terms of the scheduling length.
Figure 9: Comparison of scheduling lengths produced by HPMCP and PBSA
Acknowledgments
We are very grateful to Yu-Kwong Kwok and Ishfaq Ahmad for providing their PBSA program
and random graph generator for testing.
References
[1] A parallel approach to multiprocessor scheduling.
[2] Performance comparison of algorithms for static scheduling of DAGs to multiprocessors.
[3] The LAST algorithm: A heuristics-based static task allocation algorithm.
[4] Applications and performance analysis of a compile-time optimization approach for list scheduling algorithms on distributed memory multiprocessors.
[5] Scheduling parallel program tasks onto arbitrary target machines.
[6] Task Scheduling in Parallel and Distributed Systems.
[7] Computers and Intractability: A Guide to the Theory of NP-Completeness.
[8] Scheduling precedence graphs in systems with interprocessor communication times.
[9] A comparison of multiprocessor scheduling heuristics.
[10] Duplication scheduling heuristics (DSH): A new precedence task scheduler for parallel processor systems.
[11] Partitioning and Scheduling Parallel Programs for Multiprocessors.
[12] A compile-time scheduling heuristic for interconnection-constrained heterogeneous processor architectures.
[13] A programming aid for message-passing systems.
[14] DSC: Scheduling parallel tasks on an unbounded number of processors.
A Simple Algorithm for Nearest Neighbor Search in High Dimensions

Abstract

The problem of finding the closest point in high-dimensional spaces is common in pattern recognition. Unfortunately, the complexity of most existing search algorithms, such as k-d tree and R-tree, grows exponentially with dimension, making them impractical for dimensionality above 15. In nearly all applications, the closest point is of interest only if it lies within a user-specified distance ε. We present a simple and practical algorithm to efficiently search for the nearest neighbor within Euclidean distance ε. The use of projection search combined with a novel data structure dramatically improves performance in high dimensions. A complexity analysis is presented which helps to automatically determine ε in structured problems. A comprehensive set of benchmarks clearly shows the superiority of the proposed algorithm for a variety of structured and unstructured search problems. Object recognition is demonstrated as an example application. The simplicity of the algorithm makes it possible to construct an inexpensive hardware search engine which can be 100 times faster than its software equivalent. A C++ implementation of our algorithm is available upon request to search@cs.columbia.edu/CAVE/.

1 Introduction
Searching for nearest neighbors continues to prove itself as an important problem in many
fields of science and engineering. The nearest neighbor problem in multiple dimensions is
stated as follows: given a set of n points and a novel query point Q in a d-dimensional
space, "Find a point in the set such that its distance from Q is lesser than, or equal to, the
distance of Q from any other point in the set" [ 21 ] . A variety of search algorithms have been
advanced since Knuth first stated this (post-office) problem. Why then, do we need a new
algorithm? The answer is that existing techniques perform very poorly in high dimensional
spaces. The complexity of most techniques grows exponentially with the dimensionality,
d. By high dimensional, we mean when, say, d > 25. Such high dimensionality occurs
commonly in applications that use eigenspace based appearance matching, such as real-time
object recognition, tracking and inspection [26], and feature detection.
Moreover, these techniques require that nearest neighbor search be performed using the
Euclidean distance (or L2) norm. This can be a hard problem, especially when dimensionality
is high. High dimensionality is also observed in visual correspondence problems such as
motion estimation in MPEG coding, disparity estimation in binocular stereo
(d = 25-81), and optical flow computation in structure from motion (also d = 25-81).
In this paper, we propose a simple algorithm to efficiently search for the nearest neighbor
within distance ε in high dimensions. We shall see that the complexity of the proposed
algorithm, for small ε, grows very slowly with d. Our algorithm is successful because it
does not tackle the nearest neighbor problem as originally stated; it only finds points within
distance ε from the novel point. This property is sufficient in most pattern recognition
problems (and for the problems stated above), because a "match" is declared with high
confidence only when a novel point is sufficiently close to a training point. Occasionally, it
is not possible to assume that ε is known, so we suggest a method to automatically choose
ε. We now briefly outline the proposed algorithm.
Our algorithm is based on the projection search paradigm first used by Friedman.
Friedman's simple technique works as follows. In the preprocessing step, d dimensional
training points are ordered in d different ways by individually sorting each of their
coordinates. Each of the d sorted coordinate arrays can be thought of as a 1-D axis with the entire
d dimensional space collapsed (or projected) onto it. Given a novel point Q, the nearest
neighbor is found as follows. A small offset ε is subtracted from and added to each of Q's
coordinates to obtain two values. Two binary searches are performed on each of the sorted
arrays to locate the positions of both the values. An axis with the minimum number of
points in between the positions is chosen. Finally, points in between the positions on the
chosen axis are exhaustively searched to obtain the closest point. The complexity of this
technique is roughly O(ndε) and is clearly inefficient in high d.
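Friedman's basic projection search can be sketched as follows (an illustrative Python sketch; function names and data layout are our own, and a point is reported only if it lies within ε):

```python
import bisect

def build_projection_index(points):
    """Preprocessing: sort each coordinate independently, keeping
    (value, point index) pairs so slab members can be recovered."""
    d = len(points[0])
    return [sorted((p[j], i) for i, p in enumerate(points)) for j in range(d)]

def friedman_search(points, index, q, eps):
    """Pick the axis whose [q_j - eps, q_j + eps] slab holds the fewest
    points, then scan only those points exhaustively (Euclidean metric)."""
    best_axis, best_lo, best_hi = 0, 0, len(points) + 1
    for j in range(len(q)):
        keys = [v for v, _ in index[j]]
        lo = bisect.bisect_left(keys, q[j] - eps)
        hi = bisect.bisect_right(keys, q[j] + eps)
        if hi - lo < best_hi - best_lo:
            best_axis, best_lo, best_hi = j, lo, hi
    best_i, best_d2 = None, eps * eps
    for _, i in index[best_axis][best_lo:best_hi]:
        d2 = sum((a - b) ** 2 for a, b in zip(points[i], q))
        if d2 <= best_d2:
            best_i, best_d2 = i, d2
    return best_i  # None if no point lies within eps
```

Any point within Euclidean distance ε of q necessarily falls inside every axis slab, so scanning the smallest slab cannot miss the answer.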
This simple projection search was improved upon by Yunck, whose technique utilizes a precomputed
data structure which maintains a mapping from the sorted to the unsorted (original)
coordinate arrays. In addition to this mapping, an indicator array of n elements is used.
Each element of the indicator array, henceforth called an indicator, corresponds to a point.
At the beginning of a search, all indicators are initialized to the number '1'. As before, a
small offset ε is subtracted from and added to each of the novel point Q's coordinates to
obtain two values. Two binary searches are performed on each of the d sorted arrays to
locate the positions of both the values. The mapping from sorted to unsorted arrays is used
to find the points corresponding to the coordinates in between these values. Indicators corresponding
to these points are (binary) shifted to the left by one bit and the entire process
repeated for each of the d dimensions. At the end, points whose indicators have the value
2^d must lie within a 2ε hypercube. An exhaustive search can now be performed on the
hypercube points to find the nearest neighbor.
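Yunck's indicator scheme can be sketched as follows (an illustrative Python sketch with our own names; the sorted arrays and mapping are built per call here for brevity, whereas a real implementation precomputes them once for a static point set):

```python
import bisect

def yunck_hypercube(points, q, eps):
    """Return indices of points inside the 2*eps hypercube around q by
    left-shifting an indicator bit once per dimension in which a point's
    coordinate falls in [q_j - eps, q_j + eps]; survivors reach 2**d."""
    n, d = len(points), len(q)
    # sorted -> unsorted (original) mapping, per dimension
    order = [sorted(range(n), key=lambda i, j=j: points[i][j]) for j in range(d)]
    keys = [[points[i][j] for i in order[j]] for j in range(d)]
    ind = [1] * n                      # all indicators start at '1'
    for j in range(d):
        lo = bisect.bisect_left(keys[j], q[j] - eps)
        hi = bisect.bisect_right(keys[j], q[j] + eps)
        for pos in range(lo, hi):
            ind[order[j][pos]] <<= 1   # in range on this axis: shift left
    return [i for i in range(n) if ind[i] == 1 << d]
```

The inner shift loop uses only integer operations, which is the point of the indicator array; floating point arithmetic is confined to the binary searches.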
With the above data structure, Yunck was able to find points within the hypercube using
primarily integer operations. However, the total number of machine operations required (integer
and floating point) to find points within the hypercube is similar to that of Friedman's
algorithm (roughly O(ndε)). Due to this, and the fact that most modern CPUs do not significantly
penalize floating point operations, the improvement is only slight (benchmarked in a
later section). We propose a data structure that significantly reduces the total number of machine
operations required to locate points within the hypercube.
Moreover, this data structure facilitates a very simple hardware implementation which can
result in a further increase in performance by two orders of magnitude.
2 Previous Work
Search algorithms can be divided into the following broad categories: (a) Exhaustive search,
(b) hashing and indexing, (c) static space partitioning, (d) dynamic space partitioning, and
(e) randomized algorithms. The algorithm described in this paper falls in category (d). The
algorithms can be further categorized into those that work in vector spaces and those that
work in metric spaces. Categories (b)-(d) fall in the former, while category (a) falls in the
latter. Metric space search techniques are used when it is possible to somehow compute a
distance measure between sample "points" or pieces of data but the space in which the points
reside lacks an explicit coordinate structure. In this paper, we focus only on vector space
techniques. For a detailed discussion on searching in metric spaces, refer to [ ].
Exhaustive search, as the term implies, involves computing the distance of the novel
point from each and every point in the set and finding the point with the minimum distance.
This approach is clearly inefficient and its complexity is O(nd). Hashing and indexing are
the fastest search techniques and run in constant time. However, the space required to store
an index table increases exponentially with d. Hence, hybrid schemes of hashing from a
high dimensional space to a low (1 or 2) dimensional space and then indexing in this low
dimensional space have been proposed. Such a dimensionality reduction is called geometric
hashing. The problem is that, with increasing dimensionality, it becomes difficult
to construct a hash function that distributes data uniformly across the entire hash table
(index). An added drawback arises from the fact that hashing inherently partitions space
into bins: if two points in adjacent bins are closer to each other than a third point within
the same bin, a search algorithm that uses a hash table, or an index, will not correctly find
the point in the adjacent bin. Hence, hashing and indexing are only really effective when
the novel point is exactly equal to one of the database points.
Space partitioning techniques have led to a few elegant solutions to multi-dimensional
search problems. A method of particular theoretical significance divides the search space into
polygons. A Voronoi polygon is a geometrical construct obtained by intersecting
perpendicular bisectors of adjacent points. In a 2-D search space, Voronoi polygons allow
the nearest neighbor to be found in O(log2 n) operations, where n is the number of points in
the database. Unfortunately, the cost of constructing and storing Voronoi diagrams grows
exponentially with the number of dimensions. Details can be found in [3].
Another algorithm of interest is the 1-D binary search generalized to d dimensions [11]. This
runs in O(log2 n) time but requires storage O(n^4), which makes it impractical for n > 100.
Perhaps the most widely used algorithm for searching in multiple dimensions is a static
space partitioning technique based on a k dimensional binary search tree, called the k-d
tree. The k-d tree is a data structure which partitions space using hyperplanes placed
perpendicular to the coordinate axes. The partitions are arranged hierarchically to form a
tree. In its simplest form, a k-d tree is constructed as follows. A point in the database
is chosen to be the root node. Points lying on one side of a hyperplane passing through
the root node are added to the left child and the points on the other side are added to
the right child. This process is applied recursively on the left and right children until a
small number of points remain. The resulting tree of hierarchically arranged hyperplanes
induces a partition of space into hyper-rectangular regions, termed buckets, each containing
a small number of points. The k-d tree can be used to search for the nearest neighbor as
follows. The k coordinates of a novel point are used to descend the tree to find the bucket
which contains it. An exhaustive search is performed to determine the closest point within
that bucket. The size of a "query" hypersphere is set to the distance of this closest point.
Information stored at the parent nodes is used to determine if this hypersphere intersects
with any other buckets. If it does, then that bucket is exhaustively searched and the size
of the hypersphere is revised if necessary. For fixed d, and under certain assumptions about
the underlying data, the k-d tree requires O(n log2 n) operations to construct and O(log2 n)
operations to search.
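The construction and backtracking search just described can be sketched as follows (a minimal Python sketch of the basic k-d tree, not Friedman's optimized variant; all names are ours):

```python
def build_kdtree(pts, depth=0):
    """Build a k-d tree by splitting on the median along cycling axes."""
    if not pts:
        return None
    axis = depth % len(pts[0])
    pts = sorted(pts, key=lambda p: p[axis])
    mid = len(pts) // 2
    return {
        "point": pts[mid],
        "axis": axis,
        "left": build_kdtree(pts[:mid], depth + 1),
        "right": build_kdtree(pts[mid + 1:], depth + 1),
    }

def nearest(node, q, best=None):
    """Descend toward the leaf containing q, then back up, visiting the
    far child only when the query hypersphere crosses the split plane."""
    if node is None:
        return best
    p, axis = node["point"], node["axis"]
    d2 = sum((a - b) ** 2 for a, b in zip(p, q))
    if best is None or d2 < best[1]:
        best = (p, d2)
    near, far = ((node["left"], node["right"]) if q[axis] < p[axis]
                 else (node["right"], node["left"]))
    best = nearest(near, q, best)
    if (q[axis] - p[axis]) ** 2 < best[1]:  # sphere crosses the plane
        best = nearest(far, q, best)
    return best  # (closest point, squared distance)
```

The pruning test on the split plane is exactly where the exponential behavior in high d originates: as d grows, the query hypersphere crosses almost every plane and few subtrees are pruned.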
k-d trees are extremely versatile and efficient to use in low dimensions. However, the
performance degrades exponentially 1 with increasing dimensionality. This is because, in
high dimensions, the query hypersphere tends to intersect many adjacent buckets, leading
to a dramatic increase in the number of points examined. k-d trees are dynamic data
structures which means that data can be added or deleted at a small cost. The impact
of adding or deleting data on the search performance is however quite unpredictable and
is related to the amount of imbalance the new data causes in the tree. High imbalance
1 Although this appears contradictory to the previous statement, the claim of O(log2 n) complexity is
made assuming fixed d and varying n. The exact relationship between d and complexity has not
yet been established, but it has been observed by us and many others that it is roughly exponential.
generally means slower searches. A number of improvements to the basic algorithm have
been suggested. Friedman recommends that the partitioning hyperplane be chosen such that
it passes through the median point and is placed perpendicular to the coordinate axis along
whose direction the spread of the points is maximum [ 15 ] . Sproull suggests using a truncated
distance computation to increase efficiency in high dimensions [ 36 ] . Variants of the k-d tree
have been used to address specific search problems.
An R-tree is also a space partitioning structure, but unlike k-d trees, the partitioning
element is not a hyperplane but a hyper-rectangular region. This hierarchical rectangular
structure is useful in applications such as searching by image content, where one
needs to locate the closest manifold (or cluster) to a novel manifold (or cluster). An R-tree
also addresses some of the problems involved in implementing k-d trees in large disk based
databases. The R-tree is also a dynamic data structure, but unlike the k-d tree, the search
performance is not affected by addition or deletion of data. A number of variants of R-trees
improve on the basic technique, such as packed R-trees [34].
Although R-trees are useful in implementing sophisticated queries and managing large
databases, the performance of nearest neighbor point searches in high dimensions is very
similar to that of k-d trees; complexity grows exponentially with d.
Other static space partitioning techniques have been proposed, such as branch and bound,
none of which significantly improve
performance for high dimensions. Clarkson describes a randomized algorithm which finds
the closest point in d dimensional space in O(log2 n) operations using an RPO (randomized
post office) tree. However, the time taken to construct the RPO tree is O(n^(⌈d/2⌉(1+ε)))
and the space required to store it is also O(n^(⌈d/2⌉(1+ε))). This makes it impractical when the
number of points n is large or if d > 3.
3 The Algorithm
3.1 Searching by Slicing
We illustrate the proposed high dimensional search algorithm using a simple example in 3-D
space, shown in Figure 1. We refer to the set of points within which we wish to search for the
closest point as the point set. Then, our goal is to find the point in the point set that is closest to
Figure 1: The proposed algorithm efficiently finds points inside a cube of size 2ε around the novel
query point Q. The closest point is then found by performing an exhaustive search within the cube
using the Euclidean distance metric.
a novel query point Q(x, y, z) and within a distance ε of it. Our approach is to first find all the
points that lie inside a cube (see Figure 1) of side 2ε centered at Q. Since ε is typically small,
the number of points inside the cube is also small. The closest point can then be found by
performing an exhaustive search on these points. If there are no points inside the cube, we
know that there are no points within ε.
The points within the cube can be found as follows. First, we find the points that are
sandwiched between a pair of parallel planes X 1 and X 2 (see Figure 1) and add them to a
list, which we call the candidate list. The planes are perpendicular to the first axis of the
coordinate frame and are located on either side of point Q at a distance of ε. Next, we trim
the candidate list by discarding points that are not also sandwiched between the parallel
pair of planes Y1 and Y2, which are perpendicular to X1 and X2, again located on either side
of Q at a distance ε. This procedure is repeated for planes Z1 and Z2, at the end of which
the candidate list contains only points within the cube of side 2ε centered on Q.
Since the number of points in the final trimmed list is typically small, the cost of the
exhaustive search is negligible. The major computational cost in our technique is therefore
in constructing and trimming the candidate list.
3.2 Data Structure
Candidate list construction and trimming can be done in a variety of ways. Here, we propose
a method that uses a simple pre-constructed data structure along with 1-D binary searches
to efficiently find points sandwiched between a pair of parallel hyperplanes. The data
structure is constructed from the raw point set and is depicted in Figure 2. It is assumed
that the point set is static and hence, for a given point set, the data structure needs to be
constructed only once. The point set is stored as a collection of d 1-D arrays, where the j th
array contains the j th coordinate of the points. Thus, in the point set, coordinates of a point
lie along the same row. This is illustrated by the dotted lines in Figure 2. Now suppose that
the novel point Q has coordinates (q1, q2, ..., qd). Recall that in order to construct the candidate
list, we need to find points in the point set that lie between a pair of parallel hyperplanes
separated by a distance 2ε, perpendicular to the first coordinate axis, and centered at Q;
that is, we need to locate points whose first coordinate lies between the limits q1 − ε and
q1 + ε. This can be done with the help of two binary searches, one for each limit, if the
coordinate array were sorted beforehand.
To this end, we sort each of the d coordinate arrays in the point set independently to
obtain the ordered set. Unfortunately, sorting raw coordinates does not leave us with any
information regarding which points in the arrays of the ordered set correspond to any given
point in the point set, and vice versa. For this purpose, we maintain two maps. The backward
map maps a coordinate in the ordered set to the corresponding coordinate in the point set
and, conversely, the forward map maps a point in the point set to a point in the ordered
set. Notice that the maps are simple integer arrays; if P [d][n] is the point set, O[d][n] is
the ordered set, and F[d][n] and B[d][n] are the forward and backward maps, respectively,
then O[j][i] = P[j][B[j][i]] and F[j][B[j][i]] = i.
Using the backward map, we find the corresponding points in the point set (shown as
dark shaded areas) and add the appropriate points to the candidate list. With this, the
construction of the candidate list is complete. Next, we trim the candidate list by iterating
Figure 2: Data structures used for constructing and trimming the candidate list. The point set
corresponds to the raw list of data points, while in the ordered set each coordinate is sorted. The
forward and backward maps enable efficient correspondence between the point and ordered sets.
on dimensions k = 2, ..., d as follows. In iteration k, we check every point in the candidate list, by
using the forward map, to see if its k-th coordinate lies within the limits q_k − ε and q_k + ε.
Each of these limits is also obtained by binary search. Points with k-th coordinates that lie
outside this range (shown in light grey) are discarded from the list.
At the end of the final iteration, points remaining on the candidate list are the ones
which lie inside a hypercube of side 2ε centered at Q. In our discussion, we proposed
constructing the candidate list using the first dimension, and then performing list trimming
using dimensions 2; 3; d, in that order. We wish to emphasize that these operations can
be done in any order and still yield the desired result. In the next section, we shall see that it
is possible to determine an optimal ordering such that the cost of constructing and trimming
the list is minimized.
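The P, O, F, B arrays and the identities relating them can be sketched as follows (an illustrative Python sketch; the function name is ours, and arrays are indexed [dimension][point] as in the text):

```python
def build_maps(P):
    """P[j][i] is the j-th coordinate of point i. Returns the ordered set O
    plus the backward map B (position in ordered array -> point index) and
    the forward map F (point index -> position in ordered array)."""
    d, n = len(P), len(P[0])
    O, B, F = [], [], []
    for j in range(d):
        order = sorted(range(n), key=lambda i: P[j][i])  # backward map
        B.append(order)
        O.append([P[j][i] for i in order])               # sorted coordinates
        f = [0] * n
        for pos, i in enumerate(order):
            f[i] = pos                                   # invert the mapping
        F.append(f)
    return O, B, F
```

By construction, O[j][i] = P[j][B[j][i]] and F[j][B[j][i]] = i, so the maps are mutually inverse for each dimension.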
It is important to note that the only operations used in trimming the list are integer
comparisons and memory lookups. Moreover, by using the proposed data structure, we have
limited the use of floating point operations to just the binary searches needed to find the
row indices corresponding to the hyperplanes. This feature is critical to the efficiency of the
proposed algorithm, when compared with competing ones. It not only facilitates a simple
software implementation, but also permits the implementation of a hardware search engine.
As previously stated, the algorithm needs to be supplied with an "appropriate" ε prior
to search. This is possible for a large class of problems (in pattern recognition, for instance)
where a match can be declared only if the novel point Q is sufficiently close to a database
point. It is reasonable to assume that ε is given a priori; however, the choice of ε can prove
problematic if this is not the case. One solution is to set ε large, but this might seriously
impact performance. On the other hand, a small ε could result in the hypercube being
empty. How do we determine an optimal ε for a given problem? How exactly does ε affect
the performance of the algorithm? We seek answers to these questions in the following
section.
4 Complexity
In this section, we analyze the computational complexity of data structure storage,
construction and nearest neighbor search. As we saw in the previous section, constructing
the data structure essentially amounts to sorting d arrays of size n. This can be done in
O(dn log 2 n) time. The only additional storage necessary is to hold the forward and backward
maps. This requires space O(nd). For nearest neighbor search, the major computational cost
lies in candidate list construction and trimming. The number of points initially
added to the candidate list depends not only on ε, but also on the distribution of data in the
point set and the location of the novel point Q. Hence, to facilitate analysis, we structure
the problem by assuming widely used distributions for the point set. The following notation
is used. Random variables are denoted by uppercase letters, for instance, Q. Vectors are in
bold, such as q. Suffixes denote individual elements of vectors; for instance, Q_k
is the k th element of vector Q. The probability density of a continuous random variable Q
is written as f_Q(q).
Figure 3: The projection of the point set and the novel point onto one of the dimensions of the
search space. The number of points inside bin B is given by the binomial distribution.
Figure 3 shows the novel point Q and a set of n points in 2-D space drawn from a known
distribution. Recall that the candidate list is initialized with the points sandwiched between
a hyperplane pair in the first dimension, or more generally, in the c th dimension. This
corresponds to the points inside bin B in Figure 3, where the entire point set and Q are
projected onto the c th coordinate axis. The boundaries of bin B are where the hyperplanes
intersect the axis c, at Q_c − ε and Q_c + ε. Let M_c be the number of points in bin B. In order
to determine the average number of points added to the candidate list, we must compute
E[M_c | Q_c]. Define Z_c to be the distance between Q_c and any point on the candidate list. The
distribution of Z_c may be calculated from the distribution of the point set. Define P_c
to be the probability that any projected point in the point set is within distance ε from Q_c;
that is,

P_c = P( |Z_c| ≤ ε | Q_c ) = ∫_{−ε}^{ε} f_{Z_c|Q_c}(z) dz .    (1)

It is now possible to write an expression for the density of M_c in terms of P_c. Irrespective of
the distribution of the points, M_c is binomially distributed 2:

P( M_c = m | Q_c ) = C(n, m) P_c^m (1 − P_c)^{n−m} .    (2)

2 This is equivalent to the elementary probability problem: given that a success (a point is within bin
B) can occur with probability P_c, the number of successes that occur in n independent trials (points) is
binomially distributed.
From the above expression, the average number of points in bin B, E[M_c | Q_c], is easily
determined to be

E[M_c | Q_c] = n P_c .    (3)

Note that E[M_c | Q_c] is itself a random variable that depends on c and the location of Q. If
the distribution of Q is known, the expected number of points in the bin can be computed as
E[M_c] = n E[P_c]. Since we perform one lookup in the backward map for every point
between a hyperplane pair, and this is the main computational effort, equation (3) directly
estimates the cost of candidate list construction.
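Equation (3) is easy to verify empirically. The sketch below (a hypothetical simulation of our own; the value P_c = 2ε/l for a uniform point set is anticipated from Section 4.1) counts the points falling in bin B and compares the count with nP_c:

```python
import random

random.seed(0)
n, eps, l, qc = 10000, 0.1, 1.0, 0.0
pc = 2 * eps / l                      # P_c for a uniform point set (Section 4.1)
xs = [random.uniform(-l / 2, l / 2) for _ in range(n)]
m = sum(1 for x in xs if abs(x - qc) <= eps)
print(m, n * pc)                      # observed bin count vs. expected n * P_c
```

The observed count fluctuates binomially around nP_c, as equation (2) predicts.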
Next, we derive an expression for the total number of points remaining on the candidate
list as we trim through the dimensions in the sequence c_1, c_2, . . . , c_d. Recall that in
iteration k, we perform a forward map lookup for every point in the candidate list and check whether
it lies between the c_k th hyperplane pair. How many points on the candidate list lie between
this hyperplane pair? Once again, equation (3) can be used, this time replacing n with the
number of points on the candidate list rather than the entire point set. We assume that the
point set is independently distributed. Hence, if N_k is the total number of points on the
candidate list before iteration k,

N_k = n ∏_{i=1}^{k−1} P_{c_i} .    (4)

Define N to be the total cost of constructing and trimming the candidate list. For each
trim, we need to perform one forward map lookup and two integer comparisons. Hence, if
we assign one cost unit to each of these operations, an expression for N can be written with
the aid of equation (4) as

N = Σ_{k=1}^{d} n ∏_{i=1}^{k} P_{c_i} ,    (5)

which, on the average is

E[N] = Σ_{k=1}^{d} n E[ ∏_{i=1}^{k} P_{c_i} ] .    (6)
Equation (6) suggests that if the distributions f_Q(q) and f_Z(z) are known, we can compute
the average cost E[N] in terms of ε. In the following sections, we examine two
cases of particular interest: (a) Z is uniformly distributed, and (b) Z is normally distributed.
Note that we have left out the cost of exhaustive search on points within the final hypercube.
The reason is that the cost of an exhaustive search depends on the distance metric used.
This cost is, however, very small and can be neglected in most cases when n ≫ d. If it needs
to be considered, it can be added to equation (6).
We end this section with an observation. We mentioned earlier that it is
advantageous to examine the dimensions in a specific order. What is this order? By expanding
the summation and product and by factoring terms, equation (5) can be rewritten as

N = n P_{c_1} ( 1 + P_{c_2} ( 1 + P_{c_3} ( 1 + · · · ) ) ) .

It is immediate that the value of N is minimum when P_{c_1} ≤ P_{c_2} ≤ · · · ≤ P_{c_d}. In other
words, the sequence c_1, c_2, . . . , c_d should be chosen such that the numbers of sandwiched points between
hyperplane pairs are in ascending order. This can be easily ensured by simply sorting
the numbers of sandwiched points. Note that there are only d such numbers, which can
be obtained in time O(d) by simply taking the difference of the indices returned by each
pair of binary searches into the ordered set. Further, the cost of sorting these numbers
is O(d log 2 d) by heapsort; these costs are negligible in any problem of
reasonable dimensionality.
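The claim can be checked numerically. Using the unit-cost model of equation (5), the sketch below (our own illustration; the probability values are hypothetical) evaluates the total cost for every ordering of a set of per-dimension probabilities and confirms that the ascending order wins:

```python
from itertools import permutations

def trim_cost(n, order):
    """Total cost n*P1 + n*P1*P2 + ... from equation (5), unit costs."""
    cost, live = 0.0, float(n)
    for p in order:
        live *= p          # candidates surviving this trim
        cost += live
    return cost

probs = [0.5, 0.1, 0.3]    # hypothetical per-dimension probabilities P_c
best = min(permutations(probs), key=lambda o: trim_cost(100000, o))
print(best)                # the ascending ordering minimizes the cost
```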
4.1 Uniformly Distributed Point Set
We now look at the specific case of a point set that is uniformly distributed. If X is a point
in the point set, we assume an independent and uniform distribution with extent l on each
of its coordinates:

f_{X_c}(x) = 1/l if −l/2 ≤ x ≤ l/2, and 0 otherwise.    (8)
Figure 4: The average cost of the algorithm is independent of d and grows only linearly for small
ε. The point set in both cases is assumed to be uniformly distributed with extent l = 1. (a) The
point set contains 100,000 points in 5-D, 10-D, 15-D, 20-D and 25-D spaces. (b) The point set is
15-D and contains 50000, 75000, 100000, 125000 and 150000 points.
Using equation (8) and the fact that Z_c = X_c − Q_c, an expression for the density of Z_c can
be written as

f_{Z_c|Q_c}(z) = 1/l if −l/2 − Q_c ≤ z ≤ l/2 − Q_c, and 0 otherwise.    (9)

P_c can now be written as

P_c = ∫_{−ε}^{ε} f_{Z_c|Q_c}(z) dz ≤ ∫_{−ε}^{ε} (1/l) dz = 2ε / l .    (10)
Substituting equation (10) in equation (6) and considering the upper bound (worst case),
we get

E[N] ≤ n [ (2ε/l) + (2ε/l)² + · · · + (2ε/l)^d ] = n (2ε/l) ( 1 − (2ε/l)^d ) / ( 1 − 2ε/l ) .

By neglecting constants, we write

Cost = n ε ( 1 − ε^d ) / ( 1 − ε ) .    (11)
Figure 5: The average cost of the algorithm is independent of d and grows only linearly for small
ε. The point set in both cases is assumed to be normally distributed with variance σ = 1. (a) The
point set contains 100,000 points in 5-D, 10-D, 15-D, 20-D and 25-D spaces. (b) The point
set is 15-D and contains 50000, 75000, 100000, 125000 and 150000 points.
For small ε, we observe that ε^d ≈ 0, because of which the cost is independent of d:

Cost ≈ n ε / ( 1 − ε ) .

In Figure 4, equation (11) is plotted against ε for different d (Figure 4(a)) and different
n (Figure 4(b)), with l = 1. Observe that as long as ε < 0.25, the cost varies little with d
and is linearly proportional to n. This also means that keeping ε small is crucial to the
performance of the algorithm. As we shall see later, ε can in fact be kept small for many
problems. Hence, even though the cost of our algorithm grows linearly with n, ε is small
enough that in many real problems, it is better to pay this price of linearity, rather than an
exponential dependence on d.
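The bound above is a truncated geometric series, so its near-independence of d for small ε can be verified in a few lines (a sketch with assumed values n = 100000, ε = 0.1, l = 1):

```python
def expected_cost(n, d, eps, l=1.0):
    """Upper bound n * sum_{k=1..d} (2*eps/l)^k on the average cost."""
    r = 2.0 * eps / l
    return n * sum(r ** k for k in range(1, d + 1))

for d in (5, 10, 25):
    print(d, expected_cost(100000, d, 0.1))
```

For 2ε/l < 1, the series converges quickly, so adding dimensions barely changes the total, while the cost stays exactly linear in n.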
4.2 Normally Distributed Point Set
Next, we look at the case when the point set is normally distributed. If X is a point in the
point set, we assume an independent and normal distribution with variance σ on each of its
coordinates:

f_{X_c}(x) = ( 1 / (√(2π) σ) ) exp( −x² / (2σ²) ) .

As before, using Z_c = X_c − Q_c, an expression for the density of Z_c can be obtained:

f_{Z_c|Q_c}(z) = ( 1 / (√(2π) σ) ) exp( −(z + Q_c)² / (2σ²) ) .

P_c can then be written as

P_c = ∫_{−ε}^{ε} f_{Z_c|Q_c}(z) dz = (1/2) [ erf( (Q_c + ε) / (√2 σ) ) − erf( (Q_c − ε) / (√2 σ) ) ] .    (16)

This expression can be substituted into equation (6) and evaluated numerically to estimate
the cost for a given Q. Figure 5 shows the cost as a function of ε for different d (Figure 5(a))
and different n (Figure 5(b)). As with the uniform distribution, we observe that when ε < 1, the cost is nearly independent of d
and grows linearly with n. In a variety of pattern classification problems, data take the form
of individual Gaussian clusters or mixtures of Gaussian clusters. In such cases, the above
results can serve as the basis for complexity analysis.
5 Determining ε
It is apparent from the analysis in the preceding section that the cost of the proposed
algorithm depends critically on ε. Setting ε too high results in a huge increase in cost with
d, while setting it too small may result in an empty candidate list. Although the freedom
to choose ε may be attractive in some applications, it may prove non-intuitive and hard in
others. In such cases, can we automatically determine ε so that the closest point can be
found with high certainty? If the distribution of the point set is known, we can.
We first review well known facts about L p norms. Figure 6 illustrates these norms for
a few selected values of p. All points on these surfaces are equidistant (in the sense of the
respective norm) from the central point. More formally, the L p distance between two vectors
a and b is defined as

L_p(a, b) = ( Σ_{k=1}^{d} |a_k − b_k|^p )^{1/p} .

These distance metrics are also known as Minkowski-p metrics. So how are these relevant
to determining ε? The L 2 norm occurs most frequently in pattern recognition problems.
Unfortunately, candidate list trimming in our algorithm does not find points within L 2 , but
Figure 6: An illustration of various norms, also known as Minkowski p-metrics. All points on these
surfaces are equidistant from the central point. The L∞ metric bounds L p for all p.
within L∞ (i.e., the hypercube). Since L∞ bounds L 2 , one can naively perform an exhaustive
search inside L∞. However, as seen in Figure 7(a), this does not always correctly find the
closest point. Notice that P 2 is closer to Q than P 1 , although an exhaustive search within
the cube will incorrectly identify P 1 to be the closest. There is a simple solution to this
problem. When performing an exhaustive search, impose the additional constraint that only
points within an L 2 radius ε be considered (see Figure 7(b)). This, however, increases
the possibility that the hypersphere is empty. In the above example, for instance, P 1 will
be discarded and we would not be able to find any point. Clearly then, we need to account for
this fact in our automatic method of determining ε, which we describe next.
We propose two methods to automatically determine ε. The first computes the radius of
the smallest hypersphere that will contain at least one point with some (specified) probability.
ε is set to this radius and the algorithm proceeds to find all points within a circumscribing
hypercube of side 2ε. This method is, however, not efficient in very high dimensions, for the
following reason. As we increase dimensionality, the difference between the hypersphere and
hypercube volumes becomes so great that the hypercube "corners" contain far more points
than the inscribed hypersphere. Consequently, the extra effort necessary to perform L 2
distance computations on these corner points is eventually wasted. So rather than find the
circumscribing hypercube, in our second method, we simply find the length of a side of the
smallest hypercube that will contain at least one point with some (specified) probability. ε
can then be set to half the length of this side. This leads to the problem described earlier:
when searching, some points outside the hypercube can be closer in the L 2 sense than
points inside. We shall now describe both methods in detail and see how we can remedy
Figure 7: An exhaustive search within a hypercube may yield an incorrect result. (a) P 2 is closer
to Q than P 1 , but an exhaustive search within the cube will incorrectly identify P 1 as the
closest point. (b) This can be remedied by imposing the constraint that the exhaustive search
should consider only points within an L 2 distance ε from Q (given that the length of a side of the
hypercube is 2ε).
this problem.
5.1 Smallest Hypersphere Method
Let us now see how to analytically compute the minimum size of a hypersphere, given that
we want to guarantee that it is non-empty with probability p. Let the radius of such
a hypersphere be ε_hs. Let M be the total number of points within this hypersphere. Let Q
be the novel point and define ||Z|| to be the L 2 distance between Q and any point in the
point set. Once again, M is binomially distributed with the density

P( M = m | Q ) = C(n, m) P( ||Z|| ≤ ε_hs )^m ( 1 − P( ||Z|| ≤ ε_hs ) )^{n−m} .    (18)

Now, the probability p that there is at least one point in the hypersphere is simply

p = P( M ≥ 1 ) = 1 − ( 1 − P( ||Z|| ≤ ε_hs ) )^n .    (19)
Figure 8: ε can be computed using two methods: (a) By finding the radius of the smallest hypersphere
that will contain at least one point with high probability. A search is performed by setting
ε to this radius and constraining the exhaustive search within ε. (b) By finding the size of the
smallest hypercube that will contain at least one point with high probability. When searching, ε is
set to half the length of a side. Additional searches have to be performed in the areas marked in
bold.
The above equation suggests that if we know Q, the density f_{Z|Q}(z), and the probability
p, we can solve for ε_hs.
For example, consider the case when the point set is uniformly distributed with density
given by equation (9). The cumulative distribution function of ||Z|| is the uniform
distribution integrated within a hypersphere, which is simply its volume. Thus,

P( ||Z|| ≤ ε_hs ) = 2 π^{d/2} ε_hs^d / ( l^d d Γ(d/2) ) .    (20)

Substituting the above in equation (19) and solving for ε_hs, we get

ε_hs = [ ( 1 − (1 − p)^{1/n} ) l^d d Γ(d/2) / ( 2 π^{d/2} ) ]^{1/d} .    (21)
Using equation (21), ε_hs is plotted against probability for two cases. In Figure 9(a), d takes
values between 5 and 25 with n fixed at 100000, and in Figure 9(b), n takes values
between 50000 and 150000 with d fixed at 5. Both figures illustrate
an important property: large changes in the probability p result in very small
Figure 9: The radius ε necessary to find a point inside a hypersphere varies very little with
probability. This means that ε can be set to the knee where probability is close to unity. The point
set in both cases is uniformly distributed with extent l = 1. (a) The point set contains 100000
points in 5, 10, 15, 20 and 25 dimensional space. (b) The point set is 5-D and contains 50000, 75000,
100000, 125000 and 150000 points.
changes in ε_hs. This suggests that ε_hs can be set to the right hand "knee" of the curves,
where probability is very close to unity. In other words, it is easy to guarantee that at least
one point is within the hypersphere. A search can now be performed by setting the length
of a side of the circumscribing hypercube to 2ε_hs and by imposing the additional constraint
during exhaustive search that only points within an L 2 distance ε_hs be considered.
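A direct transcription of equation (21) might look as follows (a sketch; the function name is ours, and math.gamma is used with Γ(d/2 + 1) = (d/2) Γ(d/2)):

```python
import math

def eps_hypersphere(n, d, p, l=1.0):
    """Radius of the smallest hypersphere that contains at least one of n
    uniform points (extent l) with probability p -- equation (21)."""
    # P(||Z|| <= eps) = vol(ball) / l^d = pi^(d/2) * eps^d / (Gamma(d/2+1) * l^d)
    prob = 1.0 - (1.0 - p) ** (1.0 / n)
    return (prob * math.gamma(d / 2 + 1) * l ** d
            / math.pi ** (d / 2)) ** (1.0 / d)
```

Plugging the result back into equation (19) recovers p, which is how the sketch can be checked.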
5.2 Smallest Hypercube Method
As before, we attempt to analytically compute the size of the smallest hypercube, given
that we want to guarantee that it is non-empty with probability p. Let M be the
number of points within a hypercube of side 2ε_hc. Define Z_c to be the distance between the
c th coordinate of a point in the point set and the novel point Q. Once again, M is binomially
distributed with the density

P( M = m | Q ) = C(n, m) ( ∏_{c=1}^{d} P_c )^m ( 1 − ∏_{c=1}^{d} P_c )^{n−m} .    (22)

Now, the probability p that there is at least one point in the hypercube is simply
Figure 10: The value of ε necessary to find a point inside a hypercube varies very little with
probability. This means that ε can be set to the knee where probability is close to unity. The point
set in both cases is uniformly distributed with extent l = 1. (a) The point set contains 100000
points in 5, 10, 15, 20 and 25 dimensional space. (b) The point set is 5-D and contains 50000,
75000, 100000, 125000 and 150000 points.
p = 1 − ( 1 − ∏_{c=1}^{d} P_c )^n .    (23)

Again, the above equation suggests that if we know Q, the density f_{Z_c|Q_c}(z), and the probability
p, we can solve for ε_hc. For the specific case that the point set is uniformly distributed, an
expression for ε_hc can be obtained in closed form as follows. Let the density of the uniform
distribution be given by equation (9). Using equation (10), we get

∏_{c=1}^{d} P_c = ( 2 ε_hc / l )^d .    (24)

Substituting the above in equation (23) and solving for ε_hc, we get

ε_hc = (l/2) [ 1 − (1 − p)^{1/n} ]^{1/d} .    (25)
Using equation (25), ε_hc is plotted against probability for two cases. In Figure 10(a), d takes
values between 5 and 25 with n fixed at 100000, and in Figure 10(b), n takes values
between 50000 and 150000 with d fixed at 5. These are similar to the graphs
obtained in the case of a hypersphere and again, ε_hc can be set to the right hand "knee" of
the curves, where probability is very close to unity. Notice that the value of ε_hc required
for the hypercube is much smaller than that required for the hypersphere, especially in high
d. This is precisely the reason why we prefer the second (smallest hypercube) method.
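Equation (25) transcribes just as directly (again a sketch with our own naming):

```python
def eps_hypercube(n, d, p, l=1.0):
    """Half the side of the smallest hypercube that contains at least one
    of n uniform points (extent l) with probability p -- equation (25)."""
    return (l / 2.0) * (1.0 - (1.0 - p) ** (1.0 / n)) ** (1.0 / d)
```

As with the hypersphere, substituting the result back into equation (23), via equation (24), recovers the requested probability p.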
Recall that it is not sufficient to simply search for the closest point within a hypercube,
because a point outside can be closer than a point inside. To remedy this problem, we
suggest the following technique. First, an exhaustive search is performed to compute the
distance to the closest point within the hypercube. Call this distance r. In Figure 8(b),
the closest point P 1 within the hypercube is at a distance of r from Q. Clearly, if a closer
point exists, it can only be within a hypersphere of radius r. Since parts of this hypersphere
lie outside the original hypercube, we also search in the hyper-rectangular regions shown in
bold (by performing additional list trimmings). When performing an exhaustive search in
each of these hyper-rectangles, we impose the constraint that a point is considered only if it
is less than distance r from Q. In Figure 8(b), P 2 is present in one such hyper-rectangular
region and happens to be closer to Q than P 1 . Although this method is more complicated,
it gives excellent performance in sparsely populated high dimensional spaces (such as a high
dimensional uniform distribution).
To conclude, we wish to emphasize that both the hypercube and hypersphere methods can
be used interchangeably and both are guaranteed to find the closest point within ε. However,
the choice between these methods should depend on the dimensionality of
the space and the local density of points. In densely populated low dimensional spaces,
the hypersphere method performs quite well and searching the hyper-rectangular regions
is not worth the additional overhead. In sparsely populated high dimensional spaces, the
effort needed to exhaustively search the huge circumscribing hypercube is far more than the
overhead of searching the hyper-rectangular regions. It is, however, difficult to analytically
predict which of these methods suits a particular class of data. Hence, we encourage the
reader to implement both methods and use the one which performs best. Finally,
although the above discussion is relevant only to the L 2 norm, an equivalent analysis can
easily be performed for any other norm.
6 Benchmarks
We have performed an extensive set of benchmarks on the proposed algorithm. We looked
at two representative classes of search problems that may benefit from the algorithm. In
the first class, the data has statistical structure. This is the case, for instance, when points
are uniformly or normally distributed. The second class of problems are statistically un-
structured, for instance, when points lie on a high dimensional multivariate manifold, and it
is difficult to say anything about their distribution. In this section, we will present results
for benchmarks performed on statistically structured data. For benchmarks on statistically
unstructured data, we refer the reader to section 7.
We tested two commonly occurring distributions, normal and uniform. The proposed algorithm
was compared with the k-d tree and exhaustive search algorithms. Other algorithms
were not included in this benchmark because they did not yield comparable performance.
For the first set of benchmarks, two normally distributed point sets containing 30,000 and
100,000 points with variance 1.0 were used. To test the per search execution time, another
set of points, which we shall call the test set, was constructed. The test set contained 10,000
points, also normally distributed with variance 1.0. For each algorithm, the execution time
was calculated by averaging over the total time required to perform a nearest neighbor search
on each of the 10,000 points in the test set. To determine ε, we used the 'smallest hypercube'
method described in Section 5.2. Since the point set is normally distributed, we cannot use a
closed form solution for ε. However, it can be computed numerically as follows. Substituting
equation (16) into equation (23), we get

p = 1 − ( 1 − ∏_{c=1}^{d} (1/2) [ erf( (Q_c + ε) / (√2 σ) ) − erf( (Q_c − ε) / (√2 σ) ) ] )^n .    (26)

By setting p (the probability that there is at least one point in the hypercube) to .99 and
σ (the variance) to 1.0, we computed ε for each search point Q using the fast and simple
bisection technique [
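Since p in equation (26) is monotone in ε, the bisection is straightforward. A sketch (our own naming; σ = 1.0 and p = .99 follow the benchmark settings) might look like:

```python
import math

def p_nonempty(eps, q, n, sigma=1.0):
    """Probability that the hypercube of side 2*eps at q holds at least one
    of n normally distributed points -- equation (26)."""
    prod = 1.0
    for qc in q:
        pc = 0.5 * (math.erf((qc + eps) / (math.sqrt(2) * sigma))
                    - math.erf((qc - eps) / (math.sqrt(2) * sigma)))
        prod *= pc
    return 1.0 - (1.0 - prod) ** n

def solve_eps(q, n, p=0.99, sigma=1.0, lo=1e-6, hi=10.0):
    """Bisection on the monotone function p_nonempty(eps) - p."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if p_nonempty(mid, q, n, sigma) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The per-query cost of this step is negligible next to the search itself, since each bisection iteration evaluates only d error functions.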
Figures 11(a) and 11(b) show the average execution time per search when the point set
contains 30,000 and 100,000 points respectively. These execution times include the time
taken for search, the computation of ε using equation (26), and the time taken for the few (1%)
additional 3 searches necessary when a point was not found within the hypercube. Although
3 When a point was not found within the hypercube, we incremented ε by 0.1 and searched again. This
Figure 11: The average execution time of the proposed algorithm is benchmarked for statistically
structured problems; each plot compares the proposed algorithm with the k-d tree and exhaustive
search. (a) The point set is normally distributed with variance 1.0 and contains
30,000 points. (b) The point set is normally distributed with variance 1.0 and contains 100,000
points. The proposed algorithm is clearly faster in high d. (c) The point set is uniformly distributed
with extent 1.0 and contains 30,000 points. (d) The point set is uniformly distributed with extent
1.0 and contains 100,000 points. The proposed algorithm does not perform as well for uniform
distributions due to the extreme sparseness of the point set in high d.
ffl varies for each Q, values of ffl for a few sample points are as follows. For the
values of ffl at the point corresponding to
respectively. At the point the values of ffl were
0:24; 0:61; 0:86; 1:04; 1:17 corresponding to respectively. For
the values of ffl at the point corresponding
to respectively. At the point the values of ffl were
corresponding to
Notice that the proposed algorithm is faster than the k-d tree algorithm for all d in Figure 11(a).
In Figure 11(b), the proposed algorithm is faster for d > 12. Also notice that the k-d
tree algorithm actually runs slower than exhaustive search for d > 15. The reason for this
observation is as follows. In high dimensions, the space is so sparsely populated that the
radius of the query hypersphere is very large. Consequently, the hypersphere intersects
almost all the buckets and thus a large number of points are examined. This, along with the
additional overhead of traversing the tree structure, makes it very inefficient to search the
sparse high dimensional space.
For the second set of benchmarks, we used uniformly distributed point sets containing
30,000 and 100,000 points with extent 1.0. The test set contained 10,000 points, also
uniformly distributed with extent 1.0. The execution time per search was calculated by
averaging over the total time required to perform a closest point search on each of the 10,000
points in the test set. As before, to determine ε, the 'smallest hypercube' method described
in Section 5.2 was used. Recall that for uniformly distributed point sets, ε can be computed
in closed form using equation (25). Figures 11(c) and 11(d) show execution times
when the point set contains 30,000 and 100,000 points respectively. For the
values of ffl were corresponding to
tively. For the values of ffl were corresponding to
respectively. For the uniform distribution, the proposed algorithm does not
perform as well, although it does appear to be slightly faster than the k-d tree and exhaustive
search algorithms. The reason is that the high dimensional space is very sparsely populated
and hence requires ε to be quite large. As a result, the algorithm ends up examining almost
all points, thereby approaching exhaustive search.
process was repeated until a point was found.
7 An Example Application: Appearance Matching
We now demonstrate two applications where a fast and efficient high dimensional search
technique is desirable. The first, real time object recognition, requires the closest point to be
found among 36,000 points in a 35-D space. In the second, the closest point is required to be
found from points lying on a multivariate high dimensional manifold. Both these problems
are examples of statistically unstructured data.
Let us briefly review the object recognition technique of Murase and Nayar [ 24 ] . Object
recognition is performed in two phases: 1) appearance learning phase, and 2) appearance
recognition phase. In the learning phase, images of each of the hundred objects in all poses
are captured. These images are used to compute a high dimensional subspace, called the
eigenspace. The images are projected to eigenspace to obtain discrete high dimensional
points. A smooth curve is then interpolated through points that belong to the same object.
In this way, for each object, we get a curve (or a univariate manifold) parameterized by its
pose. Once we have the manifolds, the second phase, object recognition, is easy. An image
of an object is projected to eigenspace to obtain a single point. The manifold closest to this
point identifies the object. The closest point on the manifold identifies the pose. Note that
the manifold is continuous, so in order to find the closest point on the manifold, we need to
finely sample it to obtain discrete closely spaced points.
For our benchmark, we used the Columbia Object Image Library [ along with the
SLAM software package [ 28 ] to compute 100 univariate manifolds in a 35-D eigenspace.
These manifolds correspond to appearance models of the 100 objects (20 of the 100 objects
shown in Figure 12(a)). Each of the 100 manifolds were sampled at 360 equally spaced
points to obtain 36,000 discrete points in 35-D space. It was impossible to manually capture
the large number of object images that would be needed for a large test set. Hence, we
automatically generated a test set of 100,000 points by sampling the manifolds at random
locations. This is roughly equivalent to capturing actual images, but, without image sensor
noise, lens blurring, and perspective projection effects. It is important to simulate these
effects because they cause the projected point to shift away from the manifold and hence,
substantially affect the performance of nearest neighbor search algorithms 4 .
4 For instance, in the k-d tree, a large query hypersphere would result in a large increase in the number
of adjacent buckets that may have to be searched.
Algorithm Time (secs.)
Proposed Algorithm .0025
k-d tree .0045
Exhaustive Search .1533
Projection Search .2924
(b)
Figure 12: The proposed algorithm was used to recognize and estimate the pose of a hundred objects
using the Columbia Object Image Library. (a) Twenty of the hundred objects are shown. The point
set consisted of 36,000 points (360 for each object) in 35-D eigenspace. (b) The average execution
time per search is compared with other algorithms.
Unfortunately, it is very difficult to relate image noise, perspective projection and other
distortion effects to the location of points in eigenspace. Hence, we used a simple model
where we add uniformly distributed noise with extent 5 .01 to each of the coordinates of
points in the test set. We found that this approximates real-world data. We determined
that setting gave us good recognition accuracy. Figure 12(b) shows the time taken
per search by the different algorithms. The search time was calculated by averaging the
total time taken to perform 100,000 closest point searches using points in the test set. It
can be seen that the proposed algorithm outperforms all the other techniques. ε was set to
a predetermined value such that a point was found within the hypersphere every time. For
object recognition, it is useful to search for the closest point within ffl because this provides
us with a means to reject points that are "far" from the manifold (most likely from objects
not in the database).
Next, we examine another case when data is statistically unstructured. Here, the closest
point is required to be found from points lying on a single smooth multivariate high dimensional
manifold. Such a manifold appears frequently in appearance matching problems such
as visual tracking [ 26 ] , visual inspection [ 26 ] , and parametric feature detection [ 25 ] . As with
object recognition, the manifold is a representation of visual appearance. Given a novel
appearance (point), matching involves finding a point on the manifold closest to that point.
Given that the manifold is continuous, to pose appearance matching as a nearest neighbor
problem, as before, we sample the manifold densely to obtain discrete closely spaced points.
The trivariate manifold we used in our benchmarks was obtained from a visual tracking
experiment conducted by Nayar et al. [ 26 ] . In the first benchmark, the manifold was sampled
to obtain 31,752 discrete points. In the second benchmark, it was sampled to obtain 107,163
points. In both cases, a test set of 10,000 randomly sampled manifold points was used. As
explained previously, noise (with extent .01) was added to each coordinate in the test set.
The execution time per search was averaged over this test set of 10,000 points. For this point
set, it was determined that gave good recognition accuracy. Figure 13(a) shows the
algorithm to be more than two orders of magnitude faster than the other algorithms. Notice
the exponential behaviour of the R-tree algorithm. Also notice that Yunck's algorithm is
5 The extent of the eigenspace is from -1.0 to +1.0. The maximum noise amplitude is hence about 0.5%
of the extent of eigenspace.
Figure 13: The average execution time of the proposed algorithm is benchmarked for an unstructured
problem. The point set is constructed by sampling a high dimensional trivariate manifold.
(a) The manifold is sampled to obtain 31,752 points; the plot compares the proposed algorithm
with the R-tree, exhaustive search, projection search and Yunck's search. The proposed algorithm
is more than two orders of magnitude faster than the other algorithms. (b) The manifold is sampled
as before to obtain 31,752 points; the plot compares the proposed algorithm with the k-d tree.
(c) The manifold is sampled to obtain 107,163 points. The k-d tree algorithm is slightly faster in
low dimension but degrades rapidly with increase in dimension.
only slightly faster than Friedman's; the difference is due to use of integer operations. We
could only benchmark Yunck's algorithm till due to use of a 32-bit word in the
indicator array. In Figure 13(b), it can be seen that the proposed algorithm is faster than
the k-d tree for all d, while in Figure 13(c), the proposed algorithm is faster for all d > 21.
8 Hardware Architecture
A major advantage of our algorithm is its simplicity. Recall that the main computations
performed by the algorithm are simple integer map lookups (backward and forward maps)
and two integer comparisons (to see if a point lies within hyperplane boundaries). Consequently,
it is possible to implement the algorithm in hardware using off-the-shelf, inexpensive
components. This is hard to envision in the case of any competitive techniques such as k-d
trees or R-trees, given the difficulties involved in constructing parallel stack machines.
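The per-candidate work described above (map lookups plus two integer range comparisons) can be sketched in software as follows; the names `forward`, `lo`, and `hi` are illustrative, not taken from the paper's implementation:

```python
def trim(candidates, forward, lo, hi):
    """Trim a candidate list using only integer lookups and comparisons.

    candidates: point indices produced by the initial binary search.
    forward[j][p]: rank of point p in the list sorted on coordinate j.
    lo[j], hi[j]: ranks of the hyperplane boundaries in dimension j.
    """
    survivors = []
    for p in candidates:
        # A point survives only if its rank lies within the slab in every dimension.
        if all(lo[j] <= forward[j][p] <= hi[j] for j in range(len(forward))):
            survivors.append(p)
    return survivors
```

Because the test for each candidate is independent, this inner loop maps naturally onto the parallel comparators discussed later in this section.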
The proposed architecture is shown in Figure 14. A Field Programmable Gate Array (FPGA)
acts as the algorithm state machine controller and performs I/O with the CPU. The
Dynamic RAMs (DRAMs) hold the forward and backward maps which are downloaded from
the CPU during initialization. The CPU initiates a search by performing a binary search
to obtain the hyperplane boundaries. These are then passed on to the search engine and
held in the Static RAMs (SRAMs). The FPGA then independently begins the candidate list
construction and trimming. A candidate is looked up in the backward map and each of the
forward maps. The integer comparator returns a true if the candidate is within range, else it
is discarded. After trimming all the candidate points by going through the dimensions, the
final point list (in the form of point set indices) is returned to the CPU for exhaustive search
and/or further processing. Note that although we have described an architecture with a
single comparator, any number of them can be added and run in parallel with a near linear
performance scaling in the number of comparators. While the search engine is trimming the
candidate list, the CPU is of course free to carry out other tasks in parallel.
We have begun implementation of the proposed architecture. The result is intended to
be a small low-cost SCSI based module that can be plugged in to any standard workstation
or PC. We estimate the module to result in a 100 fold speedup over an optimized software
implementation.
[Figure 14 block diagram: an FPGA/PLD algorithm control unit connects over a control bus to the CPU and to the DRAM-resident backward and forward maps; a comparator checks candidates against the lower and upper limits held in SRAM and raises a within-limit flag.]
Figure 14: Architecture for an inexpensive hardware search engine that is based on the proposed
algorithm.
9 Discussion
9.1 k Nearest Neighbor Search
In Section 5, we saw that it is possible to determine the minimum value of ε necessary
to ensure that at least one point is found within a hypercube or hypersphere with high
probability. It is possible to extend this notion to ensure that at least k points are found
with high certainty. Recall that the probability that there exists at least one point in a
hypersphere of radius ε is given by equation (19). Now define p_k to be the probability that
there are at least k points within the hypersphere. The resulting expression for p_k can be
substituted in equation (18) and, given p_k, numerically solved for ε_hs. Similarly, it can be
substituted in equation (22) to compute the minimum value of ε_hc for a hypercube.
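As an illustration only (the paper's own expression for p_k is not reproduced here), if each of n points independently falls inside the hypersphere with probability q, the probability of finding at least k of them is the binomial tail:

```python
from math import comb

def prob_at_least_k(n, q, k):
    """P(at least k of n independent points land in a region of probability q)."""
    # Complement of the binomial CDF evaluated at k - 1.
    return 1.0 - sum(comb(n, j) * q**j * (1.0 - q)**(n - j) for j in range(k))
```

Given a target p_k, this tail can be inverted numerically for q, and hence for the corresponding ε, in the same spirit as the k = 1 case described in the text.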
9.2 Dynamic Point Insertion and Deletion
Currently, the algorithm uses d floating point arrays to store the ordered set, and 2d integer
arrays to store the backward and forward maps. As a result, it is not possible to efficiently
insert or delete points in the search space. This limitation can be easily overcome if the
ordered set is not stored as an array but as a set of d binary search trees (BST) (each
BST corresponds to an array of the ordered set). Similarly, the d forward maps have to be
replaced with a single linked list. The backward maps can be done away with completely
as the indices can be made to reside within a node of the BST. Although BSTs would allow
efficient insertion and deletion, nearest neighbor searches would no longer be as efficient as
with integer arrays. Also, in order to get maximum efficiency, the BSTs would have to be
well balanced (see [19] for a discussion of balancing techniques).
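A minimal dynamic variant can be sketched with one sorted sequence per dimension; here Python lists with bisect stand in for the balanced BSTs suggested above (lists make rank queries easy but pay linear shifts on insertion, which is precisely the cost BSTs would avoid):

```python
import bisect

class DynamicOrderedSet:
    """One sorted coordinate list per dimension, supporting insertion
    and rank-range queries for the trimming step. Illustrative sketch."""

    def __init__(self, d):
        self.axes = [[] for _ in range(d)]

    def insert(self, point):
        for j, x in enumerate(point):
            bisect.insort(self.axes[j], x)

    def rank_range(self, j, lo, hi):
        # Ranks of points whose j-th coordinate lies in [lo, hi].
        return (bisect.bisect_left(self.axes[j], lo),
                bisect.bisect_right(self.axes[j], hi))
```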
9.3 Searching with Partial Data
It is often necessary to search for the nearest neighbor in the absence of complete
data. For instance, consider an application which requires features to be extracted from an
image and then matched against other features in a feature space. Now, if it is not possible
to extract all features, then the matching has to be done partially. It is trivial to adapt
our algorithm to such a situation: while trimming the list, one need only examine the
dimensions for which data is available. This is hard to envision in the case of k-d trees, for
example, because the space has been partitioned by hyperplanes in particular dimensions.
So, when traversing the tree to locate the bucket that contains the query point, it is not
possible to choose a traversal direction at a node if data corresponding to the partitioning
dimension at that node is missing from the query point.
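Concretely, the trimming loop simply skips missing dimensions; `available` below lists the dimensions for which the query has data (illustrative names, not the paper's code):

```python
def trim_partial(candidates, forward, lo, hi, available):
    """Trim candidates using only the dimensions present in the query.

    forward[j][p]: rank of point p in the list sorted on coordinate j.
    lo[j], hi[j]: rank bounds for dimension j (used only for j in available).
    """
    return [p for p in candidates
            if all(lo[j] <= forward[j][p] <= hi[j] for j in available)]
```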
Acknowledgements
We wish to thank Simon Baker and Dinkar Bhat for their detailed comments, criticisms and
suggestions which have helped greatly in improving the paper.
This research was conducted at the Center for Research on Intelligent Systems at the
Department of Computer Science, Columbia University. It was supported in part by ARPA
Contract DACA-76-92-C-007, DOD/ONR MURI Grant N00014-95-1-0601, and an NSF National
Young Investigator Award.
--R
The Design and Analysis of Computer Algorithms.
Nearest neighbor searching and applications.
Voronoi diagrams - a survey of a fundamental geometric data structure.
Multidimensional binary search trees used for associative searching.
Multidimensional binary search trees in database applications.
Data structures for range searching.
Optimal expected-time algorithms for closest point problems
Multidimensional indexing for recognizing visual shapes.
A randomized algorithm for closest-point queries
Multidimensional searching problems.
Algorithms in Combinatorial Geometry.
Fast nearest-neighbor search in dissimilarity spaces
An algorithm for finding nearest neighbors
An algorithm for finding best matches in logarithmic expected time.
A branch and bound algorithm for computing k-nearest neighbors
An effective way to represent quadtrees.
A dynamic index structure for spatial searching.
Fundamentals of Data Structures in.
On the complexity of d-dimensional voronoi diagrams
Sorting and Searching
A new version of the nearest-neighbor approximating and eliminating search algorithm (aesa) with linear preprocessing time and memory requirements
Visual learning and recognition of 3d objects from appearance
Parametric feature detection.
Slam: A software library for appearance matching.
Digital Pictures
Similarity searching in large image databases.
Computational Geometry: An Introduction.
Numerical Recipes in C.
Direct spatial search on pictorial databases using packed r-trees
Refinements to nearest-neighbor searching in k-dimensional trees
Reducing the overhead of the aesa metric-space nearest neighbour searching algorithm
Data structures and algorithms for nearest neighbor search in general metric spaces.
A technique to identify nearest neighbors.
--TR
--CTR
James McNames, A Fast Nearest-Neighbor Algorithm Based on a Principal Axis Search Tree, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.23 n.9, p.964-976, September 2001
David W. Patterson , Mykola Galushka , Niall Rooney, Characterisation of a Novel Indexing Technique for Case-Based Reasoning, Artificial Intelligence Review, v.23 n.4, p.359-393, June 2005
Philip Quick , David Capson, Subspace position measurement in the presence of occlusion, Pattern Recognition Letters, v.23 n.14, p.1721-1733, December 2002
T. Freeman , Thouis R. Jones , Egon C Pasztor, Example-Based Super-Resolution, IEEE Computer Graphics and Applications, v.22 n.2, p.56-65, March 2002
Bin Zhang , Sargur N. Srihari, Fast k-Nearest Neighbor Classification Using Cluster-Based Trees, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.26 n.4, p.525-528, April 2004
Jim Z. C. Lai , Yi-Ching Liaw , Julie Liu, Fast k-nearest-neighbor search based on projection and triangular inequality, Pattern Recognition, v.40 n.2, p.351-359, February, 2007
Lin Liang , Ce Liu , Ying-Qing Xu , Baining Guo , Heung-Yeung Shum, Real-time texture synthesis by patch-based sampling, ACM Transactions on Graphics (TOG), v.20 n.3, p.127-150, July 2001
Fast texture synthesis using tree-structured vector quantization, Proceedings of the 27th annual conference on Computer graphics and interactive techniques, p.479-488, July 2000
Yong-Sheng Chen , Yi-Ping Hung , Ting-Fang Yen , Chiou-Shann Fuh, Fast and versatile algorithm for nearest neighbor search based on a lower bound tree, Pattern Recognition, v.40 n.2, p.360-375, February, 2007
Mineichi Kudo , Naoto Masuyama , Jun Toyama , Masaru Shimbo, Simple termination conditions for k-nearest neighbor method, Pattern Recognition Letters, v.24 n.9-10, p.1203-1213, 01 June
John T. Favata, Offline General Handwritten Word Recognition Using an Approximate BEAM Matching Algorithm, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.23 n.9, p.1009-1021, September 2001
Edgar Chávez , Gonzalo Navarro, A compact space decomposition for effective metric indexing, Pattern Recognition Letters, v.26 n.9, p.1363-1376, 1 July 2005
Aaron Hertzmann , Steven M. Seitz, Example-Based Photometric Stereo: Shape Reconstruction with General, Varying BRDFs, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.27 n.8, p.1254-1264, August 2005
O. Stahlhut, Extending natural textures with multi-scale synthesis, Graphical Models, v.67 n.6, p.496-517, November 2005
Gonzalo Navarro, Searching in metric spaces by spatial approximation, The VLDB Journal The International Journal on Very Large Data Bases, v.11 n.1, p.28-46, August 2002
Edgar Chávez , José L. Marroquín , Gonzalo Navarro, Fixed Queries Array: A Fast and Economical Data Structure for Proximity Searching, Multimedia Tools and Applications, v.14 n.2, p.113-135, June 2001
Jin-xiang Chai , Jing Xiao , Jessica Hodgins, Vision-based control of 3D facial animation, Proceedings of the ACM SIGGRAPH/Eurographics symposium on Computer animation, July 26-27, 2003, San Diego, California
Edgar Chávez , Gonzalo Navarro , Ricardo Baeza-Yates , José Luis Marroquín, Searching in metric spaces, ACM Computing Surveys (CSUR), v.33 n.3, p.273-321, September 2001
Huzefa Neemuchwala , Alfred Hero , Paul Carson, Image matching using alpha-entropy measures and entropic graphs, Signal Processing, v.85 n.2, p.277-296, February 2005
Gisli R. Hjaltason , Hanan Samet, Index-driven similarity search in metric spaces, ACM Transactions on Database Systems (TODS), v.28 n.4, p.517-580, December
Richard Szeliski, Image alignment and stitching: a tutorial, Foundations and Trends in Computer Graphics and Vision, v.2 n.1, p.1-104, January 2006 | object recognition;benchmarks;nearest neighbor;searching by slicing;visual correspondence;pattern classification;hardware architecture |
265213 | Design and Evaluation of a Window-Consistent Replication Service. | AbstractReal-time applications typically operate under strict timing and dependability constraints. Although traditional data replication protocols provide fault tolerance, real-time guarantees require bounded overhead for managing this redundancy. This paper presents the design and evaluation of a window-consistent primary-backup replication service that provides timely availability of the repository by relaxing the consistency of the replicated data. The service guarantees controlled inconsistency by scheduling update transmissions from the primary to the backup(s); this ensures that client applications interact with a window-consistent repository when a backup must supplant a failed primary. Experiments on our prototype implementation, on a network of Intel-based PCs running RT-Mach, show that the service handles a range of client loads while maintaining bounds on temporal inconsistency. | Introduction
Many embedded real-time applications, such as automated manufacturing and process control, require
timely access to a fault-tolerant data repository. Fault-tolerant systems typically employ some form of
redundancy to insulate applications from failures. Time redundancy protects applications by repeating
computation or communication operations, while space redundancy masks failures by replicating
physical resources. The time-space tradeoffs employed in most systems may prove inappropriate for
achieving fault tolerance in a real-time environment. In particular, when time is scarce and the overhead
for managing redundancy is too high, alternative approaches must balance the trade-off between
timing predictability and fault tolerance.
For example, consider the process-control system shown in Figure 1(a). A digital controller supports
monitoring, control, and actuation of the plant (external world). The controller software executes a
The work reported in this paper was supported in part by the National Science Foundation under Grant MIP-9203895.
Any opinions, findings, and conclusions or recommendations expressed in this paper are those of the authors and do not
necessarily reflect the view of the NSF.
[Figure 1 diagrams: a controller with sensors and actuators interacting with the plant (external world) and maintaining an in-memory repository; in (b), primary and backup controllers share a replicated in-memory repository.]
Figure 1: Computer control system. (a) Digital controller interacting with a plant. (b) Primary-backup control system.
tight loop, sampling sensors, calculating new values, and sending signals to external devices under
its control. It also maintains an in-memory data repository which is updated frequently during each
iteration of the control loop. The data repository must be replicated on a backup controller to meet the
strict timing constraint on system recovery when the primary controller fails, as shown in Figure 1(b).
In the event of a primary failure, the system must switch to the backup node within a few hundred
milliseconds. Since there can be hundreds of updates to the data repository during each iteration of
the control loop, it is impractical (and perhaps impossible) to update the backup synchronously each
time the primary repository changes.
An alternative solution exploits the data semantics in a process-control system by allowing the
backup to maintain a less current copy of the data that resides on the primary. The application may
have distinct tolerances for the staleness of different data objects. With sufficiently recent data, the
backup can safely supplant a failed primary; the backup can then reconstruct a consistent system state
by extrapolating from previous values and new sensor readings. However, the system must ensure that
the distance between the primary and the backup data is bounded within a predefined time window.
Data objects may have distinct tolerances in how far the backup can lag behind before the object state
becomes stale. The challenge is to bound the distance between the primary and the backup such that
consistency is not compromised, while minimizing the overhead in exchanging messages between the
primary and its backup.
This paper presents the design and implementation of a data replication service that combines
fault-tolerant protocols, real-time scheduling, and temporal consistency semantics to accommodate
such system requirements [24, 29]. A client application registers a data object with the service by
declaring the consistency requirements for the data, in terms of a time window. The primary selectively
transmits to the backup, as opposed to sending an update every time an object changes, to bound
both resource utilization and data inconsistency. The primary ensures that each backup site maintains
a version of the object that was valid on the primary within the preceding time window by scheduling
these update messages.
The next section discusses related work on fault-tolerant protocols and relaxed consistency se-
mantics, with an emphasis on supporting real-time applications. Section 3 describes the proposed
window-consistent primary-backup architecture and replication protocols for maintaining controlled
inconsistency within the service. This replication model introduces a number of interesting issues in
scheduling, fault detection, and system recovery. Section 4 considers real-time scheduling algorithms
for creating and maintaining a window-consistent backup, while Section 5 presents techniques for fault
detection and recovery for primary, backup, and communication failures. In Section 6, we present and
evaluate an implementation of the window-consistent replication service on a network of Intel-based
PCs running RT-Mach [32]. Section 7 concludes the paper by highlighting the limitations of this work
and discussing future research directions.
Related Work
2.1 Replication Models
A common approach to building fault-tolerant distributed systems is to replicate servers that fail
independently. In active (state-machine) replication schemes [6, 30], a collection of identical servers
maintains copies of the system state. Client write operations are applied atomically to all of the
replicas so that after detecting a server failure, the remaining servers can continue the service. Passive
(primary-backup) replication [2, 9], on the other hand, distinguishes one replica as the primary server,
which handles all client requests. A write operation at the primary invokes the transmission of an
update message to the backup servers. If the primary fails, a failover occurs and one of the backups
becomes the new primary.
In recent years, several fault-tolerant distributed systems have employed state-machine [7, 11, 26]
or primary-backup [4, 5, 9] replication. In general, passive replication schemes have longer recovery
delays since a backup must invoke an explicit recovery algorithm to replace a failed primary. On the
other hand, active replication typically incurs more overhead in responding to client requests since
the service must execute an agreement protocol to ensure atomic ordered delivery of messages to all
replicas. In both replication models, each client write operation generates communication within the
service to maintain agreement amongst the replicas. This artificially ties the rate of write operations
to the communication capacity in the service, limiting system throughput while ensuring consistent
data.
Past work on server replication has focused, in most cases, on improving throughput and latency for
client requests. For example, Figure 2(a) shows the basic primary-backup model, where a client write
operation at the primary P triggers a synchronous update to the backup B [4]. The service can improve
[Figure 2 diagrams of message flow among client C, primary P, and backup B.]
Figure 2: Primary-backup models. (a) Blocking. (b) Efficient blocking. (c) Non-blocking.
response time by allowing the backup B to acknowledge the client C [2], as shown in Figure 2(b).
Finally, the primary can further reduce write latency by replying to C immediately after sending an
update message to B, without waiting for an acknowledgement [8], as shown in Figure 2(c). Similar
performance optimizations apply to the state-machine replication model. Although these techniques
significantly improve average performance, they do not guarantee bounded worst-case delay, since
they do not limit communication within the service.
Synchronization of redundant servers poses additional challenges in real-time environments, where
applications operate under strict timing and dependability constraints; server replication for hard real-time
systems is under investigation in several recent experimental projects [15, 16, 33]. Synchronization
overheads, communication delay, and interaction with the external environment complicate the design
of replication protocols for real-time applications. These overheads must be quantified precisely for
the system to satisfy real-time constraints.
2.2 Consistency Semantics
A replication service can bound these overheads by relaxing the data consistency requirements in
the repository. For a large class of real-time applications, the system can recover from a server
failure even though the servers may not have maintained identical copies of the replicated state. This
facilitates alternative approaches that trade atomic or causal consistency amongst the replicas for less
expensive replication protocols. Enforcing a weaker correctness criterion has been studied extensively
for different purposes and application areas. In particular, a number of researchers have observed that
serializability is too strict a correctness criterion for real-time databases. Relaxed correctness criteria
allow higher concurrency by permitting a limited amount of inconsistency in how a transaction
views the database state [12, 17, 18, 20, 28].
Similarly, imprecise computation guarantees timely completion of an application by relaxing the
accuracy requirements of the computation [22]. This is particularly useful in applications that use
discrete samples of continuous-time variables, since these values can be approximated when there is
not sufficient time to compute an exact value. Weak consistency can also improve performance in
non-real-time applications. For instance, the quasi-copy model permits some inconsistency between
the central data and its cached copies at remote sites [1]. This gives the scheduler more flexibility in
propagating updates to the cached copies. In the same spirit, window-consistent replication allows
computations that may otherwise be disallowed by existing active or passive protocols that require
atomic updates to a collection of replicas.
3 Window-Consistent Replication
The window-consistent replication service consists of a primary and one or more backups, with the
data on the primary shadowed at each backup site. These servers store objects which change over time,
in response to client interaction with the primary. In the absence of failures, the primary satisfies all
client requests and supplies a data-consistent repository. However, if the primary crashes, a window-
consistent backup performs a failover to become the new primary. Hence, service availability hinges
on the existence of a window-consistent backup to supplant a failed primary.
3.1 System Model
Unlike the primary-backup protocols in Figure 2, the window-consistent replication model decouples
client read and write operations from communication within the service. As shown in Figure 3, the
primary object manager (OM) handles client data requests, while sending messages to the backups at
the behest of the update scheduler (US). Since read and write operations do not trigger transmissions
to the backup sites, client response time depends only on local operations at the primary. This allows
the primary to handle a high rate of client requests while independently sending update messages to
the backup sites.
Although these update transmissions must accommodate the temporal consistency requirements
of the objects, the primary cannot compromise the client application's processing demands. Hence,
the primary must match the update rate with the available processing and network bandwidth by
selectively transmitting messages to the backups. The primary executes an admission control algorithm
as part of object creation, to ensure that the US can schedule sufficient update transmissions for any
new objects. Unlike client reads and writes, object creation and deletion requires complete agreement
between the primary and all the backups in the replication service.
3.2 Consistency Semantics
The primary US schedules transmissions to the backups to ensure that each replica has a sufficiently
recent version of each object. Timestamps τ_i^P(t) and τ_i^B(t) denote the write times of the
versions of object O_i held at the primary and backup sites, respectively. At time t the primary P
has a copy of O_i written by the client application at time τ_i^P(t), while a backup B stores a,
possibly older, version originally written on P at time τ_i^B(t). Although B may have an older
version of O_i than P, the copy on B must be "recent
[Figure 3 diagram: on both primary and backup, an object manager (OM) paired with an update scheduler (US); the client issues read/write and create/delete requests to the primary, which sends update and create/delete messages to the backup and receives acknowledgements.]
Figure 3: Window-consistent primary-backup architecture
enough." If O_i has window δ_i, a window-consistent backup must believe in data that was valid on P
within the last δ_i time units.
Definition 1: At time t, a backup copy of object O_i has window-inconsistency t − t'_i, where t'_i is the
maximum time such that τ_i^B(t) = τ_i^P(t'_i). Object O_i is window-consistent if and only if
t − t'_i ≤ δ_i. A backup is window-consistent if and only if all of its objects are window-consistent.
In other words, B has a window-consistent copy of object O_i at time t if and only if τ_i^B(t) = τ_i^P(t') for some t' in [t − δ_i, t].
For example, in Figure 4, P performs several write operations on O i , on behalf of client requests,
but selectively transmits update messages to B. At time t the primary has the most recent version
of the object, written by the client at time d. The backup has a copy first recorded on the primary
at time b; the primary stopped believing this version at time c. Thus, τ_i^P(t) = d and τ_i^B(t) = b,
with t'_i = c. Since t − c ≤ δ_i, B has a window-consistent version of O_i at time t. The backup
object has inconsistency t − c, which is less than its window-consistency requirement δ_i. A small value
of t − c allows the client to operate with a more recent copy of the object if the backup must supplant
a failed primary.
The quantity t − t'_i represents an object's temporal inconsistency within the replication service, as
seen by an "omniscient" observer. Since the backup site does not always have up-to-date knowledge
of client operations, the backup has a more conversative view of temporal consistency, as discussed
in Section 5.2. The client may also require bounds on the staleness of the backup's object, relative
to the primary's copy, to construct a valid system state when a failover occurs. In particular, if the
client reads O_i at time t on P, it receives the version that it wrote t − τ_i^P(t) time units ago. On the
other hand, if B supplants a failed primary, the client would read the version that it wrote t − τ_i^B(t)
time units ago. This version is τ_i^P(t) − τ_i^B(t) time units older than that on the primary; in
Figure 4, this "client view" has inconsistency d − b.
[Figure 4 timeline: client writes to O_i on the primary at times a, b, c, and d; selected update messages are sent to B; brackets mark the backup, omniscient, and client views of the object's recency.]
Figure 4: Window-consistency semantics
Definition 2: At time t, object O_i has recovery inconsistency τ_i^P(t) − τ_i^B(t).
Two components contribute to this recovery inconsistency: client write patterns and the temporal
inconsistency within the service. Window-consistent replication bounds the latter, allowing the client
to bound recovery inconsistency based on its access patterns. For example, suppose consecutive client
writes occur at most w_i time units apart; typically, w_i is smaller than δ_i since the primary sends only
selective updates to the backup sites. The window-consistency bound t − t'_i ≤ δ_i ensures that
the backup's copy of the object was written on the primary no earlier than time t − δ_i − w_i. Since
τ_i^P(t) ≤ t, window consistency guarantees that τ_i^P(t) − τ_i^B(t) ≤ δ_i + w_i.
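The definitions above can be checked mechanically. The sketch below (our illustration, not the paper's code) computes the window-inconsistency t − t'_i from the primary's write history and the backup's version timestamp:

```python
import bisect

def window_inconsistency(writes, tau_b, t):
    """writes: sorted times of client writes to an object on the primary.
    tau_b: write time of the version currently held by the backup.
    Returns t - t', where t' is the last instant the primary still
    believed the backup's version (t itself if it is still current).
    """
    nxt = bisect.bisect_right(writes, tau_b)
    t_prime = writes[nxt] if nxt < len(writes) else t
    return t - t_prime

def is_window_consistent(writes, tau_b, t, delta):
    return window_inconsistency(writes, tau_b, t) <= delta
```

In the Figure 4 scenario, with the backup's version written at b and the next write at c, the inconsistency at time t is t − c, matching the text.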
4 Real-Time Update Scheduling
This section describes how the primary can use existing real-time task scheduling algorithms to coordinate
update transmissions to the backups. In the absence of link (performance or crash) failures [10],
we assume a bound ' on the end-to-end communication latency within the service. For example, a
real-time channel [14, 23] with the desired bound could be established between the primary and the
backups. Several other approaches to providing bounds on communication latency are discussed in [3].
If a client operation modifies O_i, the primary must send an update for the object within the next
δ_i − ℓ time units; otherwise, the backups may not receive a sufficiently recent version of O_i before the
time-window elapses. In order to bound the temporal inconsistency within the service, it suffices
that the primary send O_i to the backups at least once every (δ_i − ℓ)/2 time units. While bounding the
temporal inconsistency, the primary may send additional updates to the backups if sufficient processing
and network capacity are available; these extra transmissions increase the service's resilience to lost
update messages and the average "goodness" of the replicated data.
In addition to sending update transmissions to the backups, the primary must allow efficient
integration of new backups into the replication service. Limited processing and network capacity
necessitate a trade-off between timely integration of a new backup and keeping existing backups
window-consistent. The primary should minimize the time to integrate a new replica, especially when
there are no other window-consistent backups, since a subsequent primary crash would result in a
server failure. The primary constructs a schedule that sends each object to the backup exactly once,
and allows the primary to smoothly transition to the update transmission schedule. While several task
models can accommodate the requirements of window-consistent scheduling and backup integration,
we initially consider the periodic task model [19, 21].
4.1 Periodic Scheduling of Updates
The transmissions of updates can be cast as "tasks" that run periodically with deadlines derived from
the objects' window-consistency requirements. The primary coordinates transmissions to the backups
by scheduling an update "task" with period p_i and service time e_i for each object O_i. For window
consistency, this permits a maximum period p_i = (δ_i − ℓ)/2. The end of a period serves as both
the deadline for one invocation of the task and the arrival time for the subsequent invocation. The
scheduler always runs the ready task with the highest priority, preempting execution if a higher-priority
task arrives. For example, rate-monotonic scheduling statically assigns higher priority to tasks with
shorter periods [19, 21], while earliest-due-date scheduling favors tasks with earlier deadlines [21].
The scheduling algorithm, coupled with the object parameters e_i and δ_i, determines a schedulability
criterion based on the total processor and network utilization. The schedulability criterion governs
object admission into the replication service. The primary rejects an object registration request
(specifying e_i and δ_i) if it cannot schedule sufficient updates for the new object without jeopardizing
the window consistency of existing objects, i.e., it does not have sufficient processing and network
resources to accommodate the object's window-consistency requirements. The scheduling algorithm
maintains window consistency for all objects as long as the collection of tasks does not exceed a
certain bound on resource utilization (e.g., 0.69 for rate-monotonic and 1 for earliest-due-date) [21].
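A sketch of the resulting admission test, assuming periods are set to the bound p_i = (δ_i − ℓ)/2 and using the classical rate-monotonic utilization bound n(2^(1/n) − 1); this is an illustration, not the paper's admission-control code:

```python
def admit(objects, latency):
    """objects: list of (delta, e) pairs -- window and update service time.
    Returns True if rate-monotonic scheduling can keep every object
    window-consistent under the utilization bound n(2^(1/n) - 1)."""
    if not objects:
        return True
    utilization = 0.0
    for delta, e in objects:
        period = (delta - latency) / 2.0
        if period <= 0:
            return False  # window no larger than the communication latency
        utilization += e / period
    n = len(objects)
    return utilization <= n * (2.0 ** (1.0 / n) - 1.0)
```

For a single object the bound is 1; as n grows it approaches ln 2 ≈ 0.69, the figure cited above.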
4.2 Compressing the Periodic Schedule
While the periodic model can guarantee sufficient updates for each object, the schedule updates O i only
once per period p i , even if computation and network resources permit more frequent transmissions.
This restriction arises because the periodic model assumes that a task becomes ready to run only at
period boundaries. However, the primary can transmit the current version of an object at any time.
The scheduler can capitalize on this "readiness" of tasks to improve both resource utilization and the
window consistency on the backups by compressing the periodic schedule.
Consider two objects O_1 (with p_1 = 3, e_1 = 1) and O_2 (with p_2 = 5, e_2 = 2), as shown in
Figure 5; the unshaded boxes denote transmission of O_1, while the shaded boxes signify transmission
of O_2. The scheduler must send an update requiring 1 unit of processing time once every 3 time units
1 The size of O_i determines the time e_i required for each update transmission. In order to accommodate preemptive
scheduling and objects of various sizes, the primary can send an update message as one or more fixed-length packets.
[Figure 5 timelines: (a) the periodic schedule; (b) the compressed periodic schedule.]
Figure 5: Compression (p_1 = 3, e_1 = 1; p_2 = 5, e_2 = 2)
(unshaded box) and an update requiring 2 units of processing time once every 5 time units (shaded
box). The schedule repeats after each major cycle of length 15. Each time unit corresponds to a tick
which is the granularity of resource allocation for processing and transmission of a packet. For this
example, both the rate-monotonic and earliest-due-date algorithms generate the schedule shown in
Figure
5(a).
While each update is sent as required in the major cycle of length 15, the schedule has 4 units of
slack time. The replication service can capitalize on this slack time to improve the average temporal
consistency of the backup objects. In particular, the periodic schedule in Figure 5(a) can provide the
order of task executions without restricting the time the tasks become active. If no tasks are ready to
run, the scheduler can advance to the earliest pending task and activate that task by advancing the
logical time to the start of the next period for that object. With the compressed schedule the primary
still transmits an update for each O i at least once per period p i but can send more frequent update
messages when time allows. As shown in Figure 5(b), compressing the slack time allows the schedule
to start over at time 11. In the worst case, the compressed schedule degrades to the periodic schedule
with the associated guarantees.
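The slack available for compression in the example above can be computed directly (a small sketch; `math.lcm` requires Python 3.9+):

```python
from math import lcm

def slack_in_major_cycle(tasks):
    """tasks: list of (e_i, p_i) pairs. Returns the major cycle length
    (lcm of the periods), the number of busy ticks spent on update
    transmissions, and the slack available for schedule compression."""
    cycle = lcm(*(p for _, p in tasks))
    busy = sum((cycle // p) * e for e, p in tasks)
    return cycle, busy, cycle - busy

# Objects of Figure 5: (e=1, p=3) and (e=2, p=5)
print(slack_in_major_cycle([(1, 3), (2, 5)]))  # (15, 11, 4)
```

The 4 idle ticks are exactly the slack that lets the compressed schedule of Figure 5(b) start over at time 11 instead of 15.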
4.3 Integrating a New Backup
To minimize the time the service operates without a window-consistent backup, the primary P needs
an efficient mechanism to integrate a new or invalid backup B. P must send the new backup B a copy
of each object and then transition to the normal periodic schedule, as shown in Figure 6. Although B
may not have window-consistent objects during the execution of the integration schedule, each object
must become consistent and remain consistent until its first update in the normal periodic schedule.
As a result, B must receive a copy of O i within the "period" p i before the periodic schedule
begins; this ensures that B can afford to wait until the next p i interval to start receiving periodic
update messages for O i . In order to integrate the new backup, then, the primary must execute an
integration schedule that would allow it to transition to the periodic schedule while maintaining window
consistency. Referring to Figure 6, a window-consistent transition requires D_j^prior + D_j^post ≤ δ_j,
where D_j^prior is the time elapsed from the last transmission of O_j to the end of the integration schedule,
while D_j^post is the time from the start of the periodic schedule until the first transmission of O_j. This
Figure 6: Integrating a new backup repository.
ensures window consistency for each object, even across the schedule transition. Since the periodic
task model provides D_j^post ≤ p_j, it suffices to ensure that D_j^prior ≤ p_j.
A simple schedule for integration is to send objects to the new backup using the normal periodic
schedule already being used for update transmissions to the existing replicas. This incurs a worst-case
delay of 2 to integrate the new backup into the service. However, if the service has no
window-consistent backup sites, the primary should minimize the time required to integrate a new
replica. In particular, an efficient integration schedule should transmit each object exactly once before
transitioning to the normal periodic schedule.
The primary may adapt the normal periodic schedule into an efficient integration schedule by
removing duplicate object transmissions. In particular, the primary can transmit the objects in order
of their last update transmissions before the end of a major cycle in the normal schedule. For example,
for the schedule shown in Figure 5(a), the integration schedule is [O_2, O_1], because the last transmission
for O_2 (O_1) is at time 10 (12). A transition from the integration schedule to the
normal schedule sustains window consistency on the newly integrated backup since the normal schedule
guarantees window consistency across major cycles. Since the integration schedule is derived from the
periodic schedule, it follows that D_j^prior ≤ p_j, and hence D_j^prior + D_j^post ≤ δ_j.
The normal schedule order can be determined when objects are created or during the first major
cycle of the normal schedule. Since the schedule transmits each object only once, the integration
delay is the sum of the service times e_i over all N objects, where N is the number of registered
objects. Although this approach is efficient for
static object sets, dynamic creation and deletion of objects introduces more complexity. Since the
transmission order in the normal schedule depends on the object set, the primary must recompute the
integration schedule whenever a new object enters the service. The cost of constructing an integration
schedule, especially for dynamic object sets, can be reduced by sending the objects to B in reverse
period order, such that the objects with larger periods are sent before those with smaller periods.
For object O j , this ensures that only objects with smaller or equivalent periods can follow O j in
the integration schedule; these same objects can precede O j in the periodic schedule. This guarantees
that the integration schedule transmits O_j no more than p_j units before the start of the periodic
schedule, ensuring a window-consistent transition. For example, in Figure 6, D_i^prior ≤ p_i. In the
Figure 7: Update protocols. (The primary selects object i and sends an update (i, O, τ, s) to B; B
accepts the update if the transmission time s exceeds t_i^xmit and replies with an acknowledgement
(i, τ, s), from which P updates τ_i^ack.)
periodic schedule, objects O_i with p_i ≤ p_j are transmitted at least once within time D_j^post but exactly
once within time D_j^prior; it follows that D_j^prior + D_j^post ≤ δ_j.
After object creations or deletions, the
primary can construct the new integration schedule by sorting the new set of periods. The primary
minimizes the time it operates without a window-consistent backup by transmitting each object exactly
once before transitioning to the normal periodic schedule.
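The reverse-period-order construction can be sketched as follows (function name is illustrative):

```python
def integration_schedule(objects):
    """objects: dict mapping object name -> period p_i. Transmit each
    object exactly once, larger periods first, so that any object O_j is
    followed only by objects with smaller or equal periods (a sketch of
    the reverse-period-order rule of Section 4.3)."""
    return sorted(objects, key=lambda name: objects[name], reverse=True)

# For the two objects of Figure 5 (p_1 = 3, p_2 = 5):
print(integration_schedule({"O1": 3, "O2": 5}))  # ['O2', 'O1']
```

After object creations or deletions, rebuilding this schedule is just a re-sort of the period set, matching the text above.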
5 Fault Detection and Recovery
Although real-time scheduling of update messages can maintain window-consistent replicas, processor
and communication failures potentially disrupt system operation. We assume that servers may suffer
crash failures and the communication subsystem may suffer omission or performance failures; when
a site fails, the remaining replicas must recover in a timely manner to continue the data-repository
service. The primary attempts to minimize the time it operates without a window-consistent backup,
since a subsequent primary crash would cause a service failure. Similarly, the backup tries to detect a
primary crash and initiate failover before any backup objects become window-inconsistent. Although
the primary and backup cannot have complete knowledge of the global system state, the message
exchange between servers provides a measure of recent service activity.
5.1 Update Protocols
Figure
7 shows how the primary and backup sites exchange object data and estimate global system
state. We assume that the servers communicate only by exchanging messages. Since these messages
include temporal information, P and B cannot effectively reason about each other unless server clocks
are synchronized within a known maximum bound. A clock synchronization algorithm can use the
transmit times for the update and acknowledgement messages to bound clock skew in the service.
Using the update protocols, P and B each approximate global state by maintaining the most recent
information received from the other site.
Before transmitting an update message at time t, the primary records the version timestamp τ_i^xmit
for the selected object O_i. Since τ_i^xmit ≥ τ_i^B, this information gives P an optimistic view of the
backup's window consistency. The primary's message to the backup contains the object data, along
with the version timestamp and the transmission time. B uses the transmission time to detect out-of-
order message arrivals by maintaining t_i^xmit, the time of the most recent transmission of O_i that has
been successfully received; the sites store monotonically non-decreasing version timestamps, without
requiring reliable or in-order message delivery in the service. Upon receiving a newer transmission
of O_i, the backup updates the object's data, the version timestamp τ_i^B, and t_i^xmit. As discussed in
Section 5.2, the backup uses t_i^xmit to reason about its own window consistency.
To diagnose a crashed primary, B also maintains t_last, the transmission time of the last message
received from P regarding any object; that is, t_last = max_i {t_i^xmit}. Similarly, P tracks the transmission
times of B's messages to diagnose possible crash failures. Hence, the backup's acknowledgement
message to P includes the transmission time t, as well as τ_i^B, the most recent version timestamp for
O_i on B. Using this information, the primary determines τ_i^ack, the most recent version of O_i that B
has successfully acknowledged. Since τ_i^ack ≤ τ_i^B, this variable gives P a pessimistic measure of the
backup's window consistency; as discussed in Section 5.3, the primary uses τ_i^ack to select appropriate
policies for scheduling update transmissions to the backup.
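The backup-side bookkeeping of Figure 7 can be sketched as follows (a hypothetical `BackupState` class; the field names are illustrative, not from the paper):

```python
class BackupState:
    """Accept an update (i, data, tau, s) only if its transmission time s
    is newer than the last accepted one for that object, so out-of-order
    or duplicate messages are ignored without requiring reliable or
    in-order delivery."""
    def __init__(self):
        self.data, self.tau, self.t_xmit = {}, {}, {}
        self.t_last = float("-inf")  # newest transmission time seen from P

    def receive(self, i, data, tau, s):
        self.t_last = max(self.t_last, s)
        if s > self.t_xmit.get(i, float("-inf")):
            self.data[i], self.tau[i], self.t_xmit[i] = data, tau, s

b = BackupState()
b.receive("O1", "v2", tau=20, s=21)  # newer update arrives first
b.receive("O1", "v1", tau=10, s=11)  # stale, delayed message: dropped
print(b.tau["O1"], b.t_xmit["O1"])   # 20 21
```

Because version timestamps only advance, the backup's stored τ_i^B remains monotonically non-decreasing, as the text requires.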
5.2 Backup Recovery From Primary Failures
A backup site must estimate its own window consistency and the status of the primary to successfully
supplant a crashed primary. While B may be unaware of recent client interaction with P for
each object, B does know the time t_i^xmit of the last update transmission for object O_i. Although P
may continue to hold version τ_i^xmit even after transmitting the update message, B conservatively
estimates that the client wrote a new version of O_i just after P transmitted the object at time t_i^xmit.
In particular,
Definition 3: At time t, the backup copy of object O_i has estimated inconsistency t − t_i^xmit; the
backup knows that O_i is window-consistent if t − t_i^xmit ≤ δ_i.
Figure
4 shows an example of this "backup view" of window consistency.
Using this consistency metric, the backup must balance the possibility of becoming window-inconsistent
with the likelihood of falsely diagnosing a primary crash. If B believes that all of its
objects are still window-consistent, B need not trigger a failover until further delay would endanger
the consistency of a backup object; in particular, the backup conservatively estimates that its copy of
O_i could become window-inconsistent by time t_i^xmit + δ_i in the absence of further update messages
from P . However, to reduce the likelihood of false failure detection, failover should only occur if B
has not received any messages from P for some minimum time β.
In this adaptive failure detection mechanism, B diagnoses a primary crash at time
t_crash = min_i {t_i^xmit + δ_i} if and only if t_crash − t_last ≥ β. After failover, the new primary site invokes the client application
and begins interacting with the external environment. For a period of time, the new P operates with
some partially inconsistent data but gradually constructs a consistent system state from these old
values and new sensor readings. The new P later integrates a fresh backup to enhance future service
availability.
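This failover rule can be sketched as follows (an illustrative helper, assuming the backup triggers at the earliest instant any object could become inconsistent, and only if at least β has elapsed since the last message from the primary):

```python
def failover_time(t_xmit, delta, t_last, beta):
    """t_xmit: per-object time of the last received update transmission;
    delta: per-object window size. The earliest candidate failover instant
    is min_i (t_xmit[i] + delta[i]); failover is triggered only if that
    instant is also at least beta after the last message from P."""
    t_crash = min(t_xmit[i] + delta[i] for i in t_xmit)
    return t_crash if t_crash - t_last >= beta else None

# Two objects; no message received from P since t_last = 7 (times in ticks)
print(failover_time({"O1": 5, "O2": 7}, {"O1": 6, "O2": 6}, t_last=7, beta=3))
```

With t_last = 7 and β = 3, the backup fails over at t_crash = 11; raising β (or receiving a late message) suppresses the diagnosis, trading failover latency for fewer false detections.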
Since B diagnoses a primary crash through missed update messages, lost or delayed messages could
still trigger false failure detection, resulting in multiple active primary sites. When the system has
multiple backups, the replicas can vote to select a single, valid primary. However, when the service
has only two sites, communication failures can cause each site to assume the other has failed. In
this situation, a third-party "witness" [27] can select the primary site. This witness does not act as a
primary or backup server, but casts the deciding vote in failure diagnosis. In a real-time control system,
the actuator devices could implicitly serve as this witness; if a new server starts issuing commands to
the actuators, the devices could ignore subsequent instructions from the previous primary site.
5.3 Primary Recovery From Backup Failures
Service availability also depends on timely recovery from backup failures. Since the data-repository
service continues whenever a valid primary exists, the primary can temporarily tolerate backup crashes
or communication failures without endangering the client application. Ultimately, though, P should
minimize the portion of time it operates without a window-consistent backup, since a subsequent
primary crash would cause a service failure. The primary should diagnose possible backup crashes and
efficiently integrate new backup sites. If P believes that an operational backup has become window-
inconsistent, due to lost update messages or transient overload conditions, the primary should quickly
refresh the inconsistent objects.
As in Section 5.2, timeout mechanisms can detect possible server failures. The primary assumes
that the backup has crashed if P has not received any acknowledgement messages in the last α time
units (i.e., t − t_last ≥ α). After detecting a backup crash, P can integrate a fresh backup site into
the system while continuing to satisfy client read and write requests. If P mistakenly diagnoses a
backup crash, the system must operate with one less replica while the primary integrates a new backup;
this new backup does not become window-consistent until the integration schedule completes, as
described in Section 4.3. However, if the backup has actually failed, a large timeout value increases
the failure diagnosis latency, which also increases the time the system operates without sufficient
backup sites. Hence, P must carefully select α to maximize the backups' chance of recovering from a
subsequent primary failure.
Even if the backup site does not crash, delayed or lost update messages can compromise the
window consistency of backup objects, making B ineligible to replace a crashed primary. Using τ_i^ack
and τ_i^xmit, P can estimate the consistency of backup objects and select the appropriate policy for
scheduling update transmissions. The primary may choose to reintegrate an inconsistent backup,
even when t − t_last < α, rather than wait for a later update message to restore the objects' window
(a) Average maximum distance (b) Probability(backup inconsistent)
Figure 8: Window consistency: The graphs show the performance of the service as a function of
the client write rate, message loss, and schedule compression. Although object inconsistency increases
with message loss, compressing the periodic schedule reduces the effects of communication failures.
Inconsistency increases as the client writes more frequently, since the primary changes its object soon
after transmitting an update message to the backup.
consistency. Suppose the primary thinks that B's copy of O_i is window-inconsistent. Under periodic
update scheduling, P may not send another update message for this object until some time 2p_i
later. If this object has a large window δ_i, the primary can reestablish the backup's window consistency
more quickly by executing the integration schedule, which requires time Σ_j e_j, where e_j is the service
time for object O_j, as described in Section 4.1.
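The primary's choice between waiting for the periodic schedule and reintegrating can be sketched as a simple cost comparison (an illustrative helper, not from the paper):

```python
def refresh_strategy(p_i, service_times):
    """Choose how to restore a suspected-inconsistent backup object O_i:
    wait for the periodic schedule (worst case about 2 * p_i) or run the
    integration schedule, whose cost is the sum of all service times e_j."""
    integrate_cost = sum(service_times)
    return "integrate" if integrate_cost < 2 * p_i else "wait"

print(refresh_strategy(p_i=50, service_times=[1, 2, 3]))  # 'integrate'
print(refresh_strategy(p_i=2,  service_times=[1, 2, 3]))  # 'wait'
```

The comparison captures the trade-off in the text: objects with large windows (and hence large periods) are refreshed faster by reintegration, while tight-window objects are better served by the next periodic update.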
Still, the primary cannot accurately determine if the backup object O i is inconsistent, since lost
or delayed acknowledgement messages can result in an overly pessimistic value for - ack
. The primary
should not be overly aggressive in diagnosing inconsistent backup objects, since reintegration temporarily
prohibits the backup from replacing a failed primary. Instead, P should ideally "retransmit"
the offending object, without violating the window consistency of the other objects in the service. For
example, P can schedule a special "retransmission" window for transmitting objects that have not
received acknowledgement messages for past updates; when this "retransmission object" is selected for
service, P transmits an update message for one of the existing objects, based on the values of
τ_i^ack. This improves the likelihood of having window-consistent backup sites, even in the presence of
communication failures.
6 Implementation and Evaluation
6.1 Prototype Implementation
We have developed a prototype implementation of the window-consistent replication service to demonstrate
and evaluate the proposed service model. The implementation consists of a primary and a
backup server, with the client application running on the primary node as shown in Figure 3. The
primary implements rate-monotonic scheduling of update transmissions, with an option to enable
schedule compression. Tick scheduling allocates the processor for different activities, such as handling
client requests, sending update messages, and processing acknowledgements from the backup. At the
start of each tick, the primary transmits an update message to the backup for one of the objects, as
determined by the scheduling algorithm. Any client read/write requests and update acknowledgements
are processed next, with priority given to client requests.
Each server is currently an Intel-based PC running the Real-Time Mach [25, 32] operating system 2 .
The sites communicate over an Ethernet through UDP datagrams using the Socket++ library [31], with
extensions to the UNIX select call for priority-based access to the active sockets. At initialization,
sockets are registered at the appropriate priority such that the socket for receiving client requests
has higher priority than that for receiving update acknowledgements from the backup. A tick
period of 100 ms was chosen to minimize the intrusion from other runnable system processes 3 . To
further minimize interference, experiments were conducted with lightly-loaded machines on the same
Ethernet segment; we did not observe any significant fluctuations in network or processor load during
the experiments.
The primary and backup sites maintain in-memory logs of events at run-time to efficiently collect
performance data with minimal intrusion. Estimates of the clock skew between the primary and the
backup, derived from actual measurements of round-trip latency, are used to adjust the occurrence
times of events to calculate the distance between objects on the primary and backup sites. The
prototype evaluation considers three main consistency metrics representing window consistency and
the backup and client views. These performability metrics are influenced by several parameters,
including client write rate, communication failures, and schedule compression.
The experiments vary the client write rate by changing the time w i between successive client writes
to an object. We inject communication failures by randomly dropping update messages; this captures
the effect of transient network load as well as lost update acknowledgements. The invariants in our
evaluation are the tick period (100 ms), the objects' window size δ_i, and the number of
objects N; given the tick period and δ_i, N is determined by the schedulability criterion of the
rate-monotonic scheduling algorithm. All objects have the same update transmission time of one tick,
with the object size chosen such that the time to process and transmit the object is reasonably small
2 Earlier experiments on Sun workstations running Solaris 1.1 show similar results [24].
3 The 100 ms tick period has the same granularity as the process scheduling quantum to limit the interference from
other jobs running on the machine. However, smaller tick periods are desirable in order to allow objects to specify tighter
windows (the window size is expressed in number of ticks) and respond to client requests in a timely manner.
compared to the tick size; the extra time within each tick period is used to process client requests and
update acknowledgements. Experiments ran for 45 minutes for each data point.
6.2 Omniscient View (Window Consistency)
The window-consistency metric captures the actual temporal inconsistency between the primary
and the backup sites, and serves as a reference point for the performance of the replication service.
Figure
8(a) shows the average maximum distance between the primary and the backup as a function
of the probability of message loss for three different client write periods, with and without schedule
compression. This measures the inconsistency of each backup object just before receiving an update,
averaged over all versions and all objects, reflecting the "goodness" of the replicated data. Figure 8(b)
shows the probability of an inconsistent backup as a function of message loss; this "fault-tolerance" metric
measures the likelihood that the backup has one or more inconsistent objects. In these experiments,
the client writes each object once every tick (w = 100 msec), once every 3 ticks (w = 300 msec), or once
every 7 ticks (w = 700 msec).
The probability of message loss varies from 0% to 10%; experiments with higher message loss rates
reveal similar trends. Message loss increases the distance between the primary and the backup, as
well as the likelihood of an inconsistent backup. However, the influence of message loss is not as
pronounced due to conservative object admission in the current implementation. This occurs because,
on average, the periodic model schedules updates twice as often as necessary, in order to guarantee the
required worst-case spacing between update transmissions. Message loss should have more influence
in other scheduling models which permit higher resource utilization, as discussed in Section 7. Higher
client write rates also tend to increase the backup's inconsistency; as the client writes more frequently,
the primary's copy of the object changes soon after sending an update message, resulting in staler
data at the backup site.
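The average-maximum-distance metric of Figure 8(a) can be approximated from an event log of update arrival times (a simplified sketch that uses inter-arrival gaps as the inconsistency of each object just before an update arrives):

```python
def avg_max_distance(arrivals):
    """arrivals: dict mapping each object to the sorted list of times at
    which updates for it were accepted at the backup. The gap between
    consecutive arrivals approximates the object's inconsistency just
    before each update; average the gaps over all versions and objects."""
    gaps = []
    for times in arrivals.values():
        gaps += [b - a for a, b in zip(times, times[1:])]
    return sum(gaps) / len(gaps)

# Example update arrival times, in ticks (hypothetical log)
print(avg_max_distance({"O1": [0, 3, 6, 9], "O2": [1, 5, 11]}))  # 3.8
```

Lost messages stretch these gaps, which is why the measured distance grows with the message-loss probability, while compression shortens them by inserting extra transmissions.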
Schedule compression is very effective in improving both performance variables. The average
maximum distance between the primary and backup under no message loss (the y-intercept) reduces
by about 30% for high client rates in Figure 8(a); similar reductions are seen for all message loss
probabilities. This occurs because schedule compression successfully utilizes idle ticks in the schedule
generated by the rate-monotonic scheduling algorithm; the utilization thus increases to 100% and
the primary sends approximately 30% more object updates to the backup. Compression plays a
relatively more important role in reducing the likelihood of an inconsistent backup, as can be seen
from
Figure
8(b). Also, compression reduces the impact of communication failures, since the extra
update transmissions effectively mask lost messages.
6.3 Backup View (Estimated Consistency)
Although Figure 8 provides a system-wide view of window consistency, the backup site has limited
knowledge of the primary state. The backup's view (Definition 3) is a conservative estimate of
the actual window consistency, as shown in Figure 9. The backup site uses this metric to evaluate its
(a) Average maximum distance (b) Probability(backup inconsistent)
Figure 9: Backup view: The plots show system performance from the backup's conservative
viewpoint, as a function of the client write rate, message loss, and schedule compression. As in
Figure 8, temporal consistency improves under schedule compression but worsens under increasing
message loss. The backup's view is impervious to the client write rate.
own window consistency to detect a crashed primary and effect a failover. As in Figure 8, message loss
increases the average maximum distance (Figure 9(a)) and the likelihood of an inconsistent backup
(Figure 9(b)). Schedule compression also has similar benefits for the backup's estimate of window
consistency.
However, unlike Figure 8, the client write rate does not influence the backup's view of its window
consistency. The backup (pessimistically) assumes that the client writes an object on the primary
immediately after the primary transmits an update message for that object to the backup. For this
reason, the backup's estimate of the average maximum distance between the primary and the backup
is always worse than that derived from the omniscient view. It follows that this estimate is more
accurate for high client write rates, as can be seen by comparing Figures 8(a) and 9(a); for high
client rates relative to the window, the two curves are virtually identical. The window-consistent
replication model is designed to operate with high client write rates, relative to communication within
the service, so the backup typically has an accurate view of its temporal consistency.
6.4 Client View (Recovery Consistency)
The client view (the distance between τ_i^P(t) and τ_i^B(t) at read time) measures the inconsistency between the primary and backup versions
on object reads; better recovery consistency provides a more accurate system state after failover. Since
the client can read at an arbitrary time, Figure 10 shows the time average of recovery inconsistency,
averaged across all objects, with and without compression. We attribute the minor fluctuations in the
graphs to noise in the measurements.
The distance metric is not sensitive to the client write rate, since frequent client writes increase
Figure 10: Client view: This graph presents the time average of recovery inconsistency,
as a function of the client write rate, message loss, and schedule compression. Compressing the update
schedule improves consistency by generating more frequent update transmissions, while message loss
worsens read consistency. The metric is largely independent of the client write rate.
both τ_i^P and τ_i^B: when the client writes more often, the primary copy changes frequently (i.e.,
τ_i^P(t) is close to t), but the backup also receives more recent versions of the data (i.e., τ_i^B(t) is close to
τ_i^P(t)).
Moderate message loss does not have a significant influence on read inconsistency, especially
under schedule compression. As expected, schedule compression improves the read inconsistency seen
by the client significantly (by about 30%). It is, therefore, an effective technique for improving the "goodness"
of the replicated data.
7 Conclusion and Future Work
Window consistency offers a framework for designing replication protocols with predictable timing
behavior. By decoupling communication within the service from the handling of client requests, a
replication protocol can handle a higher rate of read and write operations and provide more timely
response to clients. Scheduling the selective communication within the service provides bounds on
the degree of inconsistency between servers. While our prototype implementation has successfully
demonstrated the utility of the window-consistent replication model, more extensive evaluation is
needed to validate the ideas identified in this paper. We have recently added support for fault-
detection, failover, and integration of new backups. Further experiments on the current platform will
ascertain the usefulness of processor capacity reserves [25] and other RT-Mach features in implementing
the window-consistent replication service. The present work extends into several fruitful areas of future research.
Object admission/scheduling: We are studying techniques to maximize the number of admitted objects
and improve objects' window consistency by optimizing object admission and update scheduling. For
the window-consistent replication service, the periodic task model is overly conservative in accepting
object registration requests; that is, it may either limit the number of objects that are accepted or
it may accept only those objects with relatively large windows. This occurs because, on average, the
periodic model schedules updates twice as often as necessary, in order to guarantee the required worst-case
spacing between update transmissions. We are exploring other scheduling algorithms, such as the
distance-constrained task model [13] which assigns task priorities based on separation constraints, in
terms of their implementation complexity and ability to accommodate dynamic creation/deletion of
objects.
We are also considering techniques to maximize the "goodness" of the replicated data. As one
possible approach, we are exploring ways to incorporate the client write rate in object admission
and scheduling. An alternate approach is to optimize the object window size itself by proportionally
shrinking object windows such that the system remains schedulable; this should improve each object's
worst-case temporal inconsistency. The selection of object window sizes can be cast as an instance of
the linear programming optimization problem. Schedule compression can still be used to improve the
utilization of the remaining available resources.
Inter-object window consistency: We are extending our window-consistent replication model to incorporate
temporal consistency constraints between objects. Our goal is to bound consistency in a
replicated set of related objects; new algorithms may be necessary for real-time update scheduling of
such object sets. This is related to the problem of ensuring temporally consistent objects in a real-time
database system; however, our goal is to bound consistency in a replicated set of related objects.
Alternative replication models: Although the current prototype implements a primary-backup architecture
with a single backup site, we are studying the additional issues involved in supporting multiple
backups. In addition, we are also exploring window consistency in the state-machine replication.
This would enable us to investigate the applicability of window consistency to alternative replication
models.
Acknowledgements
The authors wish to thank Sreekanth Brahmamdam and Hock-Siong Ang for their help in running
experiments and post-processing the collected data, and the reviewers for their helpful comments.
--R
"Data caching issues in an information retrieval system,"
"A principle for resilient sharing of distributed resources,"
"Real-time communication in packet-switched networks,"
"A NonStop kernel,"
"A highly available network file server,"
"Reliable communication in the presence of failures,"
"The process group approach to reliable distributed computing,"
"Tradeoffs in implementing primary-backup protocols,"
"Tradeoffs in implementing primary-backup protocols,"
"Understanding fault tolerant distributed systems,"
"Fault-tolerance in the advanced automation system,"
"Partial computation in real-time database systems,"
"Scheduling distance-constrained real-time tasks,"
"Real-time communication in multi-hop networks,"
"Dis- tributed fault-tolerant real-time systems: The MARS approach,"
"TTP - a protocol for fault-tolerant real-time systems,"
"Triggered real time databases with consistency constraints,"
"Ssp: a semantics-based protocol for real-time data access,"
"The rate monotonic scheduling algorithm: Exact characterization and average case behavior,"
"A model of hard real-time transaction systems,"
"Scheduling algorithms for multiprogramming in a hard real-time environment,"
"Imprecise computations,"
"Structuring communication software for quality-of- service guarantees,"
"Design and evaluation of a window-consistent replication service,"
"Processor capacity reserves: Operating system support for multimedia applications,"
"Consul: A communication substrate for fault-tolerant distributed programs,"
"Using volatile witnesses to extend the applicability of available copy protocols,"
"Replica control in distributed systems: An asynchronous approach,"
"Window-consistent replication for real-time applications,"
"Implementing fault-tolerant services using the state machine approach: A tutorial,"
"Real-Time Toward a predictable real-time system,"
"The extra performance architecture (XPA),"
--TR
--CTR
Hengming Zou , Farnam Jahanian, A Real-Time Primary-Backup Replication Service, IEEE Transactions on Parallel and Distributed Systems, v.10 n.6, p.533-548, June 1999 | temporal consistency;real-time systems;replication protocols;scheduling;fault tolerance |
266408 | Decomposition of timed decision tables and its use in presynthesis optimizations. | Presynthesis optimizations transform a behavioral HDL description into an optimized HDL description that results in improved synthesis results. We introduce the decomposition of timed decision tables (TDT), a tabular model of system behavior. The TDT decomposition is based on the kernel extraction algorithm. By experimenting using named benchmarks, we demonstrate how TDT decomposition can be used in presynthesis optimizations. | Introduction
Presynthesis optimizations have been introduced in [1] as source-level transformations that produce
"better" HDL descriptions. For instance, these transformations are used to reduce control-flow
redundancies and make synthesis result relatively insensitive to the HDL coding-style. They are
also used to reduce resource requirements in the synthesized circuits by increasing component
sharing at the behavior-level [2].
The TDT representation consists of a main table holding a set of rules, which is similar to the
specification in a FSMD [3], an auxiliary table which specifies concurrencies, data dependencies,
and serialization relations among data-path computations, or actions, and a delay table which
specifies the execution delay of each action.
The rule section of the model is based on the notions of condition and action. A condition may
be the presence of an input, or an input value, or the outcome of a test condition. A conjunction
of several conditions defines a rule. A decision table is a collection of rules that map condition
conjunctions into sets of actions. Actions include logic, arithmetic, input-output(IO), and message-passing
operations. We associate an execution delay with each action. Actions are grouped into
action sets, or compound actions. With each action set, we associate a concurrency type of serial,
parallel, or data-parallel [4].
  Condition Stub | Condition Entries
  Action Stub    | Action Entries

Figure 1: Basic structure of TDTs.
The structure of the rule section is shown in Figure 1. It consists of four quadrants. Condition
stub is the set of conditions used in building the TDT. Condition entries indicate possible conjunctions
of conditions as rules. Action stub is the list of actions that may apply to a certain rule.
Action entries indicate the mapping from rules to actions. A rule is a column in the entry part of
the table, which consists of two halves, one in the condition entry quadrant, called decision part of
the rule, one in the action entry quadrant, called action part of the rule.
In addition to the set of rules specified in a main table (the rule section), the TDT representation
includes two auxiliary tables to hold additional information. The information specified in the auxiliary tables includes the execution delay of each action and the serialization, data dependency, and concurrency type between each pair of actions.
Example 1.1. Consider the following TDT:
[auxiliary table with one row and one column per action a1,1, a1,2, a2,1, a2,2, a3,1, a3,2, whose entries are drawn from 's', 'd', 'p' and the ordering arrows, together with a delay table giving the execution delay of each action]
Suppose actions a1,1 and a1,2 are selected for execution. Since action a1,2 is specified as a successor of a1,1, action a1,1 is executed with a one-cycle delay, followed by the execution of a1,2. Symbols 'd' and 'p' indicate actions that are data-parallel (i.e., parallel modulo data dependencies) and parallel, respectively. An arrow '%' at row a1,1 and column a1,2 indicates that a1,1 appears before a1,2; in contrast, an arrow '.' at row a1,1 and column a1,2 would indicate that a1,1 appears after a1,2. 2
The execution of a TDT consists of two steps: (1) select a rule to apply, (2) execute the action
sets that the selected rule maps to. More than two action sets may be selected for execution. The order in which to execute those action sets is determined by the concurrency types, serialization relations, and data dependencies specified among those action sets [4], indicated by 's', 'd', and 'p'
in the table above.
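To make the rule-as-column reading concrete, here is a small sketch (all names are invented, not from the paper) that stores the two entry quadrants as matrices and returns the action part of the first rule whose decision part matches the observed condition outcomes:

```python
def select_rule(cond_entries, action_entries, outcomes):
    """Return the action rows mapped to by the first rule (column)
    whose condition entries match the observed outcomes.
    cond_entries[i][j] is 'Y', 'N', or '-' (don't care) for
    condition i in rule j; action_entries[i][j] is 'X' when
    action i belongs to rule j."""
    n_rules = len(cond_entries[0])
    for j in range(n_rules):
        if all(e == '-' or (e == 'Y') == outcomes[i]
               for i, e in enumerate(row[j] for row in cond_entries)):
            return [i for i, row in enumerate(action_entries) if row[j] == 'X']
    return []

# Two conditions, three rules, three actions.
cond = [['Y', 'Y', 'N'],
        ['Y', 'N', '-']]
act  = [['X', '',  ''],
        ['',  'X', ''],
        ['',  '',  'X']]
print(select_rule(cond, act, [True, False]))  # -> [1]
```

Rule selection is thus a column scan; executing the selected actions would then consult the auxiliary tables for ordering and concurrency.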
An action in a TDT may be another TDT. This is referred to as a call to the TDT contained
as an action in the other TDT, which corresponds to the hierarchy specified in HDL descriptions.
Consider the following example.
Example 1.2. Consider the following calling hierarchy:
[calling hierarchy: the top-level table tests c1 and maps one rule to action a1 and the other to a call to TDT2; TDT2 tests c2 and maps its rules to actions a2 and a3]
Here, when condition c1 selects the second rule, the action that needs to be invoked is the call to TDT2, which forces evaluation of condition c2, resulting in action a2 or a3 being executed. No additional information such as concurrency types needs to be specified between action a1 and TDT2 since they lie on different control paths. For the same reason, we omit the auxiliary table for TDT2. 2
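Since an action may itself be a TDT, evaluation is naturally recursive. A minimal sketch (the data layout and names are mine, not the paper's):

```python
def eval_tdt(tdt, env):
    """tdt = (condition names, rule map); the rule map sends a tuple
    of condition outcomes to a list of actions.  An action is either
    a plain string or a nested table, evaluated recursively (a call
    to a sub-TDT)."""
    conds, rules = tdt
    key = tuple(env[c] for c in conds)
    executed = []
    for a in rules[key]:
        if isinstance(a, str):
            executed.append(a)
        else:
            executed.extend(eval_tdt(a, env))  # call to a sub-TDT
    return executed

tdt2 = (('c2',), {('Y',): ['a2'], ('N',): ['a3']})
tdt1 = (('c1',), {('Y',): ['a1'], ('N',): [tdt2]})
print(eval_tdt(tdt1, {'c1': 'N', 'c2': 'Y'}))  # -> ['a2']
```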
Procedure/function calling hierarchy in input HDL descriptions results in a corresponding TDT
hierarchy. TDTs in a calling hierarchy are typically merged to increase the scope of presynthesis
optimizations. In the process of presynthesis optimizations, merging flattens the calling hierarchy
specified in original HDL descriptions. In this paper we present TDT decomposition which is
the reverse of the merging process. By first flattening the calling hierarchy and then extracting
the commonalities, we may find a more efficient behavior representation which leads to improved
synthesis results. This allows us to restructure HDL code. This code structuring is similar to the
heuristic optimizations in multilevel logic synthesis. In this paper, we introduce code-restructuring
in addition to other presynthesis optimization techniques such as column/row reduction and action
sharing that have been presented earlier [1, 4, 2].
The rest of this paper is organized as follows. In the next section, we introduce the notion
of TDT decomposition and relate it to the problem of kernel extraction in an algebraic form of
TDTs. Section 3 presents an algorithm for TDT decomposition based on kernel extraction. Section 4 gives the implementation details of the algorithm and presents the experimental results. Finally, we conclude in Section 5 and present our future plans.
2 TDT Decomposition

TDT decomposition is the process of replacing a flattened TDT with a hierarchical TDT that
represents an equivalent behavior. As we mentioned earlier, decomposition is the reverse process of
merging and together with merging, it allows us to produce HDL descriptions that are optimized for
subsequent synthesis tasks and are relatively insensitive to coding styles. Since this decomposition
uses procedure calling abstraction, arbitrary partitions of the table (condition/action) matrices are
not useful. To understand the TDT structural requirements consider the example below.
Example 2.1. Consider the following TDT.
Notice the common patterns in condition rows in c 6 and c 7 , and action rows in a 6 , a 7 , and a 8 . 2
Above in Example 2.1 is a flattened TDT. The first three columns have identical condition
entries in c1 and c2, and identical action entries in a1 and a2. These columns differ in rows corresponding to conditions {c4, c5} and actions {a3, a4, a5}, which appear only in the first three columns. This may result, for example, from merging a sub-TDT consisting of only conditions {c4, c5} and actions {a3, a4, a5}.
Note the common pattern in the flattened TDT may result from merging a procedure which is
called twice from the main program. Or it may simply correspond to commonality in the original
HDL description. Whatever the cause, we can extract the common part and make it into a separate
sub-TDT and execute it as an action from the main TDT.
Figure
2 shows a hierarchy of TDTs which specify the same behavior as the TDT in Example
2.1 under conditions explained later. The equivalence can be verified by merging the hierarchy
of TDTs [4]. Note that the conditions and actions are partitioned among these TDTs, i.e., no conditions or actions are repeated among the TDTs.
[hierarchical TDT: a main table whose rules invoke sub-tables TDT2 and TDT3; TDT3 holds conditions c4, c5 and actions a3, a4, a5]

Figure 2: One possible decomposition of the TDT in Example 2.1.
It is not always possible to decompose a given TDT into a hierarchical TDT as shown in Figure 2 above. Neither is it always valid to merge the TDT hierarchy into a flattened TDT [4].
These two transformations are valid only when the specified concurrency types, data dependencies,
and serializations are preserved. In this particular example, we assume that the order of execution
of all actions follows the order in which they appear in the condition stub. For the transformations
to be valid, we also require that:
- Actions a1 and a2 do not modify any values used in the evaluation of conditions c4 and c5.
- Actions a1 and a2 do not modify any values used in the evaluation of conditions c6 and c7.
Suppose we are given a hierarchical TDT as shown in Figure 2 to start with. After a merging
phase, we get the flattened TDT as shown in Example 2.1. In the decomposition phase, we can
choose to factor only TDT3 because it is called more than once. Then the overall effect of merging followed by TDT decomposition is equivalent to in-line expansion of the procedure corresponding to TDT2. This will not lead to any obvious improvement in hardware synthesis. However, it reduces
execution delay if the description is implemented as a software component because of the overhead
associated with software procedure calls.
The commonality in the flattened TDT may not result from multiple calls to a procedure
as indicated by TDT 3 in Figure 2. It could also be a result of commonality in the input HDL
specification. If this is the case, extraction will lead to a size reduction in the synthesized circuit.
The structural requirements for TDT decomposition can be efficiently captured by a two-level
algebraic representation of TDTs [2]. This representation only captures the control dependencies in
action sets and hence is strictly a subset of the TDT information. As we mentioned earlier, TDTs are
based on the notion of conditions and actions. For each condition variable c, we define a positive
condition literal, denoted l_c, which corresponds to a 'Y' value in a condition entry. We also define a negative condition literal, denoted l_{-c}, which corresponds to an 'N' value in a condition entry. A pair of positive and negative condition literals are related only in that they correspond to the same condition variable in the TDT.
We define a 'Δ' operator between action literals and condition literals, which represents a conjunction operation. This operation is both commutative and associative.
A TDT is a set of rules, each of which consists of a condition part which determines when the
rule is selected, and an action part which lists the actions to be executed once a rule is selected for
execution. The condition part of a rule is represented as

  \prod_{i=1}^{ncond} λ_i,  where λ_i = l_{c_i} if ce(i) = 'Y' and λ_i = l_{-c_i} if ce(i) = 'N',

where ncond is the number of conditions in the TDT and ce(i) is the condition entry value at the ith condition row for this rule. The action part of a rule is represented as

  \prod_{i : ae(i) = 'X'} l_{a_i},  1 <= i <= nact,

where nact is the number of actions in the TDT and ae(i) is the action entry value at the ith action row for this rule. A rule is thus a tuple consisting of its condition part and its action part.
As will become clear later, for the purpose of TDT decomposition a rule can be expressed as a
product of corresponding action and condition literals. We call such a product a cube. For a given
TDT, T , we define an algebraic expression, E T , that consists of disjunction of cubes corresponding
to rules in T .
For simplicity, we drop the 'Δ' operator and the 'l' notation and write 'c' or 'a' instead of l_c and l_a in the algebraic expressions of TDTs. Note in particular that 'c' and '-c' are shorthand notations for 'l_c' and 'l_{-c}', and that they do not follow Boolean laws; these symbols follow only algebraic laws for symbolic computation. For a treatment of this algebra, the reader is referred to [4].
Example 2.2. The algebraic expression for the TDT in Example 2.1 is the disjunction of one cube per rule, each cube being the product of the rule's condition literals and action literals; Example 2.4 rewrites this expression in factored form. Note that the expression carries no specification of delay, concurrency type, serialization relation, or data dependency. Also notice that 'c', '-c', and 'a' are shorthand notations for 'l_c', 'l_{-c}', and 'l_a' respectively. 2
2.1 Kernel Extraction
During TDT decomposition, it is important to keep an action literal or condition literal within
one sub-TDT, that is, the decomposed TDTs must partition the condition and action literals. To
capture this, we introduce the notion of support and TDT support.
Definition 2.1 The support of an expression is the set of literals that appear in the expression.
Definition 2.2 The TDT-support of an expression E T is the set of action literals and positive
condition literals corresponding to all literals in the support of the expression E T .
Example 2.3. Expression c1 -c2 c3 -c6 -c7 a2 a8 is a cube. Its support is {c1, -c2, c3, -c6, -c7, a2, a8}. Its TDT-support is {c1, c2, c3, c6, c7, a2, a8}. 2
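Definitions 2.1 and 2.2 become mechanical once a literal is encoded as a (variable, polarity) pair; an illustrative sketch (helper names invented):

```python
def support(expr):
    """Set of literals appearing in an expression; an expression is
    a set of cubes, each cube a frozenset of (name, positive?) pairs."""
    return set().union(*expr) if expr else set()

def tdt_support(expr):
    """TDT-support: every literal of the support mapped to its
    positive form (actions only occur positively anyway)."""
    return {(name, True) for (name, pol) in support(expr)}

# The cube of Example 2.3: c1 -c2 c3 -c6 -c7 a2 a8.
cube = frozenset([('c1', True), ('c2', False), ('c3', True),
                  ('c6', False), ('c7', False), ('a2', True), ('a8', True)])
print(sorted(n for n, _ in tdt_support({cube})))
# -> ['a2', 'a8', 'c1', 'c2', 'c3', 'c6', 'c7']
```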
We consider TDT decomposition into sub-TDTs that have only disjoint TDT-supports. TDT
decomposition uses algebraic division of TDT-expressions, using divisors to identify sub-TDTs. We define algebraic division as follows:
Definition 2.3 Let f_dividend, f_divisor, f_quotient, and f_remainder be algebraic expressions. We say that f_divisor is an algebraic divisor of f_dividend when we have f_dividend = f_divisor · f_quotient + f_remainder, the TDT-support of f_divisor and the TDT-support of f_quotient are disjoint, and f_divisor · f_quotient is non-empty.

An algebraic divisor is called a factor when the remainder is void. An expression is said to be cube-free if it cannot be factored by a cube.
Definition 2.4 A kernel of an expression is a cube-free quotient of the expression divided by a
cube, which is called the co-kernel of the expression.
Example 2.4. Rewrite the algebraic form of TDT_Example2.1 in factored form. The expression c4 a3 a4 a5 + c5 a3 is cube-free; therefore it is a kernel of TDT_Example2.1, and the corresponding co-kernel is c1 c2 a1 a2. Similarly, c6 c7 a6 a7 a8 + a8 is also a kernel of TDT_Example2.1, which has two corresponding co-kernels: c1 -c2 c3 a2 and -c1 a1 a2. 2
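With cubes encoded as literal sets, division by a cube and the cube-free test reduce to set operations. A sketch under that encoding (names invented; this is weak division by a single cube, not general algebraic division):

```python
def divide_by_cube(expr, cube):
    """Weak division of expr (a set of frozenset cubes) by a single
    cube: cubes containing every literal of `cube` contribute their
    leftover literals to the quotient; the others form the remainder."""
    quotient = {c - cube for c in expr if cube <= c}
    remainder = {c for c in expr if not cube <= c}
    return quotient, remainder

def cube_free(expr):
    """An expression is cube-free iff it has more than one cube and
    no single literal appears in every cube."""
    return len(expr) > 1 and not frozenset.intersection(*expr)

# e = c1*a1*a2 + c1*a3 + c2*a4, divided by the cube c1.
e = {frozenset({'c1', 'a1', 'a2'}), frozenset({'c1', 'a3'}), frozenset({'c2', 'a4'})}
q, r = divide_by_cube(e, frozenset({'c1'}))
# q is the cube-free quotient a1*a2 + a3, so c1 is a co-kernel of it.
```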
3 Algorithm for TDT Decomposition
In this section, we present an algorithm for TDT decomposition. The core of the algorithm is
similar to the process of multi-level logic optimization. Therefore we first discuss how to compute
algebraic kernels from TDT-expressions before we show the complete algorithm which calls the
kernel computing core and addresses some important issues such as preserving data-dependencies
between actions through TDT decomposition.
3.1 Algorithms for Kernel Extraction
A naive way to compute the kernels of an expression is to divide it by the cubes corresponding to
the power set of its support set. The quotients that are not cube free are weeded out, and the others
are saved in the kernel set [5]. This procedure can be improved in two ways: (1) by introducing
a recursive procedure that exploits the property that a kernel of a kernel of an expression is also
a kernel of this expression, (2) by reducing the search by exploiting the commutativity of the 'Δ' operator. Algorithm 3.1 shows a method adapted from a kernel extraction algorithm due to Brayton and McMullen [6], which takes the above two properties into account to reduce computational complexity.
Algorithm 3.1 A Recursive Procedure Used in Kernel Extraction
INPUT: a TDT expression e, a recursion index j;
OUTPUT: the set of kernels of TDT expression e;
extractKernelR(e, j) {
  K := ∅;
  for i := j to n do
    if (|getCubeSet(e, l_i)| >= 2) then
      C := the largest cube containing l_i s.t. getCubeSet(e, C) = getCubeSet(e, l_i);
      if (l_k ∉ C for all k < i) then
        K := K ∪ extractKernelR(e/C, i + 1);
  endfor
  K := K ∪ {e};
  return K;
}
In the above algorithm, getCubeSet(e; C) returns the set of cubes of e whose support includes C.
We order the literals so that condition literals appear before action literals. We use n as the index
of the last condition literal since a co-kernel containing only action literals does not correspond a
valid TDT decomposition. Notice that l c and l - c are two different literals as we explained earlier.
The algorithm is applicable to cube-free expressions. Thus, either the function e is cube-free or it
is made so by dividing it by its largest cube factor, determined by the intersection of the support
sets of all its cubes.
Example 3.1. After running Algorithm 3.1 on the algebraic expression of TDT_2.1 we get the following set of kernels:

[kernels k1 ... k7, built from cubes such as c4 -c5 a3, c5 a3, c7 a7 a8, and c7 a8]

Note that k6 has a cube with no action literals. This indicates a TDT rule with no action selected for execution if k6 leads to a valid TDT decomposition. However, k6 will be eliminated from the kernel set, as we explain later. 2
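The recursion of Algorithm 3.1 can be prototyped directly on a set-of-cubes encoding. The sketch below is my own simplification of the Brayton-McMullen scheme (names invented); it returns kernels as sets of cubes, adding the expression itself as its own trivial kernel:

```python
def kernels(expr, literals, j=0):
    """Kernels of expr (a set of frozenset cubes), in the spirit of
    Algorithm 3.1; `literals` is an ordered list fixing the recursion
    index.  Returns a set of kernels, each a frozenset of cubes."""
    ks = set()
    for i in range(j, len(literals)):
        li = literals[i]
        cubes = [c for c in expr if li in c]
        if len(cubes) >= 2:
            # largest cube C contained in every cube that contains li
            C = frozenset.intersection(*cubes)
            if not any(literals[k] in C for k in range(i)):
                ks |= kernels({c - C for c in cubes}, literals, i + 1)
    ks.add(frozenset(expr))  # expr counts as its own kernel when cube-free
    return ks

# f = a*d*f + a*e*f + b*d*f + b*e*f = (a + b)(d + e)f
f = {frozenset(s) for s in ('adf', 'aef', 'bdf', 'bef')}
ks = kernels(f, list('abdef'))   # contains the kernels d + e and a + b
```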
3.2 TDT Decomposition
Now we present a TDT decomposition algorithm which is based on the kernel extraction algorithm
presented earlier. The decomposition algorithm works as follows. First, the algebraic expression
of a TDT is constructed. Then a set of kernels are extracted from the algebraic expression. The
kernels are eventually used to reconstruct a TDT representation in hierarchical form. Not all the algebraic kernels may be useful in TDT decomposition, since the algebraic expression carries only a subset of the TDT information. We use a set of filtering procedures to delete from the kernel set kernels which correspond to invalid TDT transformations or to transformations producing models that result in inferior synthesis results.
Algorithm 3.2 TDT Decomposition
INPUT: a flattened TDT tdt;
OUTPUT: a hierarchical TDT with root tdt
TDT_Decomposition(tdt) {
  e := constructAlgebraicExpression(tdt);
  K := extractKernel(e);
  trimKernel1(K, e);
  trimKernel2(K, e, tdt);
  trimKernel3(K, e);
  trimSelf(K);
  tdt' := reconstruct_TDT_with_Kernels(tdt, K, e);
  return tdt';
}
The procedure constructAlgebraicExpression() builds the algebraic expression of tdt following Algorithm 3.3. The function expression() builds an expression out of a set of sets according to the data structure we choose for the two-level algebraic expression of TDTs. The complexity of the algorithm is O(AR + CR), where A is the number of actions in tdt, R is the number of rules in tdt, and C is the number of conditions in tdt. The symbol '∅' in the algorithm denotes the empty set.
Algorithm 3.3 Constructing Algebraic Expressions of TDTs
constructAlgebraicExpression(tdt) {
  foreach condition c_i of tdt do
    construct a positive condition literal l_{c_i};
    construct a negative condition literal l_{-c_i};
  endfor
  foreach action a_i of tdt do
    construct an action literal l_{a_i};
  endfor
  R := ∅; // empty set
  foreach rule j of tdt do
    r := ∅;
    foreach condition row i do
      if (ce(i, j) = 'Y') then r := r ∪ {l_{c_i}};
      if (ce(i, j) = 'N') then r := r ∪ {l_{-c_i}};
    endfor
    foreach action row i do
      if (ae(i, j) = 'X') then r := r ∪ {l_{a_i}};
    endfor
    R := R ∪ {r};
  endfor
  return expression(R);
}
Procedure extractKernel(sop) calls the recursive procedure extractKernelR(sop, 1) to get a set of kernels of sop, the algebraic expression of tdt.
Some kernels appear only once in the algebraic expression of a TDT. These kernels would not
help in reducing the resource requirement and therefore they are trimmed from K using procedure
trimKernel1(). Algorithm 3.4 below shows the details of trimKernel1(). The function co-Kernels(k, e) returns the set of co-kernels of kernel k for expression e. The number of co-kernels corresponds to the number of times the sub-TDT corresponding to a given kernel is called in the hierarchy of TDTs.
Algorithm 3.4 Removing Kernels Which Correspond to Single Occurrence of a Pattern
in the TDT Matrices
trimKernel1(K, e) {
  foreach k ∈ K do
    if (|co-Kernels(k, e)| < 2) then K := K - {k};
  endforeach
}
Example 3.2. Look at the kernels in Example 3.1. One of them has only one co-kernel and will therefore be trimmed off by trimKernel1(). 2
Since information such as data dependencies is not captured in the algebraic form of TDTs, the kernels in K may not correspond to a decomposition which preserves the data dependencies specified in the original TDT. These kernels are trimmed using procedure trimKernel2().
Algorithm 3.5 Removing Kernels Which Correspond to an Invalid TDT Transformation
trimKernel2(K, e, tdt) {
  foreach k ∈ K do
    flag := 0;
    foreach co-kernel q of k do
      foreach action literal l_a of q do
        if action a modifies any condition corresponding to a condition literal of k then
          foreach action literal l_α in k do
            if (l_a is specified to appear before l_α) then
              flag := 1;
          endforeach
      endforeach
    endforeach
    if (flag = 1) then K := K - {k};
  endforeach
}
The worst-case complexity of this algorithm is O(AR + CR), since the program checks each condition/action literal corresponding to a condition entry or action entry of tdt at most once.
Example 3.3. Suppose in Example 2.1 that a2 modifies c6 and that the result of a2 is also used by a6. Because a2 modifies c6, in the hierarchical TDT we need to specify that c6 comes after TDT2 to preserve the behavior. However, this violates the data dependency specified between a2 and a6. Therefore, under the condition given here, the kernel containing c7 a8 will be removed by trimKernel2(). 2
An expression may be a kernel of itself, with a co-kernel of '1', if it is cube-free. However, this kernel is not useful for TDT decomposition. We use a procedure trimSelf() to delete the expression itself from the kernel set that will be used for TDT decomposition. Also, as we mentioned earlier, a kernel of an expression's kernel is a kernel of this expression. However, in this paper we limit our discussion to TDT decomposition involving only two levels of calling hierarchy. For this reason, after removing the expression itself from its kernel set, we also delete "smaller" kernels which are also kernels of other kernels of this expression.
Algorithm 3.6 Other Kernel Trimming Routines
trimKernel3(K, e) {
  foreach k ∈ K do
    compute q and r s.t. e = k · q + r;
    if TDTsupport(k) and TDTsupport(r) are not disjoint then
      K := K - {k};
  endforeach
}

trimSelf(K) {
  foreach k ∈ K do
    foreach q ∈ K different from k do
      if q is a kernel of k then
        K := K - {q};
    endforeach
  endforeach
}
Example 3.4. Look at the kernels of E_TDT2.1. Kernel k5 will be eliminated by trimKernel3() since a5 and a8 are also used in other cubes. For the same reason, k6 and k7 are also eliminated. 2
Finally, we reconstruct a hierarchical TDT representation using the remaining algebraic kernels
of the TDT expression. The algorithm is outlined below. It consists of two procedures: reconstruct_TDT_with_Kernels() and constructTDT(), where the latter is called by the former to build a TDT out of an algebraic expression. Again, the worst-case complexity of the algorithm is O(CR + AR).
Algorithm 3.7 Construct a Hierarchical TDT Using Kernels
INPUT: a flattened TDT tdt, its algebraic expression exp, a set of kernels K of exp;
OUTPUT: a new hierarchical TDT;
reconstruct_TDT_with_Kernels(tdt, K, exp) {
  e := exp;
  foreach k ∈ K do
    t := constructTDT(tdt, k);
    generate a new action literal l_t for t;
    compute q and r s.t. e = k · q + r;
    e := l_t · q + r;
  endforeach
  return constructTDT(tdt, e);
}

constructTDT(tdt, e) {
  form the condition stub using those conditions of tdt each of which has at least one corresponding condition literal in e;
  form the condition matrix according to the condition literals appearing in each cube of e;
  form the action stub using those actions of tdt each of which has at least one corresponding action literal in e, together with the "new" action literals corresponding to extracted sub-TDTs;
  form the action matrix according to the action literals appearing in each cube of e;
  assemble a TDT t using the above components;
  return t;
}
Example 3.5. Assume the expression c6 c7 a6 a7 a8 + a8 is the only kernel left after the trimming procedures are performed on the kernel set K of the algebraic expression of TDT_Example2.1. A hierarchical TDT as shown below will be constructed after running reconstruct_TDT_with_Kernels().

[hierarchical TDT: a main table in which the extracted kernel becomes a sub-TDT invoked as a new action, alongside the remaining actions a1 ... a9]

2
4 Implementation and Experimental Results
To show the effect of using TDT decomposition in presynthesis optimizations, we have incorporated
our decomposition algorithm in PUMPKIN, the TDT-based presynthesis optimization tool [4].
Figure 3 shows the flow diagram of the presynthesis optimization process. The ellipse titled "kernel extraction" in Figure 3 shows where the TDT decomposition algorithm fits in the global picture of presynthesis optimization using TDTs.
[flow: input HDL → parser → merger → merged TDT → optimizer → optimized TDT → code generator → optimized HDL, driven by assertions and user specifications; within the optimizer: merged TDT → column reduction → row reduction → kernel extraction → action sharing → optimized TDT]

Figure 3: Flow diagram for presynthesis optimizations: (a) the whole picture, (b) details of the optimizer.
Our experimental methodology is as follows. The HDL description is compiled into TDT models, run through the optimizations, and finally output as a HardwareC description. This output is
provided to the Olympus High-Level Synthesis System [7] for hardware synthesis under minimum
area objectives. We use Olympus synthesis results to compare the effect of the optimizations on hardware size across HDL descriptions. Hardware synthesis was performed for the target technology of
LSI Logic 10K library of gates. Results are compared for final circuit sizes, in terms of the number of cells. In addition to the merging algorithms, the column and row optimization algorithms originally
implemented in PUMPKIN [1], we have added another optimization step of TDT decomposition.
To evaluate the effectiveness of this step, we turn off the column reduction, row reduction, and action sharing phases and run PUMPKIN with several high-level synthesis benchmark designs.
Table 1: Synthesis results: cell counts before and after TDT decomposition is carried out.

  design    module          before   after   Δ%
  daio      phase decoder     1252    1232    2
  daio      receiver           440     355   19
  comm      DMA xmit           992     770   22
  comm      exec unit          864     587   32
  cruiser   State              356     308   14
Table 1 shows the results of TDT decomposition on example designs. The design 'daio' refers to the HardwareC design of a Digital Audio Input-Output chip (DAIO) [8]. The design 'comm' refers to the HardwareC design of an Ethernet controller [9]. The design 'cruiser' refers to the HardwareC design of a vehicle controller; its module 'State' is the vehicle speed regulation module. All designs can be found in the high-level synthesis benchmark suite [7]. The percentage of circuit size reduction is computed for each description and listed in the last column of Table 1. Note that this improvement depends on the amount of commonality existing in the input behavioral descriptions.
5 Conclusion and Future Work
In this paper, we have introduced TDT decomposition as a complementary procedure to TDT
merging. We have presented a TDT decomposition algorithm based on kernel extraction on an
algebraic form of TDTs. Combining TDT decomposition and merging, we can restructure HDL
descriptions to obtain descriptions that lead to either improved synthesis results or more efficient
compiled code. Our experiment on named benchmarks shows a size reduction in the synthesized
circuits after code restructuring.
Sequential Decomposition (SD) has been proposed in [10] to map a procedure to a separate hardware
component which is typically specified with a process in most HDLs. Using SD, a procedure
can be mapped onto an off-the-shelf component with a fixed communication protocol, while a complementary protocol can be constructed accordingly on the rest (the synthesizable part) of the system. Therefore, as future work, we plan to combine SD and TDT decomposition to obtain a novel system partitioning scheme which works on tabular representations.
We will investigate the possible advantages/disadvantages of this approach over other partitioning
approaches.
--R
"HDL Optimization Using Timed Decision Tables,"
"Limited exception modeling and its use in presynthesis optimizations,"
Specification and Design of Embedded Systems.
"System modeling and presynthesis using timed decision tables,"
Synthesis and Optimization of Digital Circuits.
"The decomposition and factorization of boolean expressions,"
"The Olympus Synthesis System for Digital Design,"
"Design of a digital input output chip,"
"Decomposition of sequential behavior using interface specification and complementation,"
--TR
High level synthesis of ASICs under timing and synchronization constraints
Specification and design of embedded systems
HDL optimization using timed decision tables
Limited exception modeling and its use in presynthesis optimizations
Synthesis and Optimization of Digital Circuits
The Olympus Synthesis System
--CTR
Jian Li , Rajesh K. Gupta, HDL code restructuring using timed decision tables, Proceedings of the 6th international workshop on Hardware/software codesign, p.131-135, March 15-18, 1998, Seattle, Washington, United States
J. Li , R. K. Gupta, An algorithm to determine mutually exclusive operations in behavioral descriptions, Proceedings of the conference on Design, automation and test in Europe, p.457-465, February 23-26, 1998, Le Palais des Congrs de Paris, France
Sumit Gupta , Rajesh Kumar Gupta , Nikil D. Dutt , Alexandru Nicolau, Coordinated parallelizing compiler optimizations and high-level synthesis, ACM Transactions on Design Automation of Electronic Systems (TODAES), v.9 n.4, p.441-470, October 2004 | benchmarks;presynthesis optimizations;timed decision table decomposition;decision tables;system behavior model;TDT decomposition;behavioral HDL description;circuit synthesis;optimized HDL description;kernel extraction algorithm |
266421 | Effects of delay models on peak power estimation of VLSI sequential circuits. | Previous work has shown that maximum switching density at a given node is extremely sensitive to a slight change in the delay at that node. However, when estimating the peak power for the entire circuit, the powers estimated must not be as sensitive to a slight variation or inaccuracy in the assumed gate delays because computing the exact gate delays for every gate in the circuit during simulation is expensive. Thus, we would like to use the simplest delay model possible to reduce the execution time for estimating power, while making sure that it provides an accurate estimate, i.e., that the peak powers estimated will not vary due to a variation in the gate delays. Results for four delay models are reported for the ISCAS85 combinational benchmark circuits, ISCAS89 sequential benchmark circuits, and several synthesized circuits. | Introduction
The continuing decrease in feature size and increase in chip
density in recent years give rise to concerns about excessive
power dissipation in VLSI chips. As pointed out in [1], large
instantaneous power dissipation can cause overheating (local hot spots), and the failure rate for components roughly doubles for every 10°C increase in operating temperature.
Knowledge of peak power dissipation can help to determine
the thermal and electrical limits of a design.
The power dissipated in CMOS logic circuits is a complex
function of the gate delays, clock frequency, process param-
eters, circuit topology and structure, and the input vectors
applied. Once the processing and structural parameters have
been fixed, the measure of power dissipation is dominated by
the switching activity (toggle counts) of the circuit. It has
been shown in [2] and [3] that, due to uneven circuit delay
paths, multiple switching events at internal nodes can result,
and power estimation can be extremely sensitive to different
gate delays. Both [2] and [3] computed the upper bound of
maximum transition (or switching) density of individual internal
nodes of a combinational circuit via propagation of
uncertainty waveforms across the circuit. However, these
measures cannot be used to compute a tight upper bound of
the overall power dissipation of the entire circuit.
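Translating toggle counts into a power figure uses the usual (1/2)·C·Vdd² energy per output transition; a small sketch (function and parameter names are mine, not the paper's):

```python
def power_estimate(toggles, cap, vdd, duration):
    """Power over a time window: each output transition of gate g
    dissipates (1/2) * C_g * Vdd^2; the total energy divided by the
    window length gives the power for that window."""
    energy = sum(0.5 * cap[g] * vdd * vdd * n for g, n in toggles.items())
    return energy / duration

# 2 toggles on a 10 fF node at 5 V over a 10 ns window -> 25 uW.
p = power_estimate({'g1': 2}, {'g1': 10e-15}, 5.0, 10e-9)
```

Peak power search then amounts to finding the vector sequence that maximizes the toggle counts feeding this formula over a chosen window.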
Although maximum switching density of a given internal
node can be extremely sensitive to the delay model used,
it is unclear whether peak power dissipation of the entire
circuit is also equally sensitive to the delay model. Since
This research was conducted at the University of Illinois and
was supported in part by the Semiconductor Research Corporation
under contract SRC 96-DP-109, in part by DARPA under contract
DABT63-95-C-0069, and by Hewlett-Packard under an equipment
grant.
glitches and hazards are not taken into account in a zero-delay
framework, the power dissipation measures from the
zero-delay model may differ greatly from the actual powers.
Will the peak power be vastly different among various non-zero
delay models (where glitches and hazards are accounted
for) in sequential circuits? Moreover, the delays used for
internal gates in most simulators are mere estimates of the
actual gate delays. Can we still have confidence in the peak
power estimated using these delay measures considering that
the actual delays for the gates in the circuit may be different?
Since computing the exact gate delays for every gate in the
circuit during simulation is expensive, does there exist a simple
delay model such that the execution time for estimating
power is reduced and an accurate peak power estimate can
be obtained (i.e., the peak powers estimated will not vary
due to a variation in the gate delays)?
Several approaches to measuring maximum power in
CMOS VLSI circuits have been reported [4-10]. Unlike average
power estimations [11-17], where signal switching probabilities
are sufficient to compute the average power, peak
power is associated with a specific starting circuit state S and
a specific sequence of vectors Seq that produce the power.
Two issues are addressed in this paper. First, given a tuple
(S_i, Seq_i) that generates peak power under delay model
DM_i, is it possible to obtain another tuple (S_j, Seq_j) that
generates equal or higher dissipation under a different delay
model DM_j? Second, will the (S_i, Seq_i) that generates
peak power under delay model DM_i also produce near peak
power under a different delay model DM_j?
Four different delay models are studied in this work: zero
delay, unit delay, type-1 variable delay, and type-2
variable delay. The three delay models used in [9] are
the same as the first three delay models of this work. Measures
of peak power dissipation are estimated for all four
delay models over various time periods. Genetic algorithms
(GA's) are chosen as the optimization tool for this problem
as in [10]. GA's are used to find the vector sequences which
most accurately estimate the peak power under various delay
models as well as over various time durations. The estimates
obtained for each delay model are compared with the estimates
from randomly-generated sequences, and the results
for combinational circuits are compared with the automatic
test generation (ATG) based technique [9] as well. The GA-based
estimates will be shown to achieve much higher peak
power dissipation in short execution times when compared
with the random simulation.
The remainder of the paper is organized as follows. Section
2 explains the delay models, and Section 3 discusses the
peak power measures over various time periods. The GA
framework for power estimation is described in Section 4.
Experimental results on the effects of various delay models
are discussed in Section 5, and Section 6 concludes the paper.
2 Delay Models
Zero delay, unit delay, and two types of variable delay
models are studied here. The zero delay model assumes that
no delay is associated with any gate; no glitches or hazards
will occur under this model. The unit delay model assigns
identical delays to every gate in the circuit independent of
gate size and numbers of fanins and fanouts. The type-1
variable delay model assigns the delay of a given gate to be
proportional to the number of fanouts at the output of the
gate. This model is more accurate than the unit delay model;
however, fanouts that feed bigger gates are not taken into ac-
count, and inaccuracies may result. The fourth model is a
different variable delay model which is based on the number
of fanouts as well as the sizes of successor gates. The gate
delay data for various types and sizes of gates are obtained
from a VLSI library. The difference between the type-1 and
type-2 variable delay models for a typical gate is illustrated
in Figure 1.

Figure 1: Variable Delay Models. (Gate G1 drives a 2-input
gate G2 and a 4-input gate G3. Type-1 variable: the delay for
gate G1 is 2 units. Type-2 variable: the delay for G1 is 3 units.)

From the figure, the output capacitance of gate
G1 is estimated to be 2 (the number of fanouts) in the type-1
variable delay model, while the delay calculated using the
type-2 variable delay model is proportional to the delay associated
with driving the successor gates G2 and G3, or
simply 3. Since the type-1 variable delay model
does not consider the size of the succeeding gates, the delay
calculations may be less accurate.
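As a rough sketch of how the two variable delay models differ (the function names and the half-unit-per-fanin scaling in the type-2 model are our illustrative assumptions, not the paper's library data):

```python
def type1_delay(num_fanouts):
    # Type-1 variable delay: proportional to the number of fanouts
    # at the output of the gate.
    return num_fanouts

def type2_delay(successor_fanin_counts, units_per_fanin=0.5):
    # Type-2 variable delay: proportional to the load of the successor
    # gates, approximated here by their input counts scaled by a
    # hypothetical library-derived factor.
    return sum(units_per_fanin * n for n in successor_fanin_counts)

# As in Figure 1: gate G1 drives a 2-input gate G2 and a 4-input gate G3.
d1 = type1_delay(2)       # type-1 delay: 2 units
d2 = type2_delay([2, 4])  # type-2 delay: 3 units
```

The type-2 model thus charges more delay for driving the 4-input gate than the 2-input one, which the fanout count alone cannot distinguish.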
3 Peak Power Measures
Three types of peak power are used in the context of sequential
circuits for comparing the effects of delay models.
An automatic procedure was implemented for various delay
models that obtains these measures and generates the actual
input vectors that attain them, as described in [10]. They
are Peak Single-Cycle Power, Peak n-Cycle Power, and Peak
Sustainable Power, covering time durations of one clock cy-
cle, several consecutive clock cycles, and an infinite number
of cycles, respectively. The unit of power used throughout
the paper is energy per clock cycle and will simply be referred
to as power. In a typical sequential circuit, the switching
activity is largely controlled by the state vectors and less influenced
by input vectors, because the number of flip-flops
far outweighs the number of primary inputs. In all three
cases, the power dissipated in the combinational portion of
the sequential circuit can be computed as

    P = (V_dd^2 / (2 × cycle period)) × Σ_{all gates g} [toggles(g) × C(g)],
where the summation is performed over all gates g, and
toggles(g) is the number of times gate g has switched from 0
to 1 or vice versa within a given clock cycle; C(g) represents
the output capacitance of gate g. In this work, we made
the assumption that the output capacitance for each gate
is equal to the number of fanouts for all four delay models;
however, assigned gate output capacitances can be handled
by our optimization technique as well.
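The computation above can be written out directly; this sketch assumes, as in the text, that each gate's output capacitance is taken to be its fanout count (the function and variable names are ours):

```python
def cycle_energy(toggles, fanouts, vdd=3.3, cycle_period=1e-8):
    # Sum over all gates g of toggles(g) * C(g), scaled by Vdd^2 / (2 * T).
    # toggles[g]: number of 0->1 and 1->0 transitions of gate g in the cycle.
    # fanouts[g]: output capacitance of g, approximated by its fanout count.
    total = sum(toggles[g] * fanouts[g] for g in toggles)
    return vdd ** 2 / (2 * cycle_period) * total

# Two-gate example: g1 toggles 3 times driving 2 loads, g2 once driving 4.
e = cycle_energy({"g1": 3, "g2": 1}, {"g1": 2, "g2": 4})
```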
Peak single-cycle switching activity generally occurs when
the greatest number of capacitive nodes are toggled between
two consecutive vectors. For combinational circuits, the task
is to search for a pair of vectors (V1 , V2 ) that generates the
most gate transitions. For sequential circuits, on the other
hand, the activity depends on the initial state as well as the
primary input vectors. The estimate for peak power dissipation
can be used as a lower-bound for worst-case power
dissipation in the circuit in any given time frame. Our goal
in this work is to find and compare such bounds for the peak
power dissipation of a circuit under different delay assumptions.
Peak n-cycle switching activity is a measure of the peak
average power dissipation over a contiguous sequence of n
vectors. This measure serves as an upper-bound to peak
sustainable power, which is a measure of the peak average
power that can be sustained indefinitely. Both measures are
considered only for sequential circuits. The n-cycle power
dissipation varies with the sequence length n. When n is
equal to 2, the power dissipation is the same as the peak
single-cycle power dissipation, and as n increases, the average
power is expected to decrease if the peak single-cycle
power dissipation cannot be sustained over the n vectors in
the sequence.
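The relation between the measures can be illustrated on a toy per-cycle power trace (a sketch; the trace values are made up, and cycles stand in for vector positions):

```python
def peak_single_cycle(trace):
    # Largest power dissipated in any one clock cycle.
    return max(trace)

def peak_n_cycle(trace, n):
    # Largest average power over any n consecutive clock cycles.
    return max(sum(trace[i:i + n]) / n for i in range(len(trace) - n + 1))

trace = [4.0, 9.0, 5.0, 2.0, 7.0, 3.0]
single = peak_single_cycle(trace)  # 9.0
avg3 = peak_n_cycle(trace, 3)      # 6.0: the single-cycle peak is not sustained
```

As the window grows, the peak average can only stay the same or drop, mirroring the text's observation about increasing n.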
We computed peak sustainable power by finding the synchronizing
sequence for the circuit that produces greatest
power [10]. Unlike the approach proposed in [7], where symbolic
transition counts based on binary decision diagrams
(BDD's) were used to compute maximum power cycles, no
state transition diagrams are needed in our approach. Ba-
sically, the method used in [7] was to find the maximal average
length cycle in the state transition graph, where the
edge weights in the STG indicate the power dissipation in
the combinational portion of the circuit between two adjacent
states. However, the huge sizes of STG's and BDD's
in large circuits make the approach infeasible and impracti-
cal. Our approach restricts the search for peak power loops
to be a synchronizing sequence, thereby limiting the search
to a subset of all loops, and the resulting peak power may
not be a very tight lower bound. However, our experiments
have shown that this approach still yields peaks higher than
extensive random search [10].
4 GA Framework for Power Estimation
The GA framework used in this work is similar to the simple
GA described by Goldberg [18]. The GA contains a population
of strings, also called chromosomes or individuals, in
which each individual represents a sequence of vectors. Peak
n-cycle power estimation requires a search for the (n + 2)-
tuple (S1, V1, ..., Vn+1) that maximizes power dissipation.
This (n + 2)-tuple is encoded as a single binary string, as
illustrated in Figure 2. The population size used is a function
of the string length, which depends on the number of
primary inputs, the number of flip-flops, and the vector sequence
length n. Larger populations are needed to accommodate
longer vector sequences in order to maintain diversity.
Figure 2: Encoding of an Individual.

The population size is set equal to 32 × sequence length
when the number of primary inputs is less than 16, and
128 × sequence length when the number of primary inputs
is greater than or equal to 16. Each individual has an
associated fitness, which measures the quality of the vector
sequence in terms of switching activity, indicated by the total
number of capacitive nodes toggled by the individual. The
delay model and the amount of capacitive-node switching
are taken into account during the evolutionary processes of
the GA via the fitness function. The fitness function is a
simple counting function that measures the power dissipated
in terms of amount of capacitive-node switching in a given
time period.
The population is first initialized with random strings. A
variable-delay logic simulator is then used to compute the
fitness of each individual. The evolutionary processes of se-
lection, crossover, and mutation are used to generate an entirely
new population from the existing population. This
process continues for 32 generations, and the best individual
in any generation is chosen as the solution. We use tournament
selection without replacement and uniform crossover,
and mutation is done by simply flipping a bit. In tournament
selection without replacement, two individuals are randomly
chosen and removed from the population, and the
best is selected; the two individuals are not replaced into
the original population until all other individuals have also
been removed. Thus, it takes two passes through the parent
population to completely fill the new population. In uniform
crossover, bits from the two parents are swapped with
probability 0.5 at each string position in generating the two
offspring. A crossover probability of 1 is used; i.e., the two
parents are always crossed in generating the two offspring.
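The evolutionary loop described above (tournament selection without replacement, uniform crossover with probability 1, bit-flip mutation) can be sketched as follows; the fitness function here is a stand-in for the toggle-counting logic simulation, and the toy population and generation counts are our assumptions:

```python
import random

def evolve(fitness, length, pop_size=32, generations=16, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        # Tournament selection without replacement: two passes over the
        # shuffled population, keeping the better of each disjoint pair.
        parents = []
        for _ in range(2):
            order = rng.sample(pop, len(pop))
            for a, b in zip(order[::2], order[1::2]):
                parents.append(max(a, b, key=fitness))
        # Uniform crossover (crossover probability 1): swap each bit
        # between the two parents with probability 0.5.
        nxt = []
        for a, b in zip(parents[::2], parents[1::2]):
            c1, c2 = a[:], b[:]
            for i in range(length):
                if rng.random() < 0.5:
                    c1[i], c2[i] = c2[i], c1[i]
            nxt += [c1, c2]
        # Mutation: occasionally flip a single bit of an offspring.
        for ind in nxt:
            if rng.random() < 0.1:
                ind[rng.randrange(length)] ^= 1
        pop = nxt
        best = max(pop + [best], key=fitness)
    return best

# Stand-in fitness: treat the number of 1-bits as the "toggle count".
best = evolve(fitness=sum, length=24)
```

In the paper's setting the string would encode the (n + 2)-tuple of initial state and input vectors, and the fitness would come from a variable-delay logic simulation rather than a bit count.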
Since the majority of time spent by the GA is in the fitness
evaluation, parallelism among the individuals can be
exploited. Parallel-pattern simulation [19] is used to speed
up the process in which candidate sequences from the population
are simulated simultaneously, with values bit-packed
into 32-bit words.
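A sketch of the bit-packing idea: 32 patterns share one machine word per signal, so a single bitwise operation evaluates a gate under all 32 patterns, and output toggles are counted with an XOR and a population count (the signal values here are invented):

```python
MASK = (1 << 32) - 1  # one bit position per pattern, 32 patterns per word

def pack(bits):
    # bits[k] is the signal value under pattern k; pack into one word.
    word = 0
    for k, b in enumerate(bits):
        word |= (b & 1) << k
    return word

def popcount(word):
    return bin(word).count("1")

# Evaluate a 2-input AND gate under 32 old and 32 new input patterns
# with a single bitwise AND each, then count output toggles.
a = pack([1] * 32)
b_old = pack([0, 1] * 16)
b_new = pack([1, 0] * 16)
out_old = a & b_old & MASK
out_new = a & b_new & MASK
toggled = popcount(out_old ^ out_new)  # patterns whose output switched
```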
5 Experimental Results
Peak powers for four different delay models were estimated
for ISCAS85 combinational benchmark circuits, ISCAS89 sequential
benchmark circuits, and several synthesized circuits.
Functions of the synthesized circuits are as follows: am2910
is a 12-bit microprogram sequencer [20]; mult16 is a 16-bit
two's complement shift-and-add multiplier; div16 is a 16-bit
divider using repeated subtraction; and proc16 is a 16-bit
microprocessor with 373 flip-flops. Table 1 lists the total
number of gates and capacitive nodes for all circuits. The
total number of capacitive nodes in a circuit is computed
as the total number of gate inputs. All computations were
performed on an HP 9000 J200 with 256 MB RAM.
The GA-based power estimates are compared against the
best estimates obtained from randomly generated vector se-
quences, and the powers are expressed in peak switching
frequency per node (PSF), which is the average frequency
of peak switching activity of the nodes (ratio of the
number of transitions on all nodes to the total number of
capacitive nodes) in the circuit.

Table 1: Numbers of Capacitive Nodes in Circuits
Circuit  Gates  Cap Nodes
c7552    3828   6252
s5378    3043   4440
s526     224    472

Notice that for the non-zero
delay models, PSF's greater than 1.0 are possible due
to repeated switching on the internal nodes within one clock
cycle.
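PSF as defined above is straightforward to compute (a sketch with invented numbers):

```python
def psf(node_transitions, num_capacitive_nodes):
    # Ratio of the number of transitions on all nodes to the total
    # number of capacitive nodes in the circuit.
    return sum(node_transitions) / num_capacitive_nodes

# Four capacitive nodes; one glitching node switches twice in the cycle,
# so the PSF exceeds 1.0 under a non-zero delay model.
value = psf([2, 1, 1, 1], 4)  # 1.25
```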
We have previously shown that estimates made by our
GA-based technique are indeed good estimates of peak
power, and that if random-based methods were to achieve
similar levels of peak power estimates, execution times are
several orders of magnitude higher than for the GA-based
technique [10]. Peak power estimation results will now be
presented for the various delay models, and correlations between
the delay models will be discussed.
5.1 Effects of the four delay models
For the ISCAS85 combinational circuits, Table 2 compares
the GA-based results against the randomly-generated
sequences as well as those reported in [9]. For each circuit,
results for all four delay models are reported. The estimates
obtained from the best of randomly-generated vector-pairs,
an ATG-based approach [9], and the GA-based technique are
shown. The ATG-based approach used in [9] attempts to optimize
the total number of nodes switching in the expanded
combinational circuit; its results were compared against the
best of only 10,000 random simulations. In our work, on
the other hand, the number of random simulations depends
on the GA population size, which is a function of the number
of primary inputs in the circuit. Typically, the number
of random simulations exceeds 64,000. The GA-based estimates
are the highest for all of the circuits and for all delay
models, except for one zero-delay estimate in c1355, where
a 1.1% lower power was obtained. The average improvements
made by the GA-based technique over the random
simulations are 10.8%, 27.4%, 38.5%, and 35.5%, for the
four delay models, respectively. Since [9] estimated power
on the expanded circuits, backtrace using zero-delay techniques
was used. Consequently, only hazards were captured;
zero-width pulses were not taken into account in the non-zero
delay models. For this reason, our GA-based estimates are
higher than all the estimates obtained in [9]. Furthermore,
results from [9] suggested that peak power estimated from
the unit-delay model consistently gave higher values than
both zero-delay and type-1 variable-delay estimates. We do
not see such consistency from our results in Table 2. The estimated
powers from random simulations do not show such
a trend either. Nevertheless, it is observed that peak powers
for some circuits are very sensitive to the underlying delay
model. For instance, in circuit c3540, the peak power estimates
are 0.600, 2.684, 3.555, and 4.678 for zero, unit, type-1
variable, and type-2 variable delay models.

Table 2: Peak Single-Cycle Power for ISCAS85 Combinational Circuits
Circuit  Zero (Rnd / [9] / GA)   Unit (Rnd / [9] / GA)   Type-1 Variable (Rnd / [9] / GA)   Type-2 Variable (Rnd / [9] / GA)
c432     0.636 / 0.536 / 0.805   1.985 / 1.985 / 2.362   2.557 / 1.181 / 3.399              2.286 / - / 2.522
Impr     10.8%                   27.4%                   38.5%                              35.5%
Impr: Average improvements over the best random estimates
The results for sequential circuits using the four delay
models are shown in Table 3. The length of the sequence
in each case is set to 10 for N-cycle and sustainable power
estimates. Results for best of the random simulations are
not shown; however, the average improvements made by the
GA-based estimates are shown at the bottom of the table.
The number of random simulations is also the same as the
number of simulations required in the GA-based technique.
For the small circuits, the number of simulations is around
64,000. The GA-based estimates surpass the best random
estimates for all circuits for all three peak power measures
under all four delay models. The average improvements of
the GA-based estimates for each delay model are shown in
the bottom row of Table 3. Up to 32.5% improvement is
obtained on the average for peak N-cycle powers.
Across the four delay models, estimates made from the
zero-delay model consistently gave significantly lower power,
since glitches and hazards were not accounted for. However,
there is not a clear trend as to which of the other three delay
models will consistently give peak or near peak power esti-
mates. In circuits such as s641, s713, and am2910, the peak
powers are very sensitive to the delay model. Over 100% increase
in power dissipation can result when a different delay
model is used. For some other circuits, such as s400, s444,
s526, and s1238, the peak powers estimated are quite insensitive
to the underlying delay model; less than 5% difference
in the estimates is observed from the three delay models.
The execution times needed for the zero-delay estimates
are typically smaller, since few events need to be evaluated,
while execution times for the other three delay models are
comparable. Table 4 shows the execution times for the GA
optimization technique for the unit-delay model. The execution
times are directly proportional to the number of events
generated in the circuit during the course of estimation. For
this reason, peak n-cycle power estimates do not take 10
times as much computation as peak single-cycle power for
n = 10, since the amount of activity across the 10 cycles is
not 10 times that of the peak single cycle. For circuits in
which peak power estimates differ significantly among various
delay models, the computation costs will also differ significantly
according to the number of activated events. The
execution times required for the random simulations are very
close to the GA-based technique since identical numbers of
simulations are performed.
5.2 Cross-effects of delay models
Although peak powers are sensitive to the underlying delay
model, it would be interesting to study if the vectors
optimized under one delay model will produce peak or near-peak
power under the other delay models. If we can find a
delay model in which derived sequences consistently generate
near peak powers under the other delay models, we can
conclude that, although peak power estimation is sensitive
to the underlying delay model, there exists a delay model by
which optimized sequences generate high powers regardless
of the delay model used.
Experiments were therefore carried out to study these
cross-effects of delay models. Table 5 shows the results for
the combinational circuits. For each circuit, the previously
optimized peak power measures for various delay models (the
same as those in Table 2) are shown in the Opt columns.
The power produced from applying vectors optimized for a
given delay model on the other three delay models are listed
in the adjacent (Eff) columns. For example, in circuit c2670
under the zero-delay model (upper-left quadrant of the ta-
ble), the original peak single-cycle switching frequency per
node (PSF) for the unit-delay model was 2.251 (Opt); how-
ever, when vectors optimized for the zero-delay model are
simulated using the unit-delay model, the measured PSF is
1.150 (Eff). The Eff measures that exceed the optimized
(Opt) measures are italicized. At the bottom of the table,
Dev shows the average amount that the Eff values deviated
from the average Opt values. For instance, when the optimized
vectors on the unit-delay model are simulated using
the zero-delay assumption, an average deviation of 18.9%
results from the GA-optimized zero-delay powers. On the
average, the Eff peak powers deviated from the optimized
values for all delay models as indicated by the Dev values.
However, the vectors optimized for the zero-delay model produced
significantly lower power when they were simulated in
the non-zero delay environments; over 60% drops were observed
from the GA-optimized powers, as indicated by the
Dev metric. The vectors that were optimized on the remaining
three delay models, on the other hand, deviated
less significantly when simulated using other non-zero delay
models, all less than 10% deviations.
Table 3: GA-Based Power Estimates for ISCAS89 Sequential Circuits
Columns: Circuit; Single Cycle, N-Cycle, and Sustainable power, each
under the Z, U, V1, and V2 delay models.
Impr (Single Cycle): 11.9% (Z)  12.3% (U)  20.4% (V1)  22.3% (V2)
Impr (N-Cycle):      17.4% (Z)  32.5% (U)  26.7% (V1)  32.0% (V2)
Impr (Sustainable):  12.3% (Z)  23.8% (U)  24.4% (V1)  30.2% (V2)
Z: Zero  U: Unit  V1: Type-1 Variable  V2: Type-2 Variable
Impr: Average improvements over the best random estimates

Table 4: Execution Times for the GA-Based Technique (seconds)
Columns: Circuit; Single-cycle, N-cycle, Sustainable.

For sequential circuits, we will first look at the vectors optimized
under the zero-delay assumption. Table 6 shows the
results. The Opt and Eff values are defined the same way
as before. For example, in circuit s382, the peak single-cycle
switching frequency per node (PSF) under unit delay was
however, when the vector sequence optimized for the
zero-delay model is simulated using the unit-delay model, the
measured PSF is 0.952. A similar format is used for the n-cycle
and sustainable powers. Average deviation values Dev
are also displayed at the bottom of the table. On average,
the Eff peak powers deviated significantly from the powers
optimized by the GA for both unit and type-1 variable delay
models, as indicated in the Dev row. When examining
the results for each circuit individually, none of the single-cycle
vectors derived under the zero-delay model exceed the
previously optimized peak powers for unit and type-1 variable
delay models. However, several occasions of slightly
higher peak n-cycle and sustainable powers have been obtained
by the zero-delay-optimized vectors because of the
difficulty in finding the optimum in the huge search space
for the (n+2)-tuple needed for the peak n-cycle and sustainable
powers. Nevertheless, the number of these occasions is
quite small. For the small circuits s298 to s526, the results
show that the zero-delay-optimized vectors produced peak or
near-peak powers for the unit and type-1 variable delay models
as well. One plausible explanation for this phenomenon
is that the peak switching frequencies (PSF's) for these circuits
are small, typically less than 1.2, indicating that most
nodes in the circuit do not toggle multiple times in a single
clock cycle. For the other circuits, especially for circuits
where high PSF's are obtained, the vectors obtained which
generated high peak powers under zero-delay assumptions do
not provide peak powers under non-zero delay assumptions.
The last four synthesized circuits, am2910, mult16, div16,
and proc16, showed widened gaps. The effects on type-2
variable delay are similar to those on type-1 variable delay.
Similarly, vectors optimized under the non-zero-delay
models were simulated using other delay models, and the
results are shown in Table 7 for type-2 variable delay. The
trends for the unit-delay and type-1 variable delay models
are similar to those seen for the type-2 variable delay model.
The results are not compared with the zero-delay model here,
since their trends are similar to those for the combinational
circuits, where large deviations exist. When the optimized
powers are great, i.e., PSF's are greater than 2, a greater deviation
is observed. Such cases can be seen in circuits s641,
s713, am2910, and div16. For instance, the optimized peak
single-cycle unit-delay power for s713 is 2.815, but the power
produced by applying the vectors optimized for the type-2
variable delay model is only 1.273. On the contrary, less
significant deviation is observed in circuits for which smaller
PSF's are obtained. For the n-cycle and sustainable pow-
ers, although deviations still exist when compared with the
GA-optimized vectors (shown in Dev), they are small devi-
ations. Furthermore, deviations between type-1 and type-2
delay models are smaller when compared with the unit-delay
model, suggesting that the two variable delay models are
more correlated.
Table 5: Effects of Various Delay Models for ISCAS85 Combinational Circuits

Effects of Zero-delay Vectors On...
Circuit  Unit (Opt / Eff)   Type-1 Var (Opt / Eff)   Type-2 Var (Opt / Eff)
c432     2.362 / 2.035      3.399 / 2.181            2.522 / 2.192
c1355    3.260 / 1.000      2.707 / 1.011            2.676 / 1.042
c2670    2.251 / 1.150      2.750 / 1.294            2.825 / 1.320
c7552    2.821 / 1.619      2.833 / 1.757            3.238 / 1.788

Effects of Unit-delay Vectors On...
Circuit  Zero (Opt / Eff)   Type-1 Var (Opt / Eff)   Type-2 Var (Opt / Eff)
c432     0.636 / 0.455      3.399 / 2.904            2.522 / 3.032
c1355    0.533 / 0.441      2.707 / 2.391            2.676 / 3.143
c2670    0.623 / 0.572      2.750 / 2.364            2.825 / 2.355
c7552    0.602 / 0.537      2.833 / 3.089            3.238 / 3.012

Effects of Type-1 Var-delay Vectors On...
Circuit  Zero (Opt / Eff)   Unit (Opt / Eff)         Type-2 Var (Opt / Eff)
c432     0.636 / 0.496      2.362 / 2.373            2.522 / 3.399

Effects of Type-2 Var-delay Vectors On...
Circuit  Zero (Opt / Eff)   Unit (Opt / Eff)         Type-1 Var (Opt / Eff)
c432     0.636 / 0.411      2.362 / 1.776            3.399 / 2.155

Dev: Average deviation of Eff from Opt.

6 Conclusions

When estimating peak power under one delay model, it
is crucial to have confidence that the estimate will not vary
significantly when the actual delays in the circuit differ from
the delay model assumed. Peak power estimation under four
different delay models for single-cycle, n-cycle, and sustainable
power dissipation has been presented. For most circuits,
vector sequences optimized under the zero-delay assumption
do not produce peak or near-peak powers when they are simulated
under a non-zero delay model. Similarly, the vectors
optimized under non-zero delay models do not produce peak
or near-peak zero-delay powers. However, vector sequences
optimized under non-zero delay models provide good measures
for other non-zero delay models, with only a slight
deviation, for most combinational circuits. For sequential
circuits, small deviations are observed when the optimized
peak powers are small, but when the optimized peak powers
are large, i.e., nodes switch multiple times in a single cycle
on the average, the estimated peak powers will be sensitive
to the underlying delay model.
References
"Shrinking devices put the squeeze on system packag- ing,"
"Extreme delay sensitivity and the worst-case switching activity in VLSI circuits,"
"Estimation of maximum transition counts at internal nodes in CMOS VLSI circuits,"
"Estimation of power dissipation in CMOS combinational circuits using boolean function manipulation,"
"Worst case voltage drops in power and ground busses of CMOS VLSI circuits,"
"Resolving signal correlations for estimating maximum currents in CMOS combinational circuits,"
"Computing the maximum power cycles of a sequential circuit,"
"Maximum power estimation for sequential circuits using a test generation based technique,"
"Maximizing the weighted switching activity in combinational CMOS circuits under the variable delay model,"
"K2: An estimator for peak sustainable power of VLSI circuits,"
"Estimation of average switching activity in combinational and sequential circuits,"
"A survey of power estimation techniques in VLSI circuits,"
"Statistical estimation of sequential circuit activity,"
"Accurate power estimation of CMOS sequential circuits,"
"Switching activity analysis using boolean approximation method,"
"Power estimation methods for sequential logic circuits,"
"Power estimation in sequential circuits,"
Genetic Algorithms in Search
Digital Systems Testing and Testable Design.
"The AM2910, a complete 12-bit microprogram sequence controller,"
Power optimization using divide-and-conquer techniques for
minimization of the number of operations

Abstract: We develop an approach to minimizing power consumption of portable
wireless DSP applications using a set of compilation and architectural techniques.
The key technical innovation is a novel divide-and-conquer compilation technique
to minimize the number of operations for general DSP computations. Our technique
not only optimizes a significantly wider set of computations than the previously
published techniques, but also outperforms (or performs at least as well as) the
other techniques on all examples. Along the architectural dimension, we investigate
the coordinated impact of compilation techniques on the number of processors which
provides the optimal trade-off between cost and power. We demonstrate that proper
compilation techniques can significantly reduce power with bounded hardware cost.
The effectiveness of all techniques and algorithms is documented on numerous
real-life designs.

INTRODUCTION
1.1 Motivation
The pace of progress in integrated circuits and system design has been dictated by
the push from application trends and the pull from technology improvements. The
goal and role of designers and design tool developers has been to develop design
methodologies, architectures, and synthesis tools which connect changing worlds of
applications and technologies.
A preliminary version of this paper was presented at the 1997 ACM/IEEE International Conference
on Computer-Aided Design, San Jose, California, November 10-13, 1997.
Authors' addresses: I. Hong and M. Potkonjak, Computer Science Department, University of Cali-
fornia, Los Angeles, CA 90095-1596; R. Karri, Department of Electrical & Computer Engineering,
University of Massachusetts, Amherst, MA 01003.
Recently, a new class of portable applications has been forming a new market
at an exceptionally high rate. The applications and products of the portable
wireless market are defined by their intrinsic demand for portability, flexibility,
and cost sensitivity, and by their high digital signal processing (DSP) content
[Schneiderman 1994].
Portability translates into the crucial importance of low power design, flexibility
results in a need for programmable platforms implementation, and cost sensitivity
narrows architectural alternatives to a uniprocessor or an architecture with a limited
number of off-the-shelf standard processors. The key optimization degree of
freedom for relaxing and satisfying this set of requirements comes from properties
of typical portable computations. The computations are mainly linear, but rarely
100% linear, due to either the need for adaptive algorithms or nonlinear quantization
elements. Such computations are well suited for static compilation and intensive
quantitative optimization.
Two main recent relevant technological trends are reduced minimal feature size
and therefore reduced voltages of deep submicron technologies, and the introduction
of ultra low power technologies. In the widely dominant digital CMOS technologies,
the power consumption is proportional to the square of the supply voltage (V_dd). The most
effective techniques try to reduce V_dd while compensating for the speed reduction using
a variety of architectural and compilation techniques [Singh et al. 1995].
The main limitation of the conventional technologies with respect to power minimization
is also related to V_dd and the threshold voltage (V_t). In traditional bulk silicon
technologies both voltages are commonly limited to the range above 0.7V. However, in
the last few years ultra low power silicon-on-insulator (SOI) technologies, such as
SIMOX (separation by implanted oxygen), bond and etchback SOI (BESOI) and
silicon-on-insulator-with-active-substrate (SOIAS), have reduced both V_dd and V_t
to well below 1V [El-Kareh et al. 1995; Ipposhi et al. 1995]. There are a number
of reported ICs which have values for V_dd and V_t in a range as low as 0.05V - 0.1V
[Chandrakasan et al. 1996].
Our goal in this paper is to develop a system of synthesis and compilation methods
and tools for realization of portable applications. Technically restated, the primary
goal is to develop techniques which efficiently and effectively compile typical
DSP wireless applications on single and multiple programmable processors assuming
both traditional bulk silicon and the newer SOI technologies. Furthermore, we
study achievable power-cost trade-offs when parallelism is traded for power reduction
on programmable platforms.
1.2 Design Methodology: What is New?
Our design methodology can be briefly described as follows. Given throughput,
power consumption and cost requirements for a computation, our goal is to find
cost-effective, power-efficient solutions on single or multiple programmable processor
platforms. The first step is to find power-efficient solutions for a single processor
implementation by applying the new technique described in Section 4. The second
step is to continue to add processors until the reduction in average power consumption
is not enough to justify the cost of an additional processor. This step
generates cost-effective and power efficient solutions. This straightforward design
methodology produces implementations with low cost and low power consumption
for the given design requirements.
The main technical innovation of the research presented in this paper is the first
approach for the minimization of the number of operations in arbitrary computations.
The approach not only optimizes a significantly wider set of computations than
previously published techniques [Parhi and Messerschmitt 1991; Srivastava
and Potkonjak 1996], but also outperforms, or performs at least as well as, those
techniques on all examples. The novel divide-and-conquer compilation procedure
combines and coordinates power and enabling effects of several transformations
(using a well organized ordering of transformations) to minimize the number of
operations in each logical partition. To the best of our knowledge this is the first
approach for minimization of the number of operations which in an optimization
intensive way treats general computations.
The second technical highlight is the quantitative analysis of cost vs power trade-off
on multiple programmable processor implementation platforms. We derive a
condition under which the optimization of the cost-power product using parallelization
is beneficial.
1.3 Paper Organization
The rest of the paper is organized in the following way. First, in the next sec-
tion, we summarize the relevant background material. In Section 3 we review the
related work on power estimation and optimization as well as on program optimization
using transformations, and in particular the minimization of the number
of operations. Sections 4 and 5 are the technical core of this paper and present
a novel approach for minimization of the number of operations for general DSP
computations and explore compiler and technology impact on power-cost trade-offs
of multiple processors-based low power application specific systems. We then
present comprehensive experimental results and their analysis in Section 6 followed
by conclusions in Section 7.
2. PRELIMINARIES
Before we delve into technical details of the new approach, we outline the relevant
preliminaries in this section. In particular, we describe application and computation
abstractions, selected implementation platform at the technology and architectural
level, and power estimation related background material.
2.1 Computational Model
We selected as our computational model synchronous data flow (SDF) [Lee and
Messerschmitt 1987; Lee and Parks 1995]. Synchronous data flow (SDF) is a special
case of data flow in which the number of data samples produced or consumed
by each node on each invocation is specified a priori. Nodes can be scheduled
statically at compile time onto programmable processors. We restrict our attention
to homogeneous SDF (HSDF), where each node consumes and produces exactly
one sample on every execution. The HSDF model is well suited for specification
of single task computations in numerous application domains such as digital signal
processing, video and image processing, broadband and wireless communications,
control, information and coding theory, and multimedia.
The syntax of a targeted computation is defined as a hierarchical control-data
flow graph (CDFG) [Rabaey et al. 1991]. The CDFG represents the computation as
a flow graph, with nodes, data edges, and control edges. The semantics underlying
the syntax of the CDFG format, as we already stated, is that of the synchronous
data flow computation model.
The only relevant speed metric is throughput, the rate at which the implementation
is capable of accepting and processing the input samples from two consecutive
iterations. We opted for throughput as the selected speed metric since in essentially
all DSP and communication wireless computations latency is not a limiting factor,
where latency is defined to be the delay between the arrival of a set of input samples
and the production of the corresponding output as defined by the specification.
2.2 Hardware Model
The basic building block of the targeted hardware platform is a single programmable
processor. We assume that all types of operations take one clock cycle for their
execution, as is the case in many modern DSP processors. The adaptation of the
software and algorithms to other hardware timing models is straightforward. In the
case of a multi-processor, we make the following additional simplifying assumptions:
(i) all processors are homogeneous and (ii) inter-processor communication does
not cost any time and hardware. This assumption is reasonable because multiple
processors can be placed on single integrated circuit due to increased integration,
although it would be more realistic to assume additional hardware and delay penalty
for using multiple processors.
2.3 Power and Timing Models in Conventional and Ultra Low Power Technology
It is well known that there are three principal components of power consumption
in CMOS integrated circuits: switching power, short-circuit power, and leakage
power. The switching power is given by P_switching = alpha * C_L * V_dd^2 * f_clock,
where alpha is the probability that the power consuming switching activity, i.e. a
transition from 0 to 1, occurs, C_L is the loading capacitance, V_dd is the supply
voltage, and f_clock is the system clock frequency. The product alpha * C_L is
defined to be the effective switched capacitance.
In CMOS technology, switching power dominates the power consumption. The short-circuit
power consumption occurs when both the NMOS and PMOS transistors are
"ON" at the same time, while the leakage power consumption results from reverse
biased diode conduction and subthreshold operation. We assume that effective
switched capacitance increases linearly with the number of processors and supply
voltage can not be lowered below threshold voltage V t , for which we use several
different values between 0.06V and 1.1V for both conventional and ultra low power
technology.
It is also known that reduced voltage operation comes at the cost of reduced
throughput [Chandrakasan et al. 1992]. The clock period T follows the relationship
T = k * V_dd / (V_dd - V_t)^2, where k is a constant [Chandrakasan et al. 1992]. The
maximum rate at which a circuit can be clocked therefore monotonically decreases as
the voltage is reduced, and as the supply voltage is reduced close to V_t, the rate
of clock speed reduction becomes higher.
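As an illustration, the timing model above can be evaluated directly. The following Python sketch is our own toy example (the constant k and the voltages are arbitrary, not measured values for any technology); it shows how sharply the clock period grows as V_dd approaches V_t:

```python
def clock_period(v_dd, v_t, k=1.0):
    """Relative clock period T = k * v_dd / (v_dd - v_t)**2;
    k is a technology-dependent constant and v_dd must exceed v_t."""
    if v_dd <= v_t:
        raise ValueError("supply voltage must exceed the threshold voltage")
    return k * v_dd / (v_dd - v_t) ** 2

# Slowdown when scaling a 3.3V supply down to 1.1V at V_t = 0.7V:
slowdown = clock_period(1.1, v_t=0.7) / clock_period(3.3, v_t=0.7)
print(round(slowdown, 2))
```

Note that k cancels in such ratios, which is why the relative analysis in the next subsection does not need its value.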
2.4 Architecture-level Power Models for Single and Multiple Programmable Processors
The power model used in this research is built on three statistically validated and
experimentally established facts. The first fact is that the number of operations
at the machine code-level is proportional to the number of operations at high-level
language [Hoang and Rabaey 1993]. The second fact is that the power consumption
in modern programmable processors such as the Fujitsu SPARClite MB86934,
a 32-bit RISC microcontroller, is directly proportional to the number of operations,
regardless of the mix of operations being executed [Tiwari and Lee 1995]. Tiwari
and Lee [1995] report that all the operations including integer ALU instructions,
floating point instructions, and load/store instructions with locked caches incur
similar power consumption. Since the use of memory operands results in additional
power overhead due to the possibility of cache misses, we assume that the cache
locking feature is exploited as far as possible. If the cache locking feature can not
be used for the target applications, the power consumption due to memory traffic is
likely to be reduced by the minimization of the number of operations, since fewer
operations usually imply less memory traffic. When the power consumption depends
on the mix of operations being executed, as in the case of the Intel 486DX2 [Tiwari
et al. 1994], a more detailed hardware power model may be needed. However, it is
obvious that in all proposed power models for programmable processors, a significant
reduction in the number of operations inevitably results in lower power. The final
empirical observation is related to power consumption and timing models in digital
CMOS circuits presented in the previous subsection.
Based on these three observations, we conclude that if the targeted implementation
platform is a single programmable CMOS processor, a reduction in the number
of operations is the key to power minimization. When the initial number of operations
is N_init, the optimized number of operations is N_opt, the initial voltage
is V_init, and the scaled voltage is V_opt, the optimized power consumption relative
to the initial power consumption is (N_opt / N_init) * (V_opt / V_init)^2. For
multiprocessors, assuming that there is no communication overhead, the optimized
power consumption for n processors relative to that for a single processor is
(V_n / V_1)^2, where V_1 and V_n are the scaled voltages for single and n processors,
respectively.
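To make the single-processor trade-off concrete: removing operations creates timing slack per sample, and voltage scaling converts that slack into a quadratic power reduction. The following Python sketch is our illustration (the voltages and operation counts are hypothetical); it solves for the scaled voltage by bisection over the timing model of Section 2.3 and then evaluates the relative power:

```python
def relative_power(n_init, n_opt, v_init, v_opt):
    """Optimized power relative to initial: (N_opt/N_init) * (V_opt/V_init)**2."""
    return (n_opt / n_init) * (v_opt / v_init) ** 2

def scaled_voltage(slack, v_init, v_t, iters=60):
    """Bisect for v in (v_t, v_init] with T(v) = slack * T(v_init),
    where T(v) = v / (v - v_t)**2 (the constant k cancels); slack >= 1."""
    period = lambda v: v / (v - v_t) ** 2
    target = slack * period(v_init)
    lo, hi = v_t + 1e-9, v_init   # period(lo) is huge, period(hi) < target
    for _ in range(iters):
        mid = (lo + hi) / 2
        if period(mid) > target:  # still too slow: raise the voltage
            lo = mid
        else:
            hi = mid
    return hi

# Halving the operation count leaves 2x timing slack per sample:
v_opt = scaled_voltage(slack=2.0, v_init=3.3, v_t=0.7)
print(round(v_opt, 2), round(relative_power(1000, 500, 3.3, v_opt), 3))
```

Bisection applies because the period is strictly decreasing in the supply voltage on (v_t, v_init], so the target period has a unique solution there.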
3. RELATED WORK
The related work can be classified along two lines: low power and implementation
optimization, and in particular minimization of the number of operations using
transformations. The relevant low power topics can be further divided in three
directions: power minimization techniques, power estimation techniques, and technologies
for ultra low power design. The relevant compilation techniques are also
grouped in three directions: transformations, ordering of transformations, and minimization
of the number of operations.
In the last five years power minimization has been arguably the most popular
optimization goal. This is mainly due to the impact of the rapidly growing market
for portable computation and communication products. Power minimization efforts
across all levels of the design abstraction process are surveyed in [Singh et al. 1995].
It is apparent that the greatest potential for power reduction is at the highest
levels (behavioral and algorithmic). Chandrakasan et al. [1992] demonstrated the
effectiveness of transformations by showing an order of magnitude reduction in several
DSP computationally intensive examples using a simulated annealing-based
transformational script. Raghunathan and Jha [1994] and Goodby et al. [1994]
also proposed methods for power minimization which explore trade-offs between
voltage scaling, throughput, and power. Chatterjee and Roy [1994] targeted power
reduction in fully hardwired designs by minimizing the switching activity. Chandrakasan
et. al. [1994], and Tiwari et. al. [1994] did work in power minimization
when programmable platforms are targeted.
Numerous power modeling techniques have been proposed at all levels of abstraction
in the synthesis process. As documented in [Singh et al. 1995] while there have
been numerous efforts at the gate level, at the higher level of abstraction relatively
few efforts have been reported.
Chandrakasan et al. [1995] developed a statistical technique for power estimation
from the behavioral level which takes into account all components at the layout level
including interconnect. Landman and Rabaey [1996] developed activity-sensitive
architectural power analysis approach for execution units in ASIC designs. Finally,
in a series of papers it has been established that the power consumption of modern
programmable processors is directly proportional to the number of operations,
regardless of the mix of operations being executed [Lee et al. 1996; Tiwari
et al. 1994].
Transformations have been widely used at all levels of abstraction in the synthesis
process, e.g. [Dey et al. 1992]. However, there is a strong experimental evidence
that they are most effective at the highest levels of abstractions, such as system
and in particular behavioral synthesis. Transformations only received widespread
attention in high level synthesis [Ku and Micheli 1992; Potkonjak and Rabaey 1992;
Walker and Camposano 1991].
Comprehensive reviews of the use of transformations in parallelizing compilers,
state-of-the-art general purpose computing environments, and VLSI DSP design are given
in [Banerjee et al. 1993], [Bacon et al. 1994], and [Parhi 1995], respectively. The
approaches for transformation ordering can be classified in seven groups: local
(peephole) optimization, static scripts, exhaustive search-based "generate and test"
methods, algebraic approaches, probabilistic search techniques, bottleneck removal
methods, and enabling-effect based techniques.
Probably the most widely used technique for ordering transformations is local
(peephole) optimization [Tanenbaum et al. 1982], where a compiler considers only
a small section of code at a time, iteratively and locally applying all available
transformations one by one. The advantages of the approach are that it is fast and
simple to implement. However, performance is rarely high, and is usually inferior
to that of other approaches.
Another popular technique is a static approach to transformations ordering where
their order is given a priori, most often in the form of a script [Ullman 1989]. Script
development is based on experience of the compiler/synthesis software developer.
This method has at least three drawbacks: it is a time consuming process which
involves a lot of experimentation on random examples in an ad-hoc manner, any
knowledge about the relationship among transformations is only implicitly used,
and the quality of the solution is often relatively low for programs/designs which
have different characteristics than the ones used for the development of the script.
The most powerful approach to transformation ordering is enumeration-based
"generate and test" [Massalin 1987]. All possible combinations of transformations
are considered for a particular compilation and the best one is selected using
branch-and-bound or dynamic programming algorithms. The drawback is the large run
time, often exponential in the number of transformations.
Another interesting approach is to use a mathematical theory behind the ordering
of some transformations. However, this method is limited to only several linear loop
transformations [Wolf and Lam 1991]. Simulated annealing, genetic programming,
and other probabilistic techniques in many situations provide a good trade-off between
the run time and the quality of solution when little or no information about
the topology of the solution space is available. Recently, several probabilistic search
techniques have been proposed for ordering of transformations in both compiler and
behavioral synthesis literature. For example, backward-propagation-based neural
network techniques were used for developing a probabilistic approach to the application
of transformations in compilers for parallel computers [Fox and Koller
1989] and approaches which combine both simulated annealing-based probabilistic
and local heuristic optimization mechanism were used to demonstrate significant
reductions in area and power [Chandrakasan et al. 1995].
In behavioral and logic synthesis several bottleneck identification and elimination
approaches for ordering of transformations have been proposed [Dey et al. 1992;
Iqbal et al. 1993]. This line of work has been mainly addressing the throughput
and latency optimization problems, where the bottlenecks can be easily identified
and well quantified. Finally, the idea of enabling and disabling transformations has
been recently explored in a number of compilation [Whitfield and Soffa 1990] and
high level synthesis papers [Potkonjak and Rabaey 1992; Srivastava and Potkonjak
1996]. Using this idea several very powerful transformations scripts have been
developed, such as one for maximally and arbitrarily fast implementation of linear
computations [Potkonjak and Rabaey 1992], and joint optimization of latency and
throughput for linear computations [Srivastava and Potkonjak 1994]. Also, the
enabling mechanism has been used as a basis for several approaches for ordering of
transformations for optimization of general computations [Huang and Rabaey 1994].
The key advantage of this class of approaches is related to intrinsic importance and
power of enabling/disabling relationship between a pair of transformations.
Transformations have been used for optimization of a variety of design and program
metrics, such as throughput, latency, area, power, permanent and temporal
fault-tolerance, and testability. Interestingly, the power of transformations is most
often focused on secondary metrics, such as parallelism, instead of on primary
metrics, such as the number of operations.
In the compiler domain, constant and copy propagation and common subexpression
elimination are often used. It can be easily shown that the constant propagation
problem is undecidable, when the computation has conditionals [Kam and Ullman
1977]. The standard procedure to address this problem is to use so called conservative
algorithms. Those algorithms do not guarantee that all constants will be
detected, but that each data declared constant is indeed constant over all possible
executions of the program. A comprehensive survey of the most popular constant
propagation algorithms can be found in [Wegman and Zadeck 1991].
Parhi and Messerschmitt [1991] presented optimal unfolding of linear computations
in DSP systems. Unfolding results in simultaneous processing of consecutive
iterations of a computation. Potkonjak and Rabaey [1992] addressed the minimization
of the number of multiplications and additions in linear computations in
their maximally fast form so that the throughput is preserved. Potkonjak et al.
[1996] presented a set of techniques for minimization of the number of shifts and
additions in linear computations. Sheliga and Sha [1994] presented an approach for
minimization of the number of multiplications and additions in linear computations.
Srivastava and Potkonjak [1996] developed an approach for the minimization of
the number of operations in linear computations using unfolding and the application
of the maximally fast procedure. A variant of their technique is used in "conquer"
phase of our approach. Our approach is different from theirs in two respects.
First, their technique can handle only very restricted computations which are linear,
while our approach can optimize arbitrary computations. Second, our approach
outperforms or performs at least as well as their technique for linear computations.
4. SINGLE PROGRAMMABLE PROCESSOR IMPLEMENTATION: MINIMIZING THE
NUMBER OF OPERATIONS
The global flow of the approach is presented in subsection 4.1. The strategy is based
on divide-and-conquer optimization followed by post optimization step, merging of
divided sub parts which is explained in subsection 4.2. Finally, subsection 4.3
provides a comprehensive example to illustrate the strategy.
4.1 Global Flow Of the Approach
The core of the approach is presented in the pseudo-code of Figure 1. The rest of
this subsection explains the global flow of the approach in more detail.
Decompose a computation into strongly connected components (SCCs);
Merge any adjacent trivial SCCs into a sub part;
Use pipelining to isolate the sub parts;
For each sub part
    Minimize the number of delays using retiming;
    If (the sub part is linear)
        Apply optimal unfolding;
    Else
        Apply unfolding after the isolation of nonlinear operations;
Merge linear sub parts to further optimize;
Schedule merged sub parts to minimize memory usage;
Fig. 1. The core of the approach to minimize the number of operations for general DSP computations
The first step of the approach is to identify the computation's strongly connected
components(SCCs), using the standard depth-first search-based algorithm [Tarjan
1972] which has a low order polynomial-time complexity. For any pair of operations
A and B within an SCC, there exist both a path from A to B and a path from B
to A. An illustrated example of this step is shown in Figure 2. The graph formed
by all the SCCs is acyclic. Thus, the SCCs can be isolated from each other using
pipeline delays, which enables us to optimize each SCC separately. The inserted
pipeline delays are treated as inputs or outputs to the SCC. As a result, every
output and state in an SCC depend only on the inputs and states of the SCC.
Fig. 2. An illustrated example of the SCC decomposition step (the figure distinguishes additions,
constant multiplications, variable multiplications, and functional delays (states), and marks the
strongly connected components)
Thus, in this sense, the SCC is isolated from the rest of the computation and it
can be optimized separately. In a number of situations our technique is capable
of partitioning a nonlinear computation into partitions which consist of only linear
computations. Consider for example a computation which consists of two strongly
connected components SCC_1 and SCC_2. SCC_1 contains as operations only additions
and multiplications with constants. SCC_2 contains as operations only max operations
and additions. Obviously, since the computation has additions, multiplications
with constants, and max operations, it is nonlinear. However, after applying our
technique of logical separation using pipeline states we have two parts which are
linear. Note that this isolation is not affected by unfolding. We define an SCC
with only one node as a trivial SCC. For trivial SCCs unfolding fails to reduce the
number of operations. Thus, any adjacent trivial SCCs are merged together before
the isolation step, to reduce the number of pipeline delays used.
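The decomposition step can be sketched as follows. For brevity this Python sketch uses Kosaraju's two-pass algorithm rather than the Tarjan [1972] algorithm cited above (both run in linear time), and the graph is a toy example of our own:

```python
from collections import defaultdict

def sccs(nodes, edges):
    """Strongly connected components of a dataflow graph (Kosaraju's algorithm)."""
    adj, radj = defaultdict(list), defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        radj[v].append(u)
    seen = set()

    def dfs(u, graph, out):
        # Iterative DFS appending nodes in postorder.
        seen.add(u)
        stack = [(u, iter(graph[u]))]
        while stack:
            node, it = stack[-1]
            advanced = False
            for w in it:
                if w not in seen:
                    seen.add(w)
                    stack.append((w, iter(graph[w])))
                    advanced = True
                    break
            if not advanced:
                stack.pop()
                out.append(node)

    order = []
    for u in nodes:
        if u not in seen:
            dfs(u, adj, order)
    seen = set()
    comps = []
    for u in reversed(order):      # process in reverse finish order
        if u not in seen:
            comp = []
            dfs(u, radj, comp)     # one reverse-graph DFS per SCC
            comps.append(set(comp))
    return comps

# Toy dataflow graph: two nontrivial SCCs {1,2} and {3,4} and a trivial SCC {5}.
comps = sccs([1, 2, 3, 4, 5], [(1, 2), (2, 1), (2, 3), (3, 4), (4, 3), (4, 5)])
print(sorted(sorted(c) for c in comps))  # prints [[1, 2], [3, 4], [5]]
```

The component graph returned here is acyclic, which is what allows the pipeline delays described above to isolate the sub parts.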
S[n+1] = A * S[n] + B * X[n]
Y[n] = C * S[n] + D * X[n]
where X, Y, and S are the input, output, and state vectors respectively,
and A, B, C, and D are constant coefficient matrices.
Fig. 3. State-space equations for linear computations
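For the dense case, the per-iteration operation counts before unfolding follow directly from the matrix dimensions in the Figure 3 form. The following small Python sketch (our illustration) counts them:

```python
def dense_op_counts(P, Q, R):
    """Multiplications and additions per iteration of a dense state-space
    computation with P inputs, Q outputs, and R states, assuming no
    0, 1, or -1 coefficients in A, B, C, and D."""
    rows = R + Q               # R next-state rows plus Q output rows
    terms = R + P              # each row combines all states and all inputs
    mults = rows * terms
    adds = rows * (terms - 1)  # summing k products needs k - 1 additions
    return mults, adds

print(dense_op_counts(P=1, Q=1, R=3))  # prints (16, 12)
```

These are the per-iteration counts that unfolding then amortizes over a batch of samples.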
The number of delays in each sub part is minimized using retiming, in polynomial
time, by the Leiserson-Saxe algorithm [Leiserson and Saxe 1991]. Note that a smaller
number of delays requires a smaller number of operations, since both the next
states and the outputs depend on the previous states. SCCs are further classified
as either linear or nonlinear. Linear computations can be represented using the
state-space equations in Figure 3. Minimization of the number of operations for
linear computations is NP-complete [Sheliga and Sha 1994]. We have adopted the
approach of Srivastava and Potkonjak [1996] for the optimization of linear sub parts,
which uses unfolding and the maximally fast procedure [Potkonjak and Rabaey
1992]. We note that instead of the maximally fast procedure, the ratio analysis of
Sheliga and Sha [1994] can be used. Srivastava and Potkonjak [1996] provided a
closed-form formula for the optimal unfolding factor under the assumption of
dense linear computations; the formula is summarized in Figure 4. For sparse linear
computations, they proposed a heuristic which continues to unfold until there
is no improvement. We have made this simple heuristic more efficient with binary
search, based on the unimodality of the number of operations as a function of the
unfolding factor [Srivastava and Potkonjak 1996].
Fig. 4. Closed-form formulas for the optimal unfolding factor and for the resulting numbers of
multiplications and additions of the i times unfolded system, for a dense linear computation with
P inputs, Q outputs, and R states [Srivastava and Potkonjak 1996].
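The binary search over the unfolding factor can be sketched as follows; the cost function in the example is a hypothetical unimodal stand-in for the per-sample operation count:

```python
def best_unfolding(cost, max_factor):
    """Binary search for the minimizer of a unimodal per-sample cost
    function cost(i) over unfolding factors i in [0, max_factor]."""
    lo, hi = 0, max_factor
    while lo < hi:
        mid = (lo + hi) // 2
        if cost(mid) <= cost(mid + 1):  # minimum is at mid or to its left
            hi = mid
        else:                           # still descending: look right
            lo = mid + 1
    return lo

# Hypothetical unimodal cost with its minimum at unfolding factor 5:
print(best_unfolding(lambda i: (i - 5) ** 2 + 3, max_factor=50))  # prints 5
```

Compared with the linear scan that stops at the first non-improving factor, this needs only a logarithmic number of cost evaluations.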
Fig. 5. An example of isolating nonlinear operations from a 2 times unfolded nonlinear sub part
(the figure shows iterations i, i+1, and i+2)
When a sub part is classified as nonlinear, we apply unfolding after the isolation of
nonlinear operations. All nonlinear operations are isolated from the sub part so that
the remaining linear sub parts can be optimized by the maximally fast procedure.
All arcs from nonlinear operations to the linear sub parts are considered as inputs
to the linear sub parts, and all arcs from linear sub parts to the nonlinear operations
are considered as outputs from the linear sub parts. The process is illustrated in
Figure 5: arcs entering the unfolded linear sub part are considered to be its inputs
and arcs leaving it are considered to be its outputs. We observe that if
every output and state of the nonlinear sub part depend on nonlinear operations,
then unfolding with the separation of nonlinear operations is ineffective in reducing
the number of operations.
Fig. 6. A motivational example for sub part merging
Sometimes it is beneficial to decompose a computation into larger sub parts than
SCCs. We consider an example given in Figure 6. Each node represents a sub
part of the computation. We make the following assumptions only to clarify and
simplify the presentation of this example. We stress
here that the assumptions are not necessary for our approach. Assume that each
sub part is linear and can be represented by state-space equations in Figure 3. Also
assume that every sub part is dense, which means that every output and state in
a sub part are linear combinations of all inputs and states in the sub part with
no 0, 1, or -1 coefficients. The number inside a node is the number of delays or
states in the sub part. Assume that when there is an arc from a sub part X to
a sub part Y, every output and state of Y depends on all inputs and states of X.
Separately optimizing SCCs P1 and P2 in Figure 6 costs 211 operations, by the
formula in Figure 4. On the other hand, optimizing the entire computation entails
only 63.67 operations. The reason why separate optimization does not perform well
in this example is that there are too many intermediate outputs from SCC P1 to
SCC P2. This observation leads us to an approach of merging sub parts for further
reducing the number of operations. Since it is worthwhile to explain the sub part
merging problem in detail, the next subsection is devoted to the explanation of the
problem and our heuristic approaches.
Since the sub parts of a computation are unfolded separately by different unfolding
factors, we need to address the problem of scheduling the sub parts. They should
be scheduled so that memory requirements for code and data are minimized. We
observe that the unfolded sub parts can be represented by multi-rate synchronous
dataflow graph [Lee and Messerschmitt 1987] and the work of [Bhattacharyya et al.
1993] can be directly used.
Note that the approach is in particular useful for such architectures that require
high locality and regularity in computation because it improves both locality and
regularity of computation by decomposing into sub parts and using the maximally
fast procedure. Locality in a computation relates to the degree to which a computation
has natural clusters of operations while regularity in a computation refers
to the repeated occurrence of the computational patterns such as a multiplication
followed by an addition [Guerra et al. 1994; Mehra and Rabaey 1996].
4.2 Subpart Merging
Initially, we only consider merging of linear SCCs. When two SCCs are merged,
the resulting sub part does not form an SCC. Thus, in general, we must consider
merging of any adjacent arbitrary sub parts. Suppose we consider merging of sub
parts i and j. The gain GAIN(i, j) of merging sub parts i and j can be computed as
GAIN(i, j) = COST(i) + COST(j) - COST(i, j), where COST(i) is the
number of operations for sub part i and COST(i, j) is the number of operations
for the merged sub part of i and j. To compute the gain, COST(i, j) must be
computed, which requires the constant coefficient matrices A, B, C, and D for the
merged sub part of i and j. It is easy to construct the matrices using depth-first
search [Tarjan 1972].
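The gain computation itself is a one-liner. In this Python sketch, cost is an assumed callback returning the optimized operation count of a (merged) sub part, and the numbers are purely illustrative (they are not the Figure 6 values):

```python
def merge_gain(cost, i, j):
    """GAIN(i, j) = COST(i) + COST(j) - COST(i, j): operations saved by
    optimizing sub parts i and j together instead of separately."""
    return cost((i,)) + cost((j,)) - cost((i, j))

# Hypothetical optimized operation counts for two sub parts and their merge:
costs = {("a",): 120.0, ("b",): 91.0, ("a", "b"): 63.67}
print(merge_gain(costs.get, "a", "b"))  # positive gain: merging pays off
```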
Fig. 7. The i times unfolded state-space equations
Fig. 8. Closed-form formula for the optimal unfolding factor i_opt with grouping of outputs and
states; if two outputs depend on the same set of inputs and states, they are in the same group,
and the same holds for states.
The i times unfolded system can be represented by the state-space equations in
Figure 7. From the equations, the total number of operations can be computed
for the i times unfolded sub part as follows. Let N(*, i) and N(+, i) denote the number
of multiplications and the number of additions for the i times unfolded system,
respectively. The resulting number of operations per sample is (N(*, i) + N(+, i)) / (i + 1),
because the i times unfolded system uses a batch of i + 1 input samples to generate a batch of i + 1
output samples. We continue to unfold until no improvement is achieved. If there
are no coefficients of 1 or -1 in the matrices A, B, C, and D, then the closed-form
formulas for the optimal unfolding factor i_opt and for the number of operations of
the i times unfolded system are provided in Figure 8.
While (there is improvement)
For all possible merging candidates,
Compute the gain;
Merge the pair with the highest gain;
Fig. 9. Pseudo-code of a greedy heuristic for sub part merging
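A direct Python rendering of the Figure 9 heuristic follows; the cost table and adjacency predicate below are hypothetical stand-ins for the optimizer's COST function and the sub part graph:

```python
def greedy_merge(parts, adjacent, cost):
    """Greedy sub part merging (Figure 9): repeatedly merge the adjacent
    pair with the highest positive gain until no merge improves the total."""
    parts = list(parts)
    while True:
        best_pair, best_gain = None, 0.0
        for a in range(len(parts)):
            for b in range(a + 1, len(parts)):
                if not adjacent(parts[a], parts[b]):
                    continue
                gain = cost(parts[a]) + cost(parts[b]) - cost(parts[a] | parts[b])
                if gain > best_gain:
                    best_pair, best_gain = (a, b), gain
        if best_pair is None:       # no merge with positive gain remains
            return parts
        a, b = best_pair
        merged = parts[a] | parts[b]
        parts = [p for k, p in enumerate(parts) if k not in (a, b)] + [merged]

# Hypothetical operation counts: merging {1} and {2} saves 5 operations,
# while folding in {3} as well would cost more than it saves.
edges = {(1, 2), (2, 3)}
adjacent = lambda p, q: any((u, v) in edges or (v, u) in edges for u in p for v in q)
costs = {frozenset({1}): 10.0, frozenset({2}): 10.0, frozenset({3}): 10.0,
         frozenset({1, 2}): 15.0, frozenset({2, 3}): 18.0, frozenset({1, 2, 3}): 40.0}
merged = greedy_merge([frozenset({n}) for n in (1, 2, 3)], adjacent, costs.__getitem__)
print(sorted(sorted(p) for p in merged))
```

In the real flow, cost would invoke the unfolding optimization of Section 4.1 on the merged sub part rather than a precomputed table.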
Generate a starting solution S.
Set the best solution S* = S.
Determine a starting temperature T.
While not yet frozen,
    While not yet at equilibrium for the current temperature,
        Choose a random neighbor S' of the current solution.
        If S' improves on S, accept it (and update S* if needed).
        Else
            Generate a random number r uniformly from [0, 1].
            Accept S' if r < e^(-(cost(S') - cost(S)) / T).
    Update the temperature T.
Return the best solution S*.
Fig. 10. Pseudo-code for the simulated annealing algorithm for sub part merging
Now, we can evaluate possible merging candidates. We propose two heuristic
algorithms for sub part merging. The first heuristic is based on a greedy optimization
approach. The pseudo-code is provided in Figure 9. The algorithm is simple: until
there is no improvement, merge the pair of sub parts which produces the highest
gain.
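As a sketch, the greedy loop of Figure 9 can be written as follows. The chain representation of sub parts and the `gain` callback are illustrative assumptions; in the paper, gains come from the COST comparison above and the candidates are adjacent sub parts of the computation.

```python
def greedy_merge(parts, gain):
    # parts: sub parts in a chain; gain(a, b): estimated reduction in
    # operations from merging adjacent sub parts a and b (may be < 0).
    improved = True
    while improved:
        improved = False
        candidates = [(gain(a, b), i)
                      for i, (a, b) in enumerate(zip(parts, parts[1:]))]
        if candidates:
            best_gain, i = max(candidates)
            if best_gain > 0:              # merge the pair with highest gain
                parts[i:i + 2] = [parts[i] + parts[i + 1]]
                improved = True
    return parts
```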
The other heuristic algorithm is based on a general combinatorial optimization
technique known as simulated annealing [Kirkpatrick et al. 1983]. The pseudo-code
is provided in Figure 10. The actual implementation details are presented
for each of the following areas: the cost function, the neighbor solution generation,
the temperature update function, the equilibrium criterion and the frozen criterion.
Firstly, the number of operations for the entire given computation has been used
as the cost function. Secondly, the neighbor solution is generated by the merging
of two adjacent sub parts. Thirdly, the temperature is updated by the function
T new = alpha * T old. For T > 200.0, alpha is chosen to be 0.1 so
that in the high temperature regime, where every new state has a very high chance of
acceptance, the temperature reduction occurs very rapidly. For 1.0 < T <= 200.0, alpha
is set to 0.95 so that the optimization process explores this promising region more
slowly. For T <= 1.0, alpha is set to 0.8 so that T is quickly reduced to converge to a local
minimum. The initial temperature is set to 4,000,000. Fourthly, the equilibrium
criterion is specified by the number of iterations of the inner loop, which is set
to 20 times the number of sub parts. Lastly, the frozen criterion is given by the
temperature: if the temperature falls below 0.1, the simulated annealing algorithm stops.

Fig. 11. An explanatory example
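The piecewise cooling schedule just described can be written directly; note that the mid-range bound 1.0 < T <= 200.0 is inferred from the surrounding text rather than stated explicitly in the source.

```python
def next_temperature(t):
    # Cool very fast at high T (nearly every move is accepted there),
    # slowly in the promising mid range, and fast again near freezing
    # so the search converges to a local minimum.
    if t > 200.0:
        alpha = 0.1
    elif t > 1.0:
        alpha = 0.95
    else:
        alpha = 0.8
    return alpha * t
```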
Both heuristics performed equally well on all the examples, and the run times
for both are very small because the examples have only a few sub parts. We have used
both greedy and simulated annealing based heuristics for generating experimental
results and they produced exactly the same results.
Fig. 12. A simple example of # operations calculation
4.3 Explanatory Example: Putting It All Together
We illustrate the key ideas of our approach for minimizing the number of operations
by considering the computation of Figure 11. We use the same assumptions made
for the example in Figure 6.
The number of operations per input sample is initially 2081. (We illustrate how
the number of operations is calculated in a maximally fast way [Potkonjak and
Rabaey 1992] using a simple linear computation with one input X and one output Y,
which is described in Figure 12.) Using the technique of [Srivastava
and Potkonjak 1996] which unfolds the entire computation, the number can be
reduced to 725 with an unfolding factor of 12. Our approach optimizes each sub
part separately. This separate optimization is enabled by isolating the sub parts
using pipeline delays. Figure 13 shows the computation after the isolation step.
Since every sub part is linear, unfolding is performed to optimize the number of
operations for each sub part. The sub parts cost 120.75, 53.91,
114.86, 129.75, and 103.0 operations per input sample with unfolding factors 3, 10,
6, 7, and 2, respectively. The total number of operations per input sample for
the entire computation is 522.27. We now apply SCC merging to further reduce
the number of operations. We first consider the greedy heuristic. The heuristic
considers merging of adjacent sub parts. Initially, the possible merging candidates
produce gains of -51.48, -112.06, -52.38, 122.87, and -114.92, respectively.
SCC P3 and SCC P4 are merged
with an unfolding factor of 22. In the next iteration, there are now 4 sub parts and
4 candidate pairs for merging, all of which yield negative gains. So, the heuristic
stops at this point. The total number of operations per input sample has further
decreased to 399.4.

Fig. 13. A motivational example after the isolation step

The simulated annealing heuristic produced the same solution for
this example. The approach has reduced the number of operations by a factor of
1.82 from the previous technique of [Srivastava and Potkonjak 1996], while it has
achieved the reduction by a factor of 5.2 from the initial number of operations.
For a single processor implementation, since both the technique of [Srivastava and
Potkonjak 1996] and our new method yield higher throughput than the original,
the supply voltage can be lowered to the extent that the extra throughput is
compensated by the loss in circuit speed due to the reduced voltage. If the initial
voltage is 3.3V, then our technique reduces power consumption by a factor of 26.0
with the supply voltage of 1.48V while the technique of [Srivastava and Potkonjak
1996] reduces it by a factor of 10.0 with the supply voltage of 1.77V.
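The quoted power factors can be reproduced with a short calculation, under the assumption that energy per input sample scales as (operations per sample) × V² with a fixed effective capacitance per operation:

```python
def power_reduction_factor(ops_old, ops_new, v_old, v_new):
    # Energy per sample ∝ N_ops * C_eff * V^2; at a fixed sample rate
    # the power reduction factor is the old-to-new ratio of that product.
    return (ops_old * v_old ** 2) / (ops_new * v_new ** 2)
```

For instance, power_reduction_factor(2081, 399.4, 3.3, 1.48) evaluates to about 25.9, matching the factor of 26.0 reported above, and power_reduction_factor(2081, 725, 3.3, 1.77) gives about 10.0.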
The scheduling of the unfolded sub parts is performed to generate the minimum
code and data memory schedule. The schedule period is the least common multiple
of the (unfolding factor + 1) values, which is 3036. Let P3,4 denote the merged sub part of
P3 and P4. While a simple-minded schedule (759P1, 276P2, 132P3,4, 1012P5) to
minimize the code size ignoring loop overheads generates 9108 units of data memory
requirement, a schedule (759P1, 4(69P2, 33P3,4, 253P5)), which minimizes the data
memory requirement among the schedules minimizing the code size, generates 4554
units of data memory requirement.
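The schedule period is a plain least-common-multiple over the batch sizes, which can be sketched as:

```python
from math import lcm  # variadic math.lcm requires Python 3.9+

def schedule_period(unfolding_factors):
    # An i-times unfolded sub part consumes samples in batches of
    # (i + 1), so every batch size must divide the schedule period.
    return lcm(*(i + 1 for i in unfolding_factors))
```

For the factors 3, 10, 22, and 2 of the merged example this gives lcm(4, 11, 23, 3) = 3036, and P1 executes 3036 / 4 = 759 times per period, as in the schedules above.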
5. MULTIPLE PROGRAMMABLE PROCESSORS IMPLEMENTATION
When multiple programmable processors are used, potentially more savings in
power consumption can be obtained. We summarize the assumptions made in
Section 2: (i) processors are homogeneous, (ii) inter-processor communication does
not cost any time and hardware, (iii) effective switched capacitance increases linearly
with the number of processors, (iv) both addition and multiplication take one
clock cycle, and (v) supply voltage can not be lowered below threshold voltage V t ,
for which we use several different values between 0.06V and 1.1V. Based on these
assumptions, using k processors increases the throughput k times when there is
enough parallelism in the computation, while the effective switched capacitance increases
k times as well. In all the real-life examples considered, sufficient parallelism
actually existed for the numbers of processors that we used.
Fig. 14. Closed-form condition for sufficient parallelism when using k processors for a dense linear
computation with R states
We observe that the next states, i.e., the feedback loops, can be
computed in parallel. Note that the maximally fast procedure by [Potkonjak and
Rabaey 1992] evaluates a linear computation by first doing the constant-variable
multiplications in parallel, and then organizing the additions as a maximally balanced
binary tree. Since all the next states are computed in a maximally fast
procedure, at the bottom of the binary computation tree there exists more
parallelism. All other operations not in the feedback loops can be computed in parallel
because they can be separated by pipeline delays. As the number of processors
becomes larger, more operations outside the feedback loops are needed to
provide sufficient parallelism. For dense linear computations, we provide the closed-form
condition for sufficient parallelism when using k processors in Figure 14. We
note that although the formulae were derived for the worst case scenario, the required
number of operations outside the feedback loops is small for the range of
the number of processors that we have tried in the experiment. There exist more
operations outside feedback loops than are required for full parallelism in all the
real-life examples we have considered.
Now one can reduce the voltage so that the clock frequency of all k processors
is reduced by a factor of k. The average power consumption of k processors
is reduced from that of a single processor by a factor of (V 1 / V k)^2, where V k is the
scaled supply voltage for the k-processor implementation, and V k satisfies the equation
k (V 1 - V t)^2 / V 1 = (V k - V t)^2 / V k [Chandrakasan et al. 1992]. From this observation it is
always beneficial to use more processors in terms of power consumption, with the
following two limitations: (i) the amount of parallelism available limits the improvement
in throughput, and the critical path of the computation determines the maximum
achievable throughput; and (ii) when the supply voltage approaches close to the threshold
voltage, the improvement in power consumption becomes so small that the cost of
adding a processor is not justified. With this in mind, we want to find the number
of processors which minimizes power consumption cost-effectively in both standard
CMOS technology and ultra low power technology.
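These relations can be checked numerically. The sketch below assumes the usual first-order model in which clock frequency scales as (V - Vt)^2 / V, consistent with the scaling equation above; `scaled_voltage` solves for V k by bisection, and `pn_product` reproduces, e.g., the Table I entry 0.90 for V t = 1.1, V 1 = 5.0, N = 2.

```python
def scaled_voltage(k, v1, vt, tol=1e-9):
    # Solve (Vk - Vt)^2 / Vk = (1/k) * (V1 - Vt)^2 / V1 for Vk in (Vt, V1].
    # The left-hand side is increasing in Vk, so bisection applies.
    target = (v1 - vt) ** 2 / v1 / k
    lo, hi = vt, v1
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if (mid - vt) ** 2 / mid > target:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

def pn_product(n, v1, vt):
    # P is power normalized to the single-processor implementation:
    # n processors running at Vn give P = (Vn / V1)^2, hence PN = n * P.
    vn = scaled_voltage(n, v1, vt)
    return n * (vn / v1) ** 2
```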
Since the cost of programmable processors is high, and especially since the cost of
processors on ultra low power platforms such as SOI is very high [El-Kareh et al.
1995; Ipposhi et al. 1995], guidance for cost-effective design is important. We
need a measure to differentiate between cost-effective and cost-ineffective solutions.
We propose a PN product, where P is the power consumption normalized to that
of the optimized single processor implementation and N is the number of processors
used.

                            number of processors N
V t   V init   V 1      2     3     4     5     6     7     8     9    10
1.1   5.0      5.0    0.90  0.93  0.98  1.04  1.10  1.17  1.24  1.30  1.37
               4.0    1.00  1.08  1.17  1.28  1.38  1.49  1.59  1.70  1.80
               3.0    1.14  1.33  1.51  1.70  1.88  2.06  2.24  2.42  2.60
               2.0    1.42  1.82  2.22  2.60  2.98  3.35  3.71  4.08  4.44
0.7   3.3      3.3    0.89  0.91  0.95  1.00  1.06  1.12  1.19  1.25  1.31
               2.0    1.12  1.28  1.45  1.62  1.79  1.96  2.12  2.28  2.45
               1.0    1.63  2.22  2.80  3.38  3.94  4.50  5.05  5.60  6.15
0.3   1.3      1.3    0.92  0.96  1.02  1.08  1.16  1.23  1.30  1.38  1.45
               0.7    1.24  1.49  1.75  2.00  2.24  2.48  2.72  2.95  3.19

Table I. The values of PN products with respect to the number of processors (N = 2 to 10)
for various combinations of the initial voltage V init, the scaled voltage for single
processor V 1, and the threshold voltage V t.

The smaller the PN product is, the more cost-effective the solution is. If PN
is smaller than 1.0, using N processors has decreased the power consumption by
a factor of more than N. How many processors the implementation should use depends
on the power consumption requirement and the cost budget for the implementation.
Table I provides the values of PN products with respect to the number
of processors used for various combinations of the initial voltage V init , the scaled
voltage for single processor V 1 , and the threshold voltage V t . V init is the initial
voltage for the implementation before optimization. We note that PN products
monotonically increase with respect to the number of processors.
Design           Init. Ops   [Sri96]   New Method   IF From   RP From   IF From    RP From
                                                    [Sri96]   [Sri96]   Init. Ops  Init. Ops
dist                    48      47.3         36.4      1.30      23.0       1.32       24.2
chemical
modem                  213       213       148.83      1.43      30.1       1.43       30.1
GE controller          180       180       105.26      1.71      41.5       1.71       41.5
APCM receiver         2238       N/A      1444.19       N/A       N/A       1.55       35.4
Audio Filter 1         154       N/A         76.0       N/A       N/A       2.03       50.7
Audio Filter 2         228       N/A         92.0       N/A       N/A       2.48       59.7
Filter 1               296       N/A       157.14       N/A       N/A       1.88       46.8
Filter 2               398       N/A        184.5       N/A       N/A       2.16       53.7

Table II. Minimizing the number of operations for real-life examples; IF - Improvement Factor,
RP - Reduction Percentage, N/A - Not Applicable, [Sri96] - [Srivastava and Potkonjak 1996]
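The IF and RP columns of Table II follow from the raw operation counts by a one-line calculation:

```python
def improvement(old_ops, new_ops):
    # improvement factor (IF) and reduction percentage (RP)
    factor = old_ops / new_ops
    reduction_pct = 100.0 * (1.0 - new_ops / old_ops)
    return factor, reduction_pct
```

For dist, improvement(48, 36.4) gives roughly (1.32, 24.2) and improvement(47.3, 36.4) gives roughly (1.30, 23.0), matching the table row.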
From Table I, we observe that cost-effective solutions usually use only a few
processors in all the cases considered, on both the standard CMOS and ultra low
power platforms. We also observe that if the voltage reduction is high for the single
Design          V init   V t   V new   PRF
dist              5.0    1.1    3.76   2.33
                  3.3    0.7    2.70   1.96
                  1.3    0.3    1.10   1.84
chemical          5.0    1.1    3.61   2.65
                  3.3    0.7    2.61   2.21
                  1.3    0.3    1.07   2.04
DAC               5.0    1.1    3.81   2.72
                  3.3    0.7    2.50   2.75
                  1.3    0.3    1.00   2.69
modem             5.0    1.1    4.02   2.21
                  3.3    0.7    2.65   2.23
                  1.3    0.3    1.05   2.19
GE controller     5.0    1.1    3.65   3.21
                  3.3    0.7    2.39   3.25
                  1.3    0.3    0.96   3.16

Table III. Minimizing power consumption on single programmable processor for linear examples;
PRF - Power Reduction Factor
processor case, then there is not much room to further reduce power consumption
by using more processors.
Based on those observations, we have developed our strategy for the multiple
processor implementation. The first step is to minimize power consumption for
single processor implementation using the proposed technique in Section 4. The
second step is to increase the number of processors as long as the PN product stays below
the given maximum value. The maximum value is determined based on the power
consumption requirement and the cost budget for the implementation. The strategy
produces solutions with only a few processors, in many cases a single processor, for
all the real-life examples: because our method for the minimization of the number
of operations significantly reduces the number of operations, and in turn the supply
voltage, for the single processor implementation, adding more processors does
not usually reduce power consumption cost-effectively. Our method achieves cost-effective
solutions with very low power penalty compared to the solutions which
only optimize power consumption without considering hardware cost.
6. EXPERIMENTAL RESULTS
Our set of benchmark designs includes all the benchmark examples used in [Srivastava
and Potkonjak 1996] as well as the following typical portable DSP, video,
communication, and control applications: DAC - 4 stage NEC digital to analog
converter (DAC) for audio signals; modem - 2 stage NEC modem; GE controller
- 5-state GE linear controller; APCM receiver - Motorola's adaptive pulse code
Design           V init   V t   V new   PRF
APCM receiver      5.0    1.1    3.85   2.62
                   3.3    0.7    2.53   2.64
                   1.3    0.3    1.01   2.58
Audio Filter 1     5.0    1.1    3.34   4.54
                   3.3    0.7    2.03   4.43
                   1.3    0.3    0.88   4.45
Audio Filter 2     5.0    1.1    3.03   6.76
                   3.3    0.7    1.85   6.54
                   1.3    0.3    0.8    6.58
Filter 1           5.0    1.1    3.45   3.97
                   3.3    0.7    2.26   4.03
                   1.3    0.3    0.91   3.90
Filter 2           5.0    1.1    3.24   5.15
                   3.3    0.7    2.12   5.24
                   1.3    0.3    0.85   5.04
VSTOL              5.0    1.1    3.43   4.10
                   3.3    0.7    2.25   4.15
                   1.3    0.3    0.90   4.02

Table IV. Minimizing power consumption on single programmable processor for nonlinear
examples; PRF - Power Reduction Factor
modulation receiver; Audio Filter 1 - analog to digital converter (ADC) followed
by 14-order cascade IIR filter; Audio Filter 2 - ADC followed by a cascade IIR
filter; Filter 1 - two ADCs followed by 10-order two dimensional (2D) IIR
filter; Filter 2 - two ADCs followed by 12-order 2D IIR filter; and VSTOL -
VSTOL robust observer structure aircraft speed controller. DAC, modem, and GE
controller are linear computations and the rest are nonlinear computations. The
benchmark examples from [Srivastava and Potkonjak 1996] are all linear, which
include ellip, iir5, wdf5, iir6, iir10, iir12, steam, dist, and chemical.
Table
II presents the experimental results of our technique for minimizing the
number of operations for real-life examples. The fifth and seventh columns of Table
II provide the improvement factors of our method from that of [Srivastava
and Potkonjak 1996] and from the initial number of operations, respectively. Our
method has achieved the same number of operations as that of [Srivastava and
Potkonjak 1996] for ellip, iir5, wdf5, iir6, iir10, iir12, and steam while it has reduced
the number of operations by 23.0% and 10.3% for dist and chemical, respectively. All
the examples from [Srivastava and Potkonjak 1996] are single-input single-output
linear computations, except dist and chemical which are two-inputs single-output
linear computations. Since the SISO linear computations are very small, there is
no room for improvement from [Srivastava and Potkonjak 1996].

Design          V init   V t   V new   PRF     N   PRF     N   PRF     N    PRF
dist              5.0    1.1    3.76   2.33    1   2.33    4   7.53    6    9.47
                  3.3    0.7    2.70   1.96    2   4.04    5   8.11    8   10.54
                  1.3    0.3    1.10   1.84    2   3.71    4   6.31    7    8.74
chemical          5.0    1.1    3.61   2.65    1   2.65    3   6.87    5    9.39
                  3.3    0.7    2.61   2.21    2   4.49    5   8.86    7   10.69
                  1.3    0.3    1.07   2.04    1   2.04    4   6.83    6    8.67
DAC               3.3    0.7    2.50   2.75    1   2.75    4   9.22    6   11.71
                  1.3    0.3    1.00   2.69    1   2.69    3   7.05    5    9.67
modem             5.0    1.1    4.02   2.21    2   4.45    4   7.56    7   10.45
                  3.3    0.7    2.65   2.23    2   4.56    5   9.07    7   10.97
                  1.3    0.3    1.05   2.19    1   2.19    4   7.22    6    9.13
GE controller     5.0    1.1    3.65   3.21    1   3.21    3   8.39    5   11.49
                  3.3    0.7    2.39   3.25    1   3.25    4  10.49    6   13.20
                  1.3    0.3    0.96   3.16    1   3.16    3   8.05    5   10.92

Table V. Minimizing power consumption on multiple processors for linear examples; PN T -
threshold PN product, N - # of processors, PRF - Power Reduction Factor (the three N/PRF
column pairs correspond to three increasing values of the threshold PN T)

Our
method has reduced the number of operations by an average factor of 1.77 (an average
of 43.5%) for the examples for which previous techniques are either ineffective or
inapplicable. Tables III and IV present the experimental results of our technique for
minimizing power consumption on a single programmable processor for real-life examples
on various technologies. Our method results in power consumption reduction
by an average factor of 3.58.
For multiple processor implementations, Tables V and VI summarize the experimental
results of our technique for minimizing power consumption. We define
threshold PN product PN T to be the value of the PN product at which we should stop
increasing the number of processors. When PN T is set so that the power reduction from
the addition of a processor must be greater than 2 to be cost effective, in almost
all cases the single processor solution is optimum. When PN T gets larger, the number
of processors used increases, but the solutions still use only a few processors which
result in an order of magnitude reduction in power consumption. All the results
clearly indicate the effectiveness of our new method.
7. CONCLUSION
We introduced an approach for power minimization using a set of compilation and
architectural techniques. The key technical innovation is a compilation technique
for minimization of the number of operations which synergistically uses several
Design           V init   V t   V new   PRF     N   PRF     N   PRF     N    PRF
APCM receiver      5.0    1.1    3.85   2.62    1   2.62    4   8.64    6   10.92
                   3.3    0.7    2.53   2.64    2   5.29    4   8.95    7   12.33
                   1.3    0.3    1.01   2.58    1   2.58    3   6.81    6   10.32
Audio Filter 1     5.0    1.1    3.34   4.54    1   4.54    3  11.12    4   13.22
                   3.3    0.7    2.03   4.43    1   4.43    2   7.99    4   12.38
                   1.3    0.3    0.88   4.45    1   4.45    2   8.07    4   12.56
Audio Filter 2     5.0    1.1    3.03   6.76    1   6.76    2  11.88    4   18.04
                   3.3    0.7    1.85   6.54    1   6.54    2  11.26    3   14.45
                   1.3    0.3    0.8    6.58    1   6.58    2  11.38    3   14.64
Filter 1           3.3    0.7    2.26   4.03    1   4.03    3  10.32    5   14.04
                   1.3    0.3    0.91   3.90    1   3.90    3   9.55    4   11.34
Filter 2           3.3    0.7    2.12   5.24    1   5.24    3  12.82    4   15.22
                   1.3    0.3    0.85   5.04    1   5.04    2   8.99    4   13.79
VSTOL              3.3    0.7    2.25   4.15    1   4.15    3  10.60    5   14.40
                   1.3    0.3    0.90   4.02    1   4.02    3   9.76    4   11.58

Table VI. Minimizing power consumption on multiple processors for nonlinear examples; PN T -
threshold PN product, N - # of processors, PRF - Power Reduction Factor (the three N/PRF
column pairs correspond to three increasing values of the threshold PN T)
transformations within a divide and conquer optimization framework. The new
approach not only deals with arbitrary computations, but also outperforms previous
techniques for limited computation types.
Furthermore, we investigated the coordinated impact of compilation techniques and
new ultra low power technologies on the number of processors which provides the optimal
trade-off of cost and power. The experimental results on a number of real-life
designs clearly indicate the effectiveness of all the proposed techniques and algorithms.
REFERENCES
Compiler transformations for high performance computing.
Automatic program parallelization.
A scheduling framework for minimizing memory requirements of multirate signal processing algorithms expressed as dataflow graphs.
Optimizing power using transformations.
Energy efficient programmable computation.
Design considerations and tools for low-voltage digital system design
Synthesis of low power DSP circuits using activity metrics.
Performance optimization of sequential circuits by eliminating retiming bottlenecks.
Silicon on insulator - an emerging high-leverage technology
Code generation by a generalized neural network.
Microarchitectural synthesis of performance-constrained
Scheduling of DSP programs onto multiprocessors for maximum throughput.
Maximizing the throughput of high performance DSP applications using behavioral transformations.
An advanced 0.5 mu m CMOS/SOI technology for practical ultrahigh-speed and low-power circuits
Critical path minimization using retiming and algebraic speedup.
Monotone data flow analysis frameworks.
Optimization by simulated annealing.
Synchronous dataflow.
Dataflow process networks.
Power analysis and minimization techniques for embedded dsp software.
Retiming synchronous circuitry.
A look at the smallest program.
Exploiting regularity for low-power design
Journal of VLSI Signal Processing
Static rate-optimal scheduling of iterative data-flow programs via optimum unfolding
Maximally fast and arbitrarily fast implementation of linear computations.
Multiple constant multi- plications: efficient and versatile framework and algorithms for exploring common subexpression elimination
Fast prototyping of data path intensive architectures.
Behavioral synthesis for low power.
Personal Communications.
Global node reduction of linear systems using ratio analysis.
Power conscious cad tools and methodologies: A perspective.
Transforming linear systems for joint latency and throughput optimization.
Power optimization in programmable processors and ASIC implementations of linear systems: Transformation-based approach
Using peephole optimization on intermediate code.
Depth first search and linear graph algorithms.
Power analysis of a 32-bit embedded microcontroller
In Asia and South Pacific Design Automation Conference
Power analysis of embedded software: a first step towards software power minimization.
Database and Knowledge-Base Systems
A Survey of High-level Synthesis Systems
ACM Transactions on Programming Languages
An approach to ordering optimizing transformations.
In ACM Symposium on Principles and Practice of Parallel Programming
A loop transformation theory and an algorithm to maximize parallelism.
Keywords: power consumption; data flow graphs; portable wireless DSP applications; architectural techniques; DSP computations; compilation; divide-and-conquer compilation
Approximate Timing Analysis of Combinational Circuits under the XBD0 Model

Abstract: This paper is concerned with approximate delay computation algorithms for
combinational circuits. As a result of intensive research in the early 90's, efficient
tools exist which can analyze circuits of thousands of gates in a few minutes, or even
in seconds, in many cases. However, the computation time of these tools is not so
predictable since the internal engine of the analysis is either a SAT solver or a
modified ATPG algorithm, both of which are just heuristic algorithms for an NP-complete
problem. Although they are highly tuned for CAD applications, there exists a class of
problem instances which exhibits the worst-case exponential CPU time behavior. In the
context of timing analysis, circuits with a high amount of reconvergence, e.g. C6288
of the ISCAS benchmark suite, are known to be difficult to analyze under sophisticated
delay models even with state-of-the-art techniques. For example [McGeer93] could not
complete the analysis of C6288 under the mapped delay model. To make timing analysis
of such corner case circuits feasible we propose an approximate computation scheme to
the timing analysis problem as an extension to the exact analysis method proposed in
[McGeer93]. Sensitization conditions are conservatively approximated in a selective
fashion so that the size of SAT problems solved during analysis is controlled.
Experimental results show that the approximation technique is effective in reducing
the total analysis time without losing accuracy for the cases where the exact approach
takes much time or cannot complete.

1 Introduction
During the late 80's and early 90's, significant progress [2, 8] was made
in the theory of exact gate-level timing analysis. In this, false
paths are correctly identified so that exact delays can be computed.
As the theory progressed, the efficiency and size limitation of actual
implementations of timing analysis tools were dramatically
improved [3, 8]. Although state-of-the-art implementations can
handle circuits composed of thousands of gates under mapped delay
models, it is evident that the current size limitation is far from
satisfactory for analyzing industrial-strength circuits. Furthermore,
even if they can handle large circuits, the computation time is often
prohibitively large especially when delay models are elaborate.
To alleviate this problem several researchers have proposed approximate
timing analysis algorithms. The goal is to compute a
conservative yet accurate enough approximation of true delays in
less computation time to make analysis of large circuits tractable.
Huang et al. [4, 6] proposed, as part of optimization techniques
used in exact analysis, a simple approximation heuristic, in which
a complex timed Boolean calculus expression at an internal node
is simplified to a new independent variable arriving at the latest
time referred to in the original expression. (This work was supported by SRC-97-DC-324.)
This simplification is
applied only when the number of terms in the Boolean calculus
expression exceeds a certain limit, to control the computational
complexity. Accuracy loss comes from the fact that the original
functional relationship is completely lost by the substitution. They
also investigated a more powerful approximation technique in [5],
in which each timed Boolean calculus formula is under- and over-approximated
by sums of literals and products of literals, respectively,
so that each sensitizability check, which is a satisfiability problem in
the exact analysis, can be performed conservatively in polynomial
time. Since this approximation is fairly aggressive to guarantee the
polynomial time complexity, estimated delays do not seem accurate
enough to be useful. Unfortunately their results, shown in [5], are
not clear about the accuracy of approximate delays. They merely
showed ratios of internal nodes whose delays match the exact delays
at the nodes. No result was shown on the accuracy of circuit delays.
More recently Yalcin et al. [11] proposed an approximation
technique, which utilizes user's knowledge about primary inputs.
They categorize each primary input either as data or control and
label all the internal nodes either data or control using a certain
rule. The sensitization condition at each node is then simplified
conservatively so that it becomes independent of the data variables.
The intuition behind this is that the delay of a circuit is most likely
determined by control signals while data signals have only minor
effects in the final delay. [11] shows experimentally that a dramatic
speed-up is possible without losing much accuracy for unit-delay
timing analysis based on static sensitization. Unfortunately this
sensitization criterion is known to underapproximate true delays,
i.e. it is not a safe criterion, which defeats the whole purpose of timing
analysis. More recently they confirmed that a similar speed-up
and accuracy can be achieved for a correct sensitization criterion
(the floating mode) under the unit-delay model [9]. Although an
application of the same technique to more sophisticated delay models
is theoretically possible, it is not clear whether their algorithm
can handle large circuits under those delay models. Moreover, their
CPU times for exact analysis are much worse than state-of-the-
art implementations available, which cancels some of the speed-up
since their speed-up is reported relative to this slower algorithm 1 .
In this paper we apply their idea of using data/control separation
to a state-of-the-art timing analysis technique [8] to design an approximate
algorithm. The sensitization criterion here is the XBD0
model [8], which is one of the well-accepted delay models shown
to be correct and accurate. In addition a novel technique to control
the complexity of the analysis is proposed. The combination of
these two ideas leads to a new approximation scheme, which for
1 One of the reasons why their exact algorithm is slower is that they try to represent
in BDD all the input minterms that activate the longest sensitizable delay while most
of the state-of-the-art techniques determine the delay without representing those input
minterms explicitly.
some extreme cases shows a speed-up of 70x, while maintaining
accuracy within the noise range.
This paper is organized as follows. Section 2 summarizes false
path analysis, which forms a basis of this work. We especially
focus on the technique proposed in [8]. Section 3 proposes two
approximation schemes and discusses how they can be selectively
applied to trade off accuracy and speed-up. Experimental results
are given in Section 4. Section 5 concludes the paper.
2 Preliminaries
In this section, we review sensitization theory for the false path
problem. Specifically, the theory developed in [8] is detailed below
since the analysis following this section is heavily based on this
particular theory.
2.1 Functional Delay Analysis
Functional delay analysis, or false path analysis, seeks to determine
when all the primary output signals of a Boolean network become
stable at their final values given maximum delays of each gate
and arrival times at the primary inputs. Since some paths may
never be sensitized, the stable time computed by functional delay
analysis can be earlier than the time computed by topological delay
analysis, thereby capturing the timing characteristic of the network
more accurately. Those paths along which signals never propagate
are called false paths.
The extended bounded delay-0 model [8], the XBD0 model, is
the delay model most commonly used in false path analysis. It is the
underlying model for the floating mode analysis [1] and viability
analysis [7]. Under the XBD0 model, each gate in a network has
a maximum positive delay and a minimum delay which is zero.
Sensitization analysis is done under the assumption that each gate
can take any delay between its maximum value and zero.
The core idea of [8] is to characterize recursively the set of all
input vectors that make the signal value of a primary output stable
to a constant by a given required time. Once these sets are identified
both for constants 0 and 1, one can compare these against the on-set
and the off-set of the primary output respectively to see if the
output is indeed stable for all input vectors by the required time.
The overall scenario of computing the true delay is to start by setting
the required time to the longest topological delay minus a small amount, and to
gradually decrease it until some input vector cannot make the output
stable by the required time. The next to the last required time gives
an approximation to the true arrival time at the output. This process
of guessing the next required time can be sped up and refined by
making use of a binary search.
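The required-time search can be sketched as a standard binary search; `stable_by` is a hypothetical callback standing for the check described above (one comparison of characteristic functions against the on-set/off-set per probe).

```python
def search_arrival_time(stable_by, lo, hi, eps=0.5):
    # Precondition: stable_by(hi) holds and stable_by(lo) fails.
    # Shrink [lo, hi] to width eps and return the earliest required
    # time proven safe; the true arrival time lies in (lo, hi].
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if stable_by(mid):
            hi = mid
        else:
            lo = mid
    return hi
```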
Let us illustrate how we can compute these sets. Let n and d_n be a node (gate) in a Boolean network N and the maximum delay of the node n, respectively 2 . Let χ^t_{n,v} be the characteristic function of the set of input minterms under which the output of the node n becomes stable to a constant v ∈ {0, 1} by time t. Let f_n be the local functionality of the node n in terms of the immediate fanins of n. For ease of explanation, let n = AND(n1, n2) be a two-input AND gate. It is clear from the functionality of the AND gate that to set n to a constant 1 by time t, both of the fanins of n are required to be stable at 1 by time t - d_n. This is equivalent to

    χ^t_{n,1} = χ^{t-d_n}_{n1,1} · χ^{t-d_n}_{n2,1}
Note that the two χ functions for the fanins are AND'ed to take the intersection of the two sets. Similarly, to set n to a constant 0 by time t, at least one of the fanins must be stabilized to 0 by time t - d_n:

    χ^t_{n,0} = χ^{t-d_n}_{n1,0} + χ^{t-d_n}_{n2,0}

Here the two χ functions are OR'ed to take the union of the two conditions. It is easy to see that the above computations can be generalized to the case where the local functionality of n is given as an arbitrary function in terms of its fanins as follows.

2 It is possible to differentiate rise delays from fall delays. In this paper, however, we do not distinguish between them to simplify exposition.
    χ^t_{n,1} = Σ_{p ∈ P_n} Π_{m ∈ p} χ^{t-d_n}_{m, phase(m,p)}

    χ^t_{n,0} = Σ_{p ∈ P̄_n} Π_{m ∈ p} χ^{t-d_n}_{m, phase(m,p)}

where P_n and P̄_n are the sets of all primes of f_n and f̄_n respectively, and phase(m, p) is 1 if fanin m appears positively in prime p and 0 if it appears negatively. One can easily verify that the recursive formulations for the AND gate shown above are captured in this general formulation by noticing that P_n = {n1·n2} and P̄_n = {n̄1, n̄2}. The terminal cases are given when the node n is a primary input x:

    χ^t_{x,1} = x and χ^t_{x,0} = x̄ if t ≥ arr(x), and χ^t_{x,1} = χ^t_{x,0} = 0 otherwise,

where arr(x) denotes the arrival time of x. The above formulas
simply say that a primary input is stable only after its given arrival
time. The key observation of this formulation is that characteristic
functions can be computed recursively.
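To make the recursion concrete, here is a toy sketch. It is not the paper's machinery (which builds networks and SAT formulas): Boolean functions are represented explicitly as lists of input minterms, only primary inputs and AND gates are handled, and the network, delay and arrival times are invented.

```python
from itertools import product

# Toy network over primary inputs 'a', 'b': node 'g' = AND(a, b) with delay 2.
INPUTS  = ('a', 'b')
ARRIVAL = {'a': 0, 'b': 1}
GATES   = {'g': ('AND', ('a', 'b'), 2)}   # name -> (kind, fanins, max delay)
MINTERMS = [dict(zip(INPUTS, bits)) for bits in product((0, 1), repeat=len(INPUTS))]

def chi(node, v, t):
    """Set of input minterms that stabilize `node` to constant v by time t."""
    if node in INPUTS:                     # terminal case: stable after arrival time
        if t < ARRIVAL[node]:
            return []
        return [m for m in MINTERMS if m[node] == v]
    kind, fanins, d = GATES[node]
    if kind == 'AND':
        sub = [chi(f, v, t - d) for f in fanins]
        if v == 1:                         # all fanins must be stably 1 by t - d
            return [m for m in MINTERMS if all(m in s for s in sub)]
        return [m for m in MINTERMS if any(m in s for s in sub)]  # v == 0: any fanin 0
    raise NotImplementedError(kind)

# By t = 3 the single minterm a = b = 1 stabilizes g to 1; by t = 2 none do,
# because b only arrives at time 1 and the gate itself takes up to 2.
print(len(chi('g', 1, 3)), len(chi('g', 1, 2)))
```

The exact analysis performs the same recursion symbolically, so the χ sets never have to be enumerated minterm by minterm as they are here.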
Once characteristic functions for constants 0 and 1 are computed
at a primary output, two comparisons are made: one for the
characteristic function for 1 against the on-set of the output, and the
other for the characteristic function for 0 against the off-set of the
output. Each comparison is done by creating a Boolean network which computes the difference between the two functions and using a SAT solver to check whether the output of the network is satisfiable. This Boolean network is called a χ-network.
2.2 Optimal Construction of χ-Networks
To motivate the approximation algorithms presented in this paper, further details on the construction of χ-networks need to be understood. We have mentioned that a χ-network is constructed recursively from a primary output. In [8] further optimization to reduce the size of χ-networks is discussed.
Given a required time at a primary output, assume that a backward
required-time propagation of N is done to primary inputs so
that the list of all required times at each internal node is computed.
The propagation is done so that all the potential required times are
computed at each node instead of the earliest required time. If the χ-network is constructed naively, then for each internal node in N a distinct node is to be created for each required time in the list. This, however, is not necessary, since it is possible that different required times exhibit the same stability behavior, in which case having a single node in the χ-network for those required times is enough. To
detect such a case a forward arrival-time propagation from primary
inputs to primary outputs is performed to compute the list of all
potential arrival times at each node. Note that each potential arrival
time corresponds to the topological delay of a path from a primary
input to the internal node. Therefore the stability of the node can
only change at those times. In other words between two adjacent
potential arrival times, one cannot see any change in the stability.
Consider an internal node n ∈ N. Let R = (r_1, ..., r_k) and A = (a_1, ..., a_l) be the sorted list of required times and that of arrival times respectively at node n. Consider the χ function χ^{r_i}_{n,v}, and let a_j ∈ A be the maximum arrival time such that a_j ≤ r_i. Since there is no event happening between time a_j and r_i, χ^{r_i}_{n,v} = χ^{a_j}_{n,v}. Matchings from required times to arrival times are performed in this fashion to identify the subset of A that is required to compute the final χ functions. This optimization avoids creating redundant nodes in the χ-network, thereby reducing the size of the χ-network without losing any accuracy in analysis. Only those arrival times which have a match with required times yield nodes in the χ-network.
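The matching step can be sketched as follows; the helper name and the example times are ours, not the paper's.

```python
import bisect

def match_required_to_arrival(required, arrival):
    """Map each required time to the latest potential arrival time <= it.

    Between two adjacent potential arrival times the node's stability
    cannot change, so chi at the required time equals chi at the matched
    arrival time.  Returns the mapping plus the subset of arrival times
    actually needed (only those get nodes in the chi-network).  Required
    times with no arrival time at or before them map to -inf, where chi
    is the constant-zero function.
    """
    arrival = sorted(arrival)
    mapping = {}
    for r in sorted(required):
        i = bisect.bisect_right(arrival, r)   # arrivals strictly greater than r start at i
        mapping[r] = arrival[i - 1] if i else float('-inf')
    return mapping, sorted(set(mapping.values()) - {float('-inf')})

# illustrative times only
m, needed = match_required_to_arrival([2.5, 4.0, 4.7], [1.0, 3.0, 4.0, 6.0])
print(m, needed)
```

In this toy example three required times collapse onto two arrival times, so only two χ nodes would be built instead of three.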
Another type of optimization suggested in [8] is to generate the list of arrival times more carefully. For each potential arrival time, equivalence between the corresponding χ function and the on-set or the off-set (whichever is suitable) is checked by a satisfiability call, and a new node is created in the χ-network only if the two functions are different. Otherwise, the original function or its complement is used as it is. Although this requires additional CPU time spent on satisfiability calls, it is experimentally confirmed that the size reduction of the final χ-network is so significant that the total run-time decreases in most cases.
3 Approximation Algorithms
3.1 Limitation of the Exact Algorithm
Although the exact algorithm proposed in [8] can handle many
circuits of thousands of gates, it still has a size limitation. If a large
network is given and timing analysis is requested under a detailed
delay model like the technology mapped delay model, it is likely
that the algorithm runs practically forever 3 . Even if timing analysis
is tractable, the computation time can be too large to be practical.
As seen in the previous section, the exact timing analysis consists
of repeated SAT solver calls. More precisely, for each time tested at a primary output, a χ-network is constructed such that the network computes the difference between the on-set (off-set) of the primary output and the set of input vectors which make the primary output stable to value 1 (0) by the given time. If the output of this network never becomes 1 for any input assignment, i.e., it is not satisfiable, we know that the primary output becomes completely stable by the time tested. To test whether this condition holds, a SAT formula which is satisfiable only if the output is satisfiable is created directly from the χ-network, and a SAT solver is called on it. The size of the SAT formula is roughly proportional to the size of the χ-network. The main difficulty in the analysis of large networks is that, due to the potentially large size of the χ-networks, the size of the SAT formulas generated can be too large for a SAT solver to solve even after the optimization discussed in the previous section has been applied 4 . In the following we discuss how to control the size of χ-networks without losing much accuracy.
3.2 Reducing the Size of χ-Networks for Effective Approximation
The main reason why χ-networks become large in the exact approach is that χ functions at many distinct arrival times must be computed for internal nodes. This size increase occurs when there are many distinct path delays to internal nodes due to the reconvergence of the circuit. Therefore our goal is to control the number of distinct arrival times considered at each internal node. More specifically, we only create a small number of χ functions at each internal node. This strategy avoids the creation of huge χ-networks, thereby controlling the size of the SAT formulas generated. Although this idea certainly helps reduce the size of χ-networks,
it must be done carefully so that the correctness of the analysis is
3 The algorithm is CPU intensive rather than memory intensive since the core part
of the algorithm is SAT.
4 Theoretically it is not necessarily true that a smaller SAT formula is easier to solve.
However we have observed that the size of SAT formulas is well correlated with the
time the solver takes.
guaranteed. We must never underapproximate true delays since
otherwise the timing analysis could miss timing violations when
used in the context of timing verification. Overapproximation is
acceptable as long as reasonable accuracy is maintained. We guarantee
this property by selectively underapproximating stability of
signals. This underapproximation in turn overapproximates instability
of signals thereby guaranteeing that estimated delays are never
underapproximated.
The key idea on approximation is to modify the mapping from
required times to arrival times discussed in Section 2.2 so that only
a small set of arrival times forms the image of the mapping. Given
the sorted set of required times R = {r_1, ..., r_k} and the sorted set of arrival times A = {a_1, ..., a_l} at an internal node n, the mapping f : R → A ∪ {-∞} used in the exact analysis is defined as

    f(r) = max{a ∈ A | a ≤ r} if r ≥ a_1, and f(r) = -∞ if r < a_1.
Since the stability of the signal at the node increases monotonically
as time elapses by the definition of - functions, it is safe to change
the mapping so that it maps a required time to a time earlier than the
time defined in the above. This corresponds to underapproximation
of the signal stability. Thus, by modifying the mapping under this
constraint so that only a small set of arrival times is required, one
can control the number of nodes to be introduced in the - network
without violating the correctness of the analysis. Depending on
how the original mapping in the exact analysis is changed several
conservative approximation schemes can be devised. Two such
approximation schemes are described next.
3.2.1 Topological Approximation
The most aggressive approximation, which we call topological approximation, is to map required times either to the topological arrival time (a_q 5 ) or to -∞. More formally, the mapping f_T is defined as follows:

    f_T(r) = a_q if r ≥ a_q, and f_T(r) = -∞ otherwise.

It is easy to see that f_T is a conservative approximation of f. Since by time a_q the signal is completely stable, χ^{a_q}_{n,1} is simply the node function f_n and χ^{a_q}_{n,0} its complement, so there is no need to create a new node for the χ function in the χ-network 6 . Instead the node function or its complement of the original network can be used for the χ function. For the other arrival time -∞, χ^{-∞}_{n,v} = 0 for v ∈ {0, 1}. Therefore it is sufficient to have a constant zero node in the χ-network and use it for all the cases where the zero function is needed. Since neither of the arrival times needs any additional node in the χ-network, this approximation never increases the size of the χ-network. If this reduction is applied at all nodes, the
analysis simply becomes pure topological analysis. Therefore, this
approximation makes sense only if it is selectively invoked on some
subset of nodes. A selection strategy is described later.
3.2.2 Semi-Topological Approximation
The second approximation scheme, called semi-topological approximation, is slightly milder than the first in terms of its power to simplify χ-networks. Here, required times are again mapped to two arrival times, but the times chosen are different. The times to be picked are 1) the arrival time, say a_e, matched with r_1 in the exact mapping f, and 2) the topological arrival time a_q, which is the same as in the first approximation. The first approximation and this one differ only if a_e ≠ -∞, in which case the second one
5 To be precise, a_q can be earlier than the topological arrival time if an intermediate satisfiability call has already verified that by time a_q the signal is stabilized completely.
6 Notice that the χ-network always includes the original circuit.
gives a more accurate approximation. To be precise, the definition of the new mapping function f_S is as follows:

    f_S(r) = a_e if r < a_q, and f_S(r) = a_q otherwise.

If a_e ≠ -∞, the χ function for time a_e is now computed explicitly, and the corresponding node is added to the χ-network. Similar extensions which give tighter approximations are possible by allowing more arrival times to remain after the mapping. A set of various approximations gives a tradeoff between compactness of χ-networks and accuracy of analysis.
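A sketch of the two conservative re-mappings, with f_T and f_S folded into one hypothetical helper; the required/arrival times below are invented. Note that a_e ≤ r_1 ≤ r for every required time r, so f_S never maps a required time to a later time than its exact match, which is what keeps it conservative.

```python
def approximate_mapping(required, arrival, a_q, scheme):
    """Conservative re-mappings of required times to arrival times.

    Scheme 'T' (topological) maps every required time to a_q or to -inf;
    scheme 'S' (semi-topological) additionally keeps a_e, the arrival
    time matched with the earliest required time r_1 under the exact
    mapping.  Both schemes only ever move a required time to an earlier
    matched time than the exact mapping would, which under-approximates
    stability and therefore never under-approximates the circuit delay.
    """
    # a_e: exact match of r_1 (latest arrival time at or before min(required))
    a_e = max([a for a in arrival if a <= min(required)], default=float('-inf'))
    out = {}
    for r in required:
        if r >= a_q:
            out[r] = a_q                                # fully stable: node function reused
        else:
            out[r] = a_e if scheme == 'S' else float('-inf')
    return out

# illustrative required/arrival times, not taken from the paper
req, arr, a_q = [2.5, 4.0, 6.5], [1.0, 3.0, 4.0, 6.0], 6.0
print(approximate_mapping(req, arr, a_q, 'T'))   # f_T: everything below a_q collapses to -inf
print(approximate_mapping(req, arr, a_q, 'S'))   # f_S: those times collapse to a_e instead
```

Under 'T' no new χ nodes are needed at all; under 'S' exactly one extra node (for a_e) is built per internal node, which is the tradeoff the text describes.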
3.3 Control/Data Dichotomy in Approximation Strategies
In [11] Yalcin et al. proposed to use designer's knowledge on
control-data separation of primary inputs for effective approximate
timing analysis. They applied this idea to speed up their timing analysis
technique using conditional delays [10] by simplifying signal
propagation conditions of data variables. We adapt their idea of using this knowledge to the XBD0 analysis to develop a strategy for selecting among the various approximation schemes.
3.3.1 Labeling Data/Control Types
Given data/control types of all primary inputs, each internal node
is labeled data or control based on the following procedure. All
the nodes in the network are visited from primary inputs to primary
outputs in a topological order. At each node the types of its fanins
are examined. If all of them are data, the node is labeled data;
otherwise it is labeled control. Hence nodes labeled data are pure data variables with no dependency on control variables, while those labeled control are all the other variables with some dependency
on control variables. This labeling policy is different from the one
used in [11], where a node is labeled data if at least one of its
fanins is labeled data. In their labeling, nodes labeled data are
variables with some dependency on data whereas nodes labeled
control are pure control variables. The difference between the two
labelings is whether pure data variables or pure control variables
are distinguished. Our labeling will lead to tighter approximations.
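The labeling procedure can be sketched in a few lines; the netlist and input-type assignments below are hypothetical.

```python
def label_nodes(gates, input_type):
    """Label every node 'data' or 'control' in topological order.

    A node is labeled data iff all of its fanins are data; a single
    control fanin makes it control.  `gates` maps node -> fanin list and
    must be listed in topological order from inputs to outputs (Python
    dicts preserve insertion order); `input_type` gives the
    designer-supplied types of the primary inputs.
    """
    label = dict(input_type)
    for node, fanins in gates.items():
        label[node] = 'data' if all(label[f] == 'data' for f in fanins) else 'control'
    return label

# hypothetical netlist: d1, d2 are data inputs, c1 is a control input
gates = {'g1': ['d1', 'd2'],   # pure data: labeled data
         'g2': ['g1', 'c1'],   # depends on a control input: labeled control
         'g3': ['g2', 'd1']}   # inherits control through g2
print(label_nodes(gates, {'d1': 'data', 'd2': 'data', 'c1': 'control'}))
```

A single pass suffices because each node is visited only after all of its fanins, exactly as in the topological-order visit described above.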
3.3.2 Applying Different Approximations based on
Types
Once all the nodes are labeled, different approximation schemes are
applied at nodes based on their types. The strategy is as follows.
If a node is a control variable, the semi-topological approximation
f S is applied while if a node is a data variable, the topological
approximation f T is applied. The intuition is to use a tighter
approximation for control variables to preserve accuracy while performing
maximum simplification for data variables assuming they
have less impact on delays than control variables.
3.3.3 Extracting Control Circuitry for Further Approximation
If the approximation so far is not powerful enough to make analysis
tractable, further approximation is possible by extracting only the
control-intensive portion of the circuit and performing timing analysis
on the subcircuit. The extraction of the control portion is done
by stripping off all pure data nodes from the original network under
analysis. Note that any circuit can be decomposed into a cascade
circuit where the nodes in the driving circuit are labeled as data and
those in the driven circuit as control, by the definition of data variables.
Therefore, the primary inputs of the subcircuit are the boundary
variables which separate the subcircuit from the pure data portion.
We assume conservatively that delays of the pure data portion of
the circuit are the same as topological delays, which gives arrival
times at the primary inputs of the extracted circuit. Analysis is then
performed on this subcircuit as if it were the circuit given. Notice
that this has a similar flavor to the approximation proposed in [4].
The difference between this approximation and the previous
method is that the subcircuit has a new set of primary inputs, which
are assumed independent. However, it is possible that in the original
circuit only a certain subset of signal combinations appears at the
boundary variables. Since this approximation assumes that all signal
combinations can show up, the analysis becomes pessimistic 7 .
For example, if a signal combination which does not appear on the
cut makes a long path sensitizable, it can make delay estimation unnecessarily
pessimistic. Although this method is more conservative
than the one without subcircuit extraction, it reduces the size of a
circuit to be analyzed much more significantly than the other one.
4 Experimental Results
We implemented the new approximation scheme on top of the implementation of [8] in the SIS environment. To evaluate the effectiveness of the approximation, we focused on timing analysis of mapped ISCAS combinational circuits, which is generally much more time-consuming than analysis based on simpler delay models. In Table 1 8 the results on three circuits whose exact analysis takes more than 20 seconds on a DEC AlphaServer 7000/610 are shown 9 .
Each circuit is technology-mapped first with the option specified in
the second column using the lib2.genlib library. The delay
of the circuit is then analyzed using three techniques. The first
one (exact) is the exact method presented in [8]. The remaining
two are approximate methods; the second, called approx(1), is the
technique in Section 3.3.2 and the third, called approx(2), is the one
in Section 3.3.3, which involves subcircuit extraction. Control/data specifications for the primary inputs of these circuits are the same
as those in [11] 10 . For each of the three analyses, estimated delay
and CPU time are shown in the last two columns. One can observe
that accuracy is preserved in the three examples in both of the
approximation methods while CPU time is reduced significantly.
Table 2 summarizes a similar experiment for C6288, an integer
multiplier, which is known to be difficult for exact timing analysis
due to a huge amount of reconvergence. Since all the primary inputs
are data variables, the approximate techniques proposed degenerate into topological analysis. To avoid this inaccuracy, all the
primary inputs were set to control. Note that this sets all intermediate
nodes to control. We then applied the first approximate method
under this labeling. Although the approximation is not as powerful as in the original algorithm, this at least enables us to reduce the size of χ-networks without giving up accuracy completely. Since there
is no data variable in the network, only approx(1) was tried. Significant
time saving was achieved with only a slight overapproximation
in terms of analysis quality. The exact analysis is not only more
CPU-time intensive but also much more memory-intensive than the
approximate analysis. In fact we could not complete any of the three
exact analyses within 150MB of memory. They ran out of memory
in a couple of minutes. These exact analyses were possible after
7 If the set of all possible signal combinations at the boundary variables can be represented compactly, one can safely avoid this pessimism by multiplying the additional constraint into the SAT formula generated.
8 Timing analysis was done in the linear search mode [8] where the decrement time
step is 0.1 and the error tolerance is 0.01.
9 If exact analysis is already efficient, approximation cannot make significant improvement
in CPU time; in fact the overall performance can be degraded due to
additional tasks involved in approximation.
10 More precisely, C1908(1) and C3540(1) in [11] were used.
circuit tech.map #gates topological delay type of approx. estimated delay CPU time
exact 34.77 29.1
exact 35.76 41.2
exact 35.66 727.0

Table 1: Exact analysis vs. approximate analysis (CPU time in seconds on DEC AlphaServer 7000/610)
circuit tech.map #gates topological delay type of approx. estimated delay CPU time
exact 123.87 7850.2
exact 119.16 18956.2
exact 112.92 15610.5

Table 2: Exact analysis vs. approximate analysis on C6288 (CPU time in seconds on DEC AlphaServer 7000/610)
the memory limit was expanded to 1GB. The last example needs an
additional explanation. In this example the estimated delay by the
approximate algorithm is smaller than that by the exact algorithm
although in Section 3 we claimed that the approximation algorithm
never underapproximates exact delay. The reason for this is that
the SAT solver is not perfect. Given a very hard SAT problem,
the solver may not be able to determine the result under a given
resource, in which case the solver simply returns Unknown. This is
conservatively interpreted as being satisfiable in the timing analysis.
In this particular example the SAT solver returned Unknown during
the exact timing analysis, which resulted in an overapproximation
of the estimated delay, while in the approximate analysis the SAT
solver never aborted because of the simplification of - networks
and gave a better overapproximation. This example shows that the
approximate analysis gives not only computational efficiency but
also better accuracy in some cases.
To compare the exact and the approximate methods further, we
examined the total CPU time of the exact analysis to see how it can
be broken down. For the first example of C6288 the exact analysis
took 714.7 seconds to conclude that any path of length 123.93 is false, which is about four times as long as the approximate analysis took to conclude that the delay of the circuit is 123.94. The situation is
much worse in the second example, where the exact analysis took
seconds to conclude that any path of length 119.21 is false
while the approximate method took only about 1.4% of this time to
finish off the entire analysis.
5 Conclusions
We have proposed new approximation algorithms as an extension to
the XBD0 timing analysis [8]. The core idea of the algorithms is to
control the size of sensitization networks to prevent the size of SAT
formulas to be solved from getting large. The use of knowledge
on data/control separation of primary inputs originally proposed
in [11] was adapted to choose an appropriate approximation at each
node. We showed experimentally that the technique helps simplify
the analysis while maintaining accuracy well within the accuracy
of the delay model.
Acknowledgments
Hakan Yalcin kindly offered detailed data on ISCAS benchmark
circuits.
--R
Path sensitization in critical path problem.
Computation of floating mode delay in combinational circuits: Theory and algorithms.
Computation of floating mode delay in combinational circuits: Practice and implementation.
A new approach to solving false path problem in timing analysis.
A polynomial-time heuristic approach to approximate a solution to the false path problem
Timed boolean calculus and its applications in timing analysis.
Integrating Functional and Temporal Domains in Logic Design.
Delay models and exact timing
Private communication
Hierarchical timing analysis using conditional delays.
An approximate timing analysis method for datapath circuits.
--CTR
David Blaauw , Rajendran Panda , Abhijit Das, Removing user specified false paths from timing graphs, Proceedings of the 37th conference on Design automation, p.270-273, June 05-09, 2000, Los Angeles, California, United States
Hakan Yalcin , Mohammad Mortazavi , Robert Palermo , Cyrus Bamji , Karem Sakallah, Functional timing analysis for IP characterization, Proceedings of the 36th ACM/IEEE conference on Design automation, p.731-736, June 21-25, 1999, New Orleans, Louisiana, United States
Mark C. Hansen , Hakan Yalcin , John P. Hayes, Unveiling the ISCAS-85 Benchmarks: A Case Study in Reverse Engineering, IEEE Design & Test, v.16 n.3, p.72-80, July 1999
David Blaauw , Vladimir Zolotov , Savithri Sundareswaran , Chanhee Oh , Rajendran Panda, Slope propagation in static timing analysis, Proceedings of the 2000 IEEE/ACM international conference on Computer-aided design, November 05-09, 2000, San Jose, California | false path;delay computation;timing analysis |
266472 | Sequential optimisation without state space exploration. | We propose an algorithm for area optimization of sequential circuits through redundancy removal. The algorithm finds compatible redundancies by implying values over nets in the circuit. The potentially exponential cost of state space traversal is avoided and the redundancies found can all be removed at once. The optimized circuit is a safe delayed replacement of the original circuit. The algorithm computes a set of compatible sequential redundancies and simplifies the circuit by propagating them through the circuit. We demonstrate the efficacy of the algorithm even for large circuits through experimental results on benchmark circuits. | Introduction
Sequential optimisation seeks to replace a given sequential circuit
with another one optimised with respect to some criterion
area, performance or power, in a way such that the environment
of the circuit cannot detect the replacement. In this work,
we deal with the problem of optimising sequential circuits for
area. We present an algorithm which computes sequential redundancies
in the circuit by propagating implications over its
nets. The redundancies we compute are compatible in the
sense that they form a set that can be removed simultaneously.
Our algorithm works for large circuits and scales better than
those algorithms that depend on state space exploration.
The starting point of our work is [1], in which a method was
described to identify sequential redundancies without exploring
the state space. The basic algorithm is that for any net, two
cases are considered: the net value is 0 and the net value is 1.
For each case, constants as well as unobservability conditions
are learnt on other nets. If some other net is either set to the
same constant for both cases, or to a constant in one case and
is unobservable in the other, it is identified as redundant. For
example, consider the trivial circuit shown in Figure 1. For
the value the net n2 is unobservable and for the value
the net n2 is 1. Thus net n2 is stuck-at-1 redundant.
However, the redundancies found by the method in [1] are not
compatible in the sense that they remain redundant even in the
University of California at Berkeley, Berkeley, CA 94720
Cadence Berkeley Labs, Berkeley, CA 94704
# University of Texas at Austin, Austin, TX

Figure 1: Example of incompatible redundancies
presence of each other. For instance, the redundancy identification
algorithm will declare both the inputs n 1 and n 2 as
stuck-at-1 redundant. However, for logic optimisation, it is incorrect
to replace both the nets by a constant 1.
The straightforward application of Iyer's method to redundancy removal is to identify one redundancy by their implication procedure, remove the redundancy and iterate until convergence. Our goal is to learn all compatible implications in the circuit in one step and to use the compatibility of these implications to remove all the redundancies simultaneously (in this sense our method for finding compatible unobservabilities is related to the work in [2, 3] on computing compatible ODCs (observability don't cares)). This is our first contribution. Secondly, we generalise the implication procedure by combining
it with recursive learning [4] to enhance the capability of the
redundancy identification procedure. Recursive learning lets
us perform case split on unjustified gates so that it is possible
to learn more implications at the expense of computation time.
Consider the circuit in Figure 2. Setting net a to 0 implies that net f is 0. If we set a to 1, a1 becomes 1, but the AND gate connected to it remains unjustified. If we perform recursive learning on the two possible justifications of this gate, then in the former case net f becomes 0, and in the latter case f becomes unobservable because e is 1. Thus, for all the possible cases, either f is 0 or it is unobservable. Hence f is declared stuck-at-0 redundant. Recursive learning helps identify these kinds of new redundancies. We present data which shows that
we are able to gain significant optimisations on large benchmark
circuits using these two new improvements. In fact, for
some circuits, we find that recursive learning not only gives us
more optimisation, it is even faster since a previous recursive
learning step makes the circuit simpler for a later stage.
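A depth-1 sketch of this case split; the implication engine and net names below are stand-ins (the justification nets from Figure 2 are illustrative), with 'U' marking an unobservable label.

```python
def redundant_after_split(net, justifications, imply):
    """Depth-1 recursive learning on one unjustified gate.

    `justifications` are the alternative fanin assignments that justify
    the gate; `imply` is a hypothetical implication engine returning a
    dict of learned labels (0, 1, or 'U' for unobservable).  If `net` is
    labeled with the same constant v or with 'U' under *every*
    justification, it is stuck-at-v redundant; otherwise nothing is
    learned about it.
    """
    labels = [imply(j).get(net) for j in justifications]
    consts = {l for l in labels if l in (0, 1)}
    if len(consts) != 1:
        return None                      # no single candidate constant
    v = consts.pop()
    return v if all(l in (v, 'U') for l in labels) else None

# A scenario shaped like the Figure 2 discussion (names illustrative):
# one justification forces f to 0, the other makes f unobservable,
# so f is stuck-at-0 redundant.
toy = {('b', 1): {'f': 0}, ('c', 1): {'f': 'U'}}
engine = lambda a: toy[next(iter(a.items()))]
print(redundant_after_split('f', [{'b': 1}, {'c': 1}], engine))
```

If the justifications disagree on the constant, or leave the net unlabeled in some case, the function returns None and no redundancy is claimed, which keeps the case split conservative.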
We do not assume designated initial states for circuits. For
sequential optimisation, we use the notion of c-delay replacement
[1, 5]. This notion guarantees that every possible input-output
behaviour that can be observed in the new circuit after
it has been clocked for c cycles after power-up, must have been
present in the old circuit. In contrast to the work in [5, 6], the
synthesis method presented here does not require state space
Figure 2: Example of recursive learning
Figure 3: A circuit and its graph
traversal, and can therefore be applied to large sequential circuits. Recursive learning has been used earlier for optimisation, as described in [7], but their method is applied only to combinational circuits and they do not use unobservability
conditions. Another procedure to do redundancy removal is
described in [8], but as [9] shows, their notion of replacement
is not compositional and may also identify redundancies which
destroy the initialisability of the circuit. We have therefore
chosen to use the notion of safe delayed replacement which
preserves responses to all initializing sequences. We are interested
in compositionality because we would like a notion
of replacement that is valid without making any assumptions
about the environment of the circuit. This is why our replacement
notion is safer than that used in [10] which identifies sequential
redundancies by preserving weak synchronizing sequences. Their work implicitly assumes that the environment
of the circuit has total control so that it can supply the arbitrary
sequence that the redundancy identification tool has in mind.
Our approach does not pose any such restrictions.
The rest of the paper is organised as follows. In Section 2,
we present our algorithm to compute compatible redundancies
on combinational and sequential circuits. In Section 3, we
present experimental results on some large circuits from the
ISCAS benchmark set. In Section 4, we conclude with some
directions for future work.
2 Redundancy Removal
We present an algorithm for sequential circuits that have been
mapped using edge-triggered latches, inverters and 2-input
gates; note that any combinational implementation can be
mapped to a circuit containing only inverters and 2-input gates.
We use the notion of circuit graph for explaining our algorithm.
A circuit graph is a labelled directed graph whose vertices correspond
to primary inputs, primary outputs, logic gates and
latches, and edges correspond to wires between the elements
of the circuit. The label of a vertex identifies the type of element it represents (e.g. two-input gates, inverters or latches).

Figure 4: Rules for implying constants
We refer to an edge in the circuit graph as a net. Figure 3 shows
an example of a circuit graph.
2.1 Combinational redundancies
We explain our algorithm and prove its correctness for combinational
circuits and later extend it to sequential circuits. Consider
a circuit graph G = (V, E) of a circuit, where V is the set of vertices and E is the set of nets. An assumption A on a subset P ⊆ E is a labelling of the nets in P by values from the set {0, 1}. Let n ∈ P be a net. We write A(n) = v if A labels the net n with the value v. An assumption is denoted by an ordered tuple. The set of all possible assumptions on the set P of nets is denoted by A_P. Consider the set P = {m, n}. The assumption labeling m with 0 and n with 1 is denoted by ⟨m ↦ 0, n ↦ 1⟩, and A_P = {⟨m ↦ 0, n ↦ 0⟩, ⟨m ↦ 0, n ↦ 1⟩, ⟨m ↦ 1, n ↦ 0⟩, ⟨m ↦ 1, n ↦ 1⟩}. An assumption A ∈ A_P is inconsistent if it is not satisfiable for any assignment to the primary inputs of the circuit. For instance, an assumption of 0 at the input and 1 at the output of an AND gate is inconsistent.

In the algorithm, values are implied at nets in E \ P from an assumption on P. We imply either constants or unobservability indicators at nets. We indicate unobservability at a net by implying a symbolic value Ω at it. Let R = {0, 1, Ω} be the set of all possible values that can be implied at any net. An implication is a label (n, r), where n is a net and r ∈ R. Figure 4
illustrates the rules for implying constants. Rules C1, C2, C3
and C5 are self-explanatory. Rule C4 states that for an AND
gate, 0 at the output and 1 at an input implies 0 at the other
input. Rule C6 states that a constant at some fanout net of a
gate implies the same constant at all other fanout nets. Figure 5 illustrates the rules for implying Ω's. Rule O1 states that a 0 at an input of an AND gate implies an Ω at the other input. Rule O2 states that an Ω at every fanout net of a gate implies an Ω at every fanin net of that gate. Note that constants can be implied in both directions across a gate while Ω propagates only backwards. We have shown rules only for inverters and AND gates, but similar rules can be easily formulated for other gates as well. We use these rules to label the edges of the circuit graph. A constant (0 or 1) label on a net indicates that
Figure 5: Rules for implying unobservability
Figure 6: Overwriting constants with unobservability indicator
the net assumes the respective constant value under the current
assumption. A Ω label indicates that the net is not observable
at any primary output. Hence, it can be freely assigned to either
0 or 1 under the current assumption. Suppose for every
assumption in A_P, some net n is labelled either with constant v
or with Ω; then we can safely replace n with constant v. This
is because we have shown that under every possible assumption,
either the net takes the value v or its value does not affect
the output. We can therefore conclude that net n is stuck-at-v
redundant.
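The replacement criterion just stated can be sketched as a small Python check (the names are ours; OMEGA stands for the Ω label):

```python
OMEGA = "omega"  # stands for the unobservability label

def stuck_at_redundant(labels_per_assumption, net, v):
    """Net is stuck-at-v redundant if, under every assumption in A_P,
    it is labelled either with the constant v or with the Omega mark."""
    return all(labels.get(net) in (v, OMEGA) for labels in labels_per_assumption)

# Toy data: the labels learnt under each of two assumptions.
runs = [{"n": 0}, {"n": OMEGA}]
```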
We are concerned about the compatibility of all labellings
because otherwise we run the danger of marking nets with labels
that are not all simultaneously consistent. For example, consider
the circuit in Figure 1. For the purpose of identifying
redundancies, [1] would infer certain implications
from the assumption ⟨n ↦ 1⟩, and further implications from
the assumption ⟨n ↦ 0⟩ (notice that we use
the symbol Ω to denote compatible unobservability, as opposed to
the indicator of [1], which simply denotes unobservability). So, [1] would rightly
claim that each of the two nets is stuck-at-1 redundant in isolation;
however, for redundancy removal it is easy to see that we
cannot remove both simultaneously. This is why we want to make all labellings compatible.
A sufficient condition for the redundancies to be compatible
is to ensure that the procedure for computing implications from
an assumption returns compatible implications, i.e., every implication
is valid in the presence of all other implications. It is
easy to see that if the labelling of edges in the circuit graph is
done by invoking the rules described above and no label is ever
overwritten, then the set of learnt implications will be compatible.
For instance, in the circuit of Figure 1, once n1 is labelled,
a Ω cannot be inferred at the other net, because the label of n1
would have to be overwritten with 0. But this approach is conservative
and will miss some redundancies. In Figure 6, we show
an example where overwriting a constant with a Ω yields a
redundancy which could not have been found otherwise. We
propagate implications from assumptions on the net a. The
/* find and remove redundancies from the circuit graph */
while (there is an unvisited net n in the circuit graph) f
S := learn implications (G , hn 7! 1i)
S := learn implications (G , hn 7! 0i)
R := T -T
for every implication set net n to constant v
propagate constants and simplify
learn implications
propagate implications on the circuit graph given an assignment */
f
forall n such that A : n 7! v f
label n / v
while (some rule can be invoked) f
b) be the new implication
if (b
label n / b
conflicts with a current label)
return
else
label n / b
return set of all current labels
Figure
7: Combinational redundancy removal algorithm
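A runnable sketch (our own naming, and a hedged reading of the computation of R in Figure 7) of how the two label sets learnt from ⟨n ↦ 0⟩ and ⟨n ↦ 1⟩ combine into the reported redundancies:

```python
OMEGA = "omega"

def redundant_constants(labels0, labels1):
    """Given labels learnt from <n -> 0> and <n -> 1>, return every net m and
    constant v such that m is labelled v or Omega under both assumptions,
    i.e. m is stuck-at-v redundant under every assumption on n."""
    out = {}
    for m in set(labels0) & set(labels1):
        a, b = labels0[m], labels1[m]
        if a == OMEGA and b == OMEGA:
            continue  # unobservable either way; either constant would do, skip here
        v = a if a != OMEGA else b
        if a in (v, OMEGA) and b in (v, OMEGA):
            out[m] = v
    return out

# x is 0 under both assumptions; y is Omega under one and 1 under the other;
# z takes conflicting constants, so it is not redundant.
R = redundant_constants({"x": 0, "y": OMEGA, "z": 1}, {"x": 0, "y": 1, "z": 0})
```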
implications from ⟨a ↦ 0⟩ are written below and those from
⟨a ↦ 1⟩ are written above the wires. Note that while propagating
implications from ⟨a ↦ 1⟩, a2 and d are initially labelled
with 1, but after labelling c with 0, the labels at d and
a2 are successively overwritten with Ω's. Hence, a2 is found
to be stuck-at-0 redundant. As a result, the OR gate can be removed.
We prove later in this section that this overwriting does
not make previously learnt implications invalid, i.e., compatibility
of implications is maintained, if the only overwriting that
is allowed is that of constants with unobservability indicators.
Our algorithm for removing combinational redundancies is
given in Figure 7. The function learn implications takes as input
an assumption A on an arbitrary subset of nets and labels
nets with values from {0, 1, Ω} learnt through implications.
Initially, all nets n such that A : n ↦ v is an assumption are labelled.
Then we derive new labels by invoking the rules C1-C6
and O1-O2 and similar rules for other kinds of two-input gates.
Note that at all times each net has a unique label, and constants
can be overwritten with Ω's but not vice versa. It returns the
set of all final labels. The function redundancy remove takes
as input a circuit graph G and calls learn implications successively
with assumptions ⟨n_i ↦ 0⟩ and ⟨n_i ↦ 1⟩ on the singleton
subset {n_i}. The two sets of labels are used to compute
all pairs n and v such that n is stuck-at-v redundant. We
later show that our labelling procedure for learning implications
guarantees that all such redundancies can be removed
Figure 8: An implication graph
simultaneously. These redundancies are used to simplify the
network. The process is repeated until all nets have been considered.
Note that the function redundancy remove considers
assumptions on only a single net, but in general any number of
nets could be used to generate assumptions. We later show results
for the case when we considered assumptions on two nets,
the second one corresponding to the unjustified node closest to
the first. This is an instance of recursive learning.
We now formalise the notion of a valid label as one for
which an implication graph exists. We will use the notion
of implication graph for proving the compatibility of the set
of labels generated by the algorithm. Let A be an assumption
on a set P of nets. An implication graph for the label (n = b)
from assumption A is a directed acyclic graph G_I = (V_I, E_I, L_I),
where L_I is a set of labels of the form (m = a), for some net m
and some a ∈ {0, 1, Ω}, labelling every vertex v ∈ V_I, such that
- Every root 1 vertex is labelled with (m = a) for some assumption
m ↦ a in A.
- There is exactly one leaf 2 vertex v ∈ V_I, which is labelled (n = b).
- For any vertex v ∈ V_I, if v is not a root node, the implication
labelling it can be obtained from the implications
labelling its parents by invoking an inference rule.
An example of an implication graph for a label
from the assumption ⟨n8 ↦ 1⟩ is shown in Figure 8. A set of labels
C derived from an assumption A is compatible if for every
label c ∈ C there exists an implication graph
of c from A such that L_I ⊆ C.
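The compatibility condition can be checked operationally; the sketch below (our representation, not the paper's: each label maps to the set of labels appearing on the vertices of one of its implication graphs) states it directly:

```python
def compatible(label_set, graph_labels):
    """C is compatible if every label in C has an implication graph whose
    vertex labels are all drawn from C itself (subset test on label sets)."""
    return all(graph_labels[c] <= label_set for c in label_set)

# A compatible set: each label's graph uses only labels from the set.
C1 = {("a", 1), ("b", 0)}
graphs = {("a", 1): {("a", 1)}, ("b", 0): {("a", 1), ("b", 0)}}
ok = compatible(C1, graphs)

# An incompatible set: ("a", 1)'s graph relies on ("c", 0), absent from the set.
bad = compatible({("a", 1)}, {("a", 1): {("c", 0), ("a", 1)}})
```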
We now prove the compatibility of the implications returned by
our labelling procedure. At each step, the labelling procedure
either labels a node for the first time or overwrites a constant
with a Ω. We prove the invariant that at any time, the current
set of implications C is compatible. We must prove that if a
label is overwritten with a new label, every other label must
have an implication graph which does not depend on the overwritten
label. This claim is proved in the following lemma
and is needed for all current labels to be simultaneously valid.
1 A vertex with no incoming edges
2 A vertex with no outgoing edges
Note that overwriting a 0 with a 1 (or vice versa) implies an
inconsistent assumption, and the procedure exits.
Lemma 2.1 Let A be a consistent assumption. If a label
(m = a) is overwritten by the label (m = Ω) in the current set of
labels, then for all labels (n = b) there is an implication
graph such that (m = a) is not a label of any vertex in the
graph.
Proof: We call net m a parent of net n if there is a node v of the
circuit graph such that m is an incoming arc and n an outgoing
arc of v. We also say that n is a child of m. We say m is a sibling
of n if there is a node v such that both m and n are outgoing
edges of v.
We prove the claim by contradiction. Suppose it is false.
Let the replacement of (m = a) by (m = Ω) be the first instance
that makes it false. Therefore, there was an implication
graph for each current implication before this happened. Let
(n_j = b_j) be an implication that does not have a valid implication
graph now. Consider any path in the old implication graph
for the net n_j, where (n_i = b_i)
is the i-th implication on the path. We consider the case where
b_j is a constant. Hence, all b_k's in the path are constants, since
a Ω at a net can only imply a Ω at another. The case in which
b_j = Ω is considered later. We show that if the assumption A is
consistent then it is possible to replace the overwritten vertex in the implication
graph for n_j. There are three cases on the relation between
n_{i-1} and n_i.
Case 1: The circuit edge n_{i-1} is a child of n_i. A Ω can be
inferred at n_i only if either n_{i-1} = Ω is a current implication,
or (n_{i'} = 0) is a current implication and n_{i'} and n_i are inputs to
an AND gate. In the first case, the fact that an implication
graph existed in which n_{i-1} was labelled with a constant is
contradicted. In the second case, n_{i-1} is the output of an AND
gate whose two inputs are n_i and n_{i'}. Since (n_{i'} = 0) is a
current implication, n_{i-1} must be labelled 0, and the path can
be rerouted through (n_{i'} = 0). In either case the overwritten
label is avoided.
Case 2: n_{i-1} and n_i are siblings, and (n_{i-1} = b_{i-1})
is an application of Rule C6. If n_{i+1} is either the parent or a
sibling of n_i, then n_i can be removed from the implication
graph by deriving (n_{i-1} = b_{i-1}) directly from the n_{i+1}
implication. If n_{i+1} is a child of n_i, then Ω can be inferred at
n_i only if either n_{i+1} = Ω is a current implication, or (n_{i'} = 0)
is a current implication and n_i and n_{i'} are inputs to an AND
gate. In the first case, the fact that an implication graph existed
in which n_{i+1} was labelled with a constant is contradicted. In
the second case, clearly n_{i+1} is labelled with 0, i.e., b_{i+1} = 0,
as otherwise the assumption A is inconsistent, and the path (n_{i+1} = 0,
n_i = b_i, n_{i-1} = b_{i-1}) can be replaced by the path (n_{i'} = 0,
n_{i-1} = b_{i-1}).
Note that to get a new implication graph for n_j
we need the implication graph for n_{i'}, but that exists
and is not affected by the overwriting of the previous label of
n_i with Ω.
Figure 9: Sequential circuit C
Case 3: n_{i-1} is a parent of n_i. The reasoning is the same as in
Case 2.
Thus we have shown that if the assumption was consistent,
each vertex labelled with (m = a) in the implication graph of a
current implication can be replaced with some other
current implication. This shows that the replacement of (m = a)
by (m = Ω) does not falsify the claim, which is a contradiction.
Now we consider the case in which b_j = Ω. Then there
is a greatest k such that b_k is a constant; b_l is a constant for all
l ≤ k and b_l = Ω for all k < l ≤ j. From the proof before,
we know there exists an implication graph for n_k in which (m = a) is
not used. This yields an implication graph for n_j
in which (m = a) is not used.
Lemma 2.2 Let A be a consistent assumption. Then the set of
labels returned by the algorithm is compatible.
Proof: At each step in the algorithm, either a value is implied
at a net for the first time or a constant is overwritten by a Ω.
The proof of this lemma follows by induction on the number
of steps of the algorithm and by using Lemma 2.1 to prove the
induction step.
Theorem 2.1 Let n_i stuck-at-v_i, for all 1 ≤ i ≤ k,
be the set of redundant faults reported by the algorithm. Then
the circuit obtained by setting each net n_i to the constant v_i is combinationally
equivalent to the original.
2.2 Sequential redundancies
Now we extend the algorithm for combinational circuits described
in the previous section to find sequential redundancies
by propagating implications across latches. The implications
may not be valid on the first clock cycle since the latches
power-up nondeterministically and have a random boolean
value initially. Nevertheless, we can use the notion of k-
delayed replacement which requires that the modified circuit
produce the same behaviour as the original only after k clock
cycles have elapsed. Thus, for example, if implying constant
v at a latch output from constant v at its input yields a redun-
dancy, a 1-delay replacement 3 is guaranteed on the removal of
that redundancy.
3 If we have latches where a reset value is guaranteed on the first cycle of
operation, it is sufficient to ensure that the constant v is equal to the reset value;
in this case the replacement is a 0-delay replacement.
Figure 10: A sequential implication graph from an assumption on a for the circuit C
Figure 11: An incorrect sequential implication graph from an assumption on a for the circuit C
The notion of a label in the implication graph is modified
so that it also contains an integer time offset with respect to
a global symbolic time step t. The rules for learning implications
are exactly the same as before, with the addition of a new
rule which allows us to propagate implications across latches:
when we go across a latch we modify the time offset accordingly,
e.g. if the output of a latch is labelled with 1 and offset
-2, the input of the latch can be labelled with 1 and offset -3.
An example of an implication graph for the circuit C in Figure
9 is shown in Figure 10.
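The latch-crossing rule can be sketched as follows (the representation is ours: a label is a (value, offset) pair):

```python
def across_latch(label, direction):
    """Shift a label's time offset when propagating across a latch:
    output -> input (backward) decrements the offset, input -> output
    (forward) increments it; e.g. value 1 at offset -2 on a latch
    output implies 1 at offset -3 on its input."""
    value, offset = label
    return (value, offset - 1) if direction == "backward" else (value, offset + 1)

back = across_latch((1, -2), "backward")
forward = across_latch((1, -3), "forward")
```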
This example also shows a potential problem with learning
sequential implications. Consider the circuit C in Figure 9.
For the two assumptions ⟨a_t ↦ 0⟩ (a is 0 at t, where t denotes
the global symbolic time) and ⟨a_t ↦ 1⟩ we get two implication
graphs (in Figures 10 and 11) which both imply (c = 0).
This might lead us to believe that c stuck-at-0 is a redundancy.
However, the new circuit obtained by replacing c
with 0, if it powers up in state 11 (each latch at 1), remains forever
in 11 with the circuit output x = 1. However, the original
circuit eventually produces x = 0, no matter which state
it powers up in. Thus we do not have a k-delay replacement
for any k. The reason for this incorrect redundancy identification
is that in order to infer (c = 0) from the second assumption we
needed (c_{t+1} = 1); had c been replaced
with 0 (i.e., for all times), c could not have been 1 at t + 1.
One way of solving the above problem is to ensure that no
net is labelled with different labels for different times. We will
label a net with at most one label, and if a net is labelled we
will associate a list of integers with this label which denotes
the time offsets when this label is valid. Thus, for the above
example, during the implication propagation phase for the assumption
we never infer a second label at a different time offset, and we will
not get the incorrect implication graph of Figure 11. Labelling
one net with at most one label also obviates the need for the
validation step described in [1].
The algorithm replaces a net n with the constant v if, for some
time offset t', it is either labelled with v or is unobservable for
all assumptions. With each such replacement, we associate a
time k as follows [1]. To validate a redundancy n stuck-at-v
at time t', we have a set of implication graphs, one for each
assumption, that imply either n_{t'} = v or n_{t'} = Ω. Let t'' be
the least time offset on any label in these implication graphs
such that for some net m, m_{t''} is labelled with a constant. Then
k = t' - t''. We say that n is k-cycle
stuck-at-v redundant. We use the following theorem to claim
that the circuit obtained by replacing net n with constant v is a
k-delayed safe replacement.
Lemma 2.3 ([1]) Let a net n be k-cycle stuck-at-v redundant.
Then the circuit obtained by setting net n to v results in a k-
delayed safe replacement of the original circuit.
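A tiny numeric sketch of the k computation just described (naming is ours, and this is a hedged reading of the definition in [1]: k is the distance between the validated offset t' and the earliest constant-labelled offset t''):

```python
def k_cycle(t_prime, constant_offsets):
    """k = t' - t'': t' is the offset of the validated redundancy, t'' the
    least offset at which any net carries a constant label in the
    supporting implication graphs."""
    return t_prime - min(constant_offsets)

# Redundancy validated at offset 0, earliest constant label at offset -3.
k = k_cycle(0, [-3, -1, 0])
```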
As in the combinational case, we allow overwriting of constants
with unobservability indicators. We make sure that the
label at net n at time t + a is overwritten only if the new label
is Ω and net n is not labelled at any other time offset (this is to
prevent the problem shown in Figure 11). This may make our
algorithm dependent on the order of application of rules, but
we have not explored the various options. The proofs of the following
two lemmas follow by easy extensions of Lemmas 2.1
and 2.2.
Lemma 2.4 Let A be a consistent assumption. If a label m_t =
a is replaced with m_t = Ω in the current set of labels, then
for all labels n_{t'} = b there is an implication graph such that
m_t = a is not a label in the graph.
Lemma 2.5 Let A be a consistent assumption. Then the set of
labels returned by the algorithm is compatible.
Hence, the redundancies reported by the algorithm are compatible
with each other and all redundancies can be removed
simultaneously to get a delayed safe replacement.
Theorem 2.2 Let n_i k_i-cycle stuck-at-v_i redundant, for all 1 ≤
i ≤ n, be the set of redundant faults reported by the algorithm.
Then the circuit obtained by setting each net n_i to v_i is a
K-delay safe replacement of the original, where K = k_1 + ... + k_n.
Proof: From Lemma 2.5, we know that for all 1 ≤ i ≤ n,
n_i remains k_i-cycle stuck-at-v_i redundant in the circuit obtained by
setting the other nets to their constants. It has been shown in [5] that
for any circuits C, D and E, if C is an a-delay replacement
for D and D is a b-delay replacement for E, then C is an (a + b)-
delay replacement for E. The desired result follows easily by
induction on n from this property of delay replacements.
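The composition property used in the induction can be sketched trivially, but it makes the overall bound explicit (naming is ours):

```python
def composed_delay_bound(ks):
    """Chaining replacements: an a-delay replacement of a b-delay replacement
    is an (a + b)-delay replacement [5]; n chained steps give k_1 + ... + k_n."""
    total = 0
    for k in ks:
        total += k  # induction step: a total-delay step composed with a k-delay step
    return total

K = composed_delay_bound([2, 3, 5])
```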
3 Experimental Results
We present some experimental results for this algorithm. We
demonstrate that our approach of identifying sequential redundancies
yields significant reduction in area and is better than
Circuit | Redundancy Removal | With Recursive Learning
Name | red LR A1 % | red LR A2 %
cordic
For legend see Table 2.
Table 1: Experimental results for combinational redundancies
the approach which removes only combinational redundancies.
We also show that for most examples, recursive learning gives
better results than the simple implication propagation scheme.
In fact, for many circuits, recursive learning could identify redundancies
where the simple implication propagation scheme
is unable to find any.
This algorithm was implemented in SIS [11]. The circuit
was first optimised using script.rugged, which performs
combinational optimisation on the network. The optimised circuit
was mapped with a library consisting of 2-input gates and
inverters. The sequential redundancy removal algorithm was
run on the mapped circuit. The propagation of implications
was allowed to extend 15 time steps forward and 15 time
steps backward from the global symbolic time. Table 2 shows
the mapped (to the MCNC91 library) area of the circuits obtained
by running script.rugged and that obtained by starting from
that result and applying the redundancy removal algorithm. For
very large circuits (s15850 and larger), BDD operations during
the full_simplify step in script.rugged were not performed.
We report results for those circuits on which our algorithm
was able to find redundancies.
As mentioned earlier, our algorithm starts with an assumption
on the nets and implies values on other nets of the circuit.
We implemented two flavors of selection of assumptions. In
the first case, a conflicting assignment was assumed on one net
and values were implied on other nets. The second case was
similar to the first except that once the implications could not
propagate for an assumption on a net, we performed a naïve
Circuit Attributes | Redundancy Removal | With Recursive Learning
Name PI PO L A | red C LR A1 % time | red C LR A2 % time
s953
43 26 183 3775
28 7035 8.4 66.9 92 733 70 6317 17.8 32.1
43 9380 10.0 493.7
s38417* 28 106 1464 33055 | 591 887 42 31943 3.4 1139.4 | 1129 9245 97 29718 10.1 1763.7
* full_simplify not run.
All times reported on an Alpha 21164 300 MHz dual processor with 2 GB of memory.
PI: number of primary inputs; PO: number of primary outputs; L: number of latches.
A: mapped area after script.rugged; A1: mapped area after redundancy removal; A2: mapped area after redundancy removal with recursive learning.
red: number of redundancies removed; LR: number of latches removed; C: upper bound on c, where the new circuit is a c-delay replacement.
time: CPU time; %: percentage area reduction.
Table 2: Experimental results for sequential redundancies
version of case splitting only on the net which was closest to
the original net from which the implications were propagated;
implications common to the two cases were also added to
the set of implications learnt for the original net. 4 This enabled
us to propagate implications over a larger set of nets in
the network and hence to discover more redundancies, at the
expense of CPU time. Table 2 indicates the area reduction
obtained both by simple propagation and by performing this
recursive learning. We find that even for this naïve recursive
learning we get a reduction in area in most of the circuits over
that obtained without the case split. For instance, for S5378 we
were able to obtain 37.5% area reduction with recursive learning
as against 19.6% without it. For most of the medium-sized
circuits we were not able to obtain any reduction in area without
recursive learning. For large circuits also we were able to
obtain approximately 5-10% area reduction. S35952 was an
exception where we did not obtain any further reduction in area.
Except for this circuit, the CPU time for recursive learning was
less than twice the CPU time for redundancy removal without
it. This suggests that more sophisticated recursive learning
4 If a node is unjustified during forward propagation of implications then
a case split is performed by setting the output net to 0 and 1. If the node is
unjustified during backward propagation, the case split is achieved by setting one
of the two inputs to the controlling input value (0 for an (N)AND gate and 1 for
an (N)OR gate) at a time and propagating the implications backward.
based techniques could yield larger area reduction without prohibitive
overhead in terms of CPU time.
Since our algorithm also identifies combinational redundancies,
we wanted to quantify how many of the redundancies
were purely combinational. To verify this, we ran our algorithm
on the circuits for combinational redundancy removal
only.
Table 1 shows the area reduction due to combinational redundancies
only, with and without recursive learning. In most
cases, the number of redundancies identified in Table 2 is significantly
larger than the set of combinational redundancies
identified by our algorithm. Only for S35952 and S953 did the
combinational redundancy removal result in approximately the
same area reduction as the sequential redundancy case.
For the example circuits presented here we were able to
achieve 0-37% area reduction. In a number of cases the algorithm
was able to remove a significant number of latches. In
all cases, the new circuit is a C-delay safe replacement of the
original circuit. The C reported in Table 2 is actually an upper
bound. For most of the delay-replaced circuits, C < 10000.
However, most practical circuits operate at speeds exceeding
100 MHz in present technology. C < 10000 for a circuit would
require the user to wait for at most 100 µs before useful operation
can begin. This is not a severe restriction.
We are unable to compare sequential redundancy removal
results with the previous work of Entrena and Cheng [8] because,
as we noted earlier, their notion of sequential replacement,
which is based on the conservative 0,1,X-valued simulation,
is not compositional (unlike the notion of delay replacement
that we use).
4 Future Work
Our redundancy removal algorithm does not find the complete
set of redundancies. We can extend this scheme in several
ways to identify larger sets. For instance, instead of analyzing
two assumptions due to a case split on a single net we could
case split on multiple nets and intersect the implications learnt
on this larger set of assumptions. One such method is to incrementally
select those which are at the frontier where the
first phase of implications died out. Additionally, if we split
on multiple nets it is possible to detect pairs of nets such that
if one is replaced with another the circuit functionality does
not change. With our current approach, because we split on
a single net, one of the nets in this pair is always a 1 or a 0,
which means that we are only identifying stuck-at-constant redundancies
For this algorithm we map a given circuit using a library of
two-input gates and inverters. A different approach would be
to use the original circuit and propagate the implications forward
and backward by building the BDDs for the node function
in terms of its immediate fanins. We intend to compare
the running times and area reduction numbers of our approach
with such a BDD-based approach. In addition, BDD-based
approaches may allow us to do redundancy removal for multi-valued
logic circuits as well in a relatively inexpensive way.
We can extend the notion of redundancy for multi-valued circuits
to identify cases where a net can take only a subset of its
allowed values. Then latches of this kind can be encoded using
fewer bits.
Acknowledgements
We had very useful discussions with Mahesh Iyer and Miron
Abramovici during the course of this work. The comments by
the referees also helped to improve the paper.
--R
"Identifying Sequential Redundancies Without Search,"
"The Transduction Method - Design of Logic Networks Based on Permissible Functions,"
Don't Cares in Multi-Level Network Optimiza- tion
"Recursive Learning: A New Implication Technique for Efficient Solution to CAD Problems - Test, Verification and Optimization,"
"Ex- ploiting Power-up Delay for Sequential Optimization,"
"Latch Redundancy Removal without Global Reset,"
"LOT: Logic Optimization with Testability - New Transformations using Recursive Learning,"
"Sequential Logic Optimization by Redundancy Addition and Removal,"
On Redundancy and Untestability in Sequential Circuits.
"On Removing Redundancies from Synchronous Sequential Circuits with Synchronizing Sequences,"
"SIS: A System for Sequential Circuit Synthesis,"
--TR
The Transduction Method-Design of Logic Networks Based on Permissible Functions
Don''t cares in multi-level network optimization
Exploiting power-up delay for sequential optimization
On Removing Redundancies from Synchronous Sequential Circuits with Synchronizing Sequences
On redundancy and untestability in sequential circuits
sequential redundancies without search
Sequential logic optimization by redundancy addition and removal
Latch Redundancy Removal Without Global Reset
--CTR
Vigyan Singhal , Carl Pixley , Adnan Aziz , Shaz Qadeer , Robert Brayton, Sequential optimization in the absence of global reset, ACM Transactions on Design Automation of Electronic Systems (TODAES), v.8 n.2, p.222-251, April | sequential optimization;recursive learning;sequential circuits;safe delay replacement;compatible unobservability |
266476 | Decomposition and technology mapping of speed-independent circuits using Boolean relations. | Presents a new technique for the decomposition and technology mapping of speed-independent circuits. An initial circuit implementation is obtained in the form of a netlist of complex gates, which may not be available in the design library. The proposed method iteratively performs Boolean decomposition of each such gate F into a two-input combinational or sequential gate G, which is available in the library, and two gates H/sub 1/ and H/sub 2/, which are simpler than F, while preserving the original behavior and speed-independence of the circuit. To extract functions for H/sub 1/ and H/sub 2/, the method uses Boolean relations, as opposed to the less powerful algebraic factorization approach used in previous methods. After logic decomposition, overall library matching and optimization is carried out. Logic resynthesis, performed after speed-independent signal insertion for H/sub 1/ and H/sub 2/, allows for the sharing of decomposed logic. Overall, this method is more general than existing techniques based on restricted decomposition architectures, and thereby leads to better results in technology mapping. | Introduction
Speed-independent circuits, originating from D.E. Muller's work [12], are hazard-free under the unbounded
gate delay model. With recent progress in developing efficient analysis and synthesis techniques, supported
by CAD tools, this sub-class has moved closer to practice, bearing in mind the advantages of speed-independent
designs, such as their greater temporal robustness and self-checking properties.
The basic ideas about synthesis of speed-independent circuits from event-based models, such as Signal
Transition Graphs (STGs) and Change Diagrams, are described e.g. in [4, 9, 6]. They provide general
conditions for logic implementability of specifications into complex gates . The latter are allowed to have
an arbitrary fanin and include internal feedback (their Boolean functions being self-dependent).
To achieve greater practicality, synthesis of speed-independent circuits has to rely on more realistic
assumptions about implementation logic. Thus, more recent work has been focused on the development
of logic decomposition techniques. It falls into two categories. One of them includes attempts to achieve
logic decomposition through the use of standard architectures (e.g. the standard-C architecture mentioned
below). The other group comprises work targeting the decomposition of complex gates directly, by finding
a behavior-preserving interconnection of simpler gates. In both cases, the major functional issue, in addition
to logic simplification, is that the decomposed logic must not violate the original speed-independent
specification. This criterion makes the entire body of research in logic decomposition and technology
mapping for speed-independent circuits quite specific compared to their synchronous counterparts.
This work has been partially supported by ACiD-WG (ESPRIT 21949), UK EPSRC project ASTI GR/L24038 and
CICYT TIC 95-0419
Two examples of the first category [1, 8] present initial attempts to move from complex gates to a
more structured implementation. The basic circuit architecture includes C elements (acting as latches)
and combinational logic, responsible for the computation of the excitation functions for the latches.
This logic is assumed to consist of AND gates with potentially unbounded fanin and unlimited input
inversions, and bounded-fanin OR gates. Necessary and sufficient conditions for implementability of
circuits in such an architecture (called the standard-C architecture) have been formulated in [8, 1]. They
are called Monotonic Cover (MC) requirements. The intuitive objective of the MC conditions is to
make the first level (AND) gates work in a one-hot fashion with acknowledgment through one of the C-
elements. Following this approach, various methods for speed-independent decomposition and technology
mapping into implementable libraries have been developed, e.g. in [14] and [7]. The former method only
decomposes existing gates (e.g., a 3-input AND into two 2-input ANDs), without any further search of
the implementation space. The latter method extends the decomposition to more complex (algebraic)
divisors, but does not tackle the limitation inherent in the initial MC architecture.
The best representative of the second category appears to be the work of S. Burns [3]. It provides
general conditions for speed-independent decomposition of complex (sequential) elements into two sequential
elements (or a sequential and a combinational element). Notably, these conditions are analyzed
using the original (unexpanded) behavioral model, thus improving the efficiency of the method. This
work is, in our opinion, a big step in the right direction, but addresses mainly correctness issues. It does
not describe how to use the efficient correctness checks in an optimization loop, and does not allow the
sharing of a decomposed gate by different signal networks. The latter issues were successfully resolved in
but only within a standard architecture approach.
In [15, 13] methods for technology mapping of fundamental mode and speed-independent circuits
using complex gates were presented. These methods, however, only identify when a set of simple logic
gates can be implemented as a complex gate, but cannot perform a speed-independent decomposition of
a signal function in case it does not fit into a single gate. In fact, a BDD-based implementation of the
latter is used as a post-optimization step after our proposed decomposition technique.
In our present work we are considering a more general framework which allows use of arbitrary gates
and latches available in the library to decompose a complex gate function, as shown in Figure 1. In that
respect, we are effectively making progress towards the more flexible second approach. The basic idea of
this new method is as follows.
An initial complex gate is characterized by its function F. The result of decomposition is a library
component designated by G and a set of (possibly still complex) gates labeled H_1, ..., H_n. The latter
are decomposed recursively until all elements are found in the library and optimized to achieve the lowest
possible cost. We thus by and large put no restrictions on the implementation architecture in this work.
However, as will be seen further, for the sake of practical efficiency, our implemented procedure deals only
with the 2-input gates and/or latches to act as G-elements in the decomposition. The second important
change of this work compared to [7] is that the new method is based on a full scale Boolean decomposition
rather than just on algebraic factorization. This allows us to widen the scope of implementable solutions
and improve on area cost (future work will tackle performance-oriented decomposition).
Our second goal in generalizing the C-element based decomposition has been to allow the designer
to use more conventional types of latches, e.g. D-latches and SR-latches, instead of C-elements that may
not exist in conventional standard-cell libraries. Furthermore, as our experimental results show (see
Section 6), in many cases the use of standard latches instead of C-elements helps improving the circuit
implementations considerably.
The power of this new method can be appreciated by looking at the example hazard.g taken from
a set of asynchronous benchmarks. The original STG specification and its state graph are shown in
Figure
2,a and b. The initial implementation using the "standard C-architecture" and its decomposition
using two input gates by the method described in [7] are shown in Figure 2,c and d. Our new method
produces a much cheaper solution with just two D-latches, shown in Figure 2,e. Despite the apparent
triviality (for an experienced human designer!) of this solution, none of the previously existing automated
tools has been able to obtain it. Also note that the D-latches are used in a speed-independent fashion,
Figure 1: General framework for speed-independent decomposition
and are thus free from meta-stability and hazard problems 1 .
Figure 2: An example of Signal Transition Graph (a), State Graph (b) and their implementation (c)(d)(e) (benchmark hazard.g)
The paper is organized as follows. Section 2 introduces the main theoretical concepts and notation.
Section 3 presents an overview of the method. Section 4 describes the major aspects of our Boolean
relation-based decomposition technique in more detail. Section 5 briefly describes its algorithmic imple-
mentation. Experimental results are presented in Section 6, which is followed by conclusions and ideas
about further work.
2 Background
In this section we introduce theoretical concepts and notation required for our decomposition method.
Firstly, we define State Graphs , which are used for logic synthesis of speed-independent circuits. The
State Graph itself may of course be generated by a more compact, user-oriented model, such as the
Signal Transition Graph. The State Graph provides the logic synthesis procedure with all information
necessary for deriving Boolean functions for complex gates. Secondly, the State Graph is used for a
property-preserving transformation, called signal insertion. The latter is performed when a complex
gate is decomposed into smaller gates, and the thus obtained new signals must be guaranteed to be
speed-independent (hazard-free in input/output mode using the unbounded gate delay model).
1 For example, all transitions on the input must be acknowledged by the output before the clock can fall and close the latch. In particular, there is no problem with setup and hold times as long as the propagation time from D to Q is larger than both setup and hold times, which is generally the case.
2.1 State Graphs and Logic Implementability
A State Graph (SG) is a labeled directed graph whose nodes are called states. Each arc of an SG is labeled with an event, that is a rising (a+) or falling (a−) transition of a signal a in the specified circuit. We also use the notation a* if we are not specific about the direction of the signal transition. Each state s is labeled with a vector v(s) of signal values. An SG is consistent if its state labeling v is such that: in every transition sequence from the initial state, rising and falling transitions alternate for each signal. Figure 2,b shows the SG for the Signal Transition Graph in Figure 2,a, which is consistent. We write s -a-> (s -a-> s') if there is an arc from state s (to state s') labeled with a.
The set of all signals whose transitions label SG arcs are partitioned into a (possibly empty) set of
inputs, which come from the environment, and a set of outputs or state signals that must be implemented.
In addition to consistency, the following two properties of an SG are needed for their implementability in
a speed-independent logic circuit.
The first property is speed-independence. It consists of three parts: determinism, commutativity and output-persistence. An SG is called deterministic if for each state s and each label a there is at most one state s' such that s -a-> s'. An SG is called commutative if, whenever two transitions can be executed from some state in any order, their execution always leads to the same state, regardless of the order. An event a* is called persistent in state s if it is enabled at s and remains enabled in any other state reachable from s by firing another event b*. An SG is called output-persistent if its output signal events are persistent in all states and no output signal event can disable input events. Any transformation (e.g., insertion of new signals for decomposition), if performed at the SG level, may affect all three properties.
The second requirement, Complete State Coding (CSC), is necessary and sufficient for the existence of a logic circuit implementation. A consistent SG satisfies the CSC property if, for every pair of states s, s' such that v(s) = v(s'), the set of output events enabled in both states is the same. (The SG in Figure 2,b is output-persistent and has CSC.) CSC does not however restrict the type of logic function implementing each signal: it only requires that each signal is cast into a single atomic gate. The complexity of such a gate can however go beyond that provided in a concrete library or technology.
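The CSC condition translates into a direct check: group states by their binary code and compare the sets of enabled output events. A minimal sketch, where `code` and `enabled_out` are hypothetical accessors of our own:

```python
# Sketch: CSC holds iff states with identical signal codes enable
# exactly the same set of output events (helper names are ours).
def has_csc(states, code, enabled_out):
    by_code = {}
    for s in states:
        out = frozenset(enabled_out(s))
        if by_code.setdefault(tuple(code(s)), out) != out:
            return False   # two equally-coded states disagree on outputs
    return True
```

Two states carrying the same code but enabling different output events are exactly a CSC conflict.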
The concepts of excitation regions and quiescent regions are essential for transformations of SGs, in particular for inserting new signals into them. A set of states is called an excitation region (ER) for event a* (denoted by ER(a*)) if it is the set of states such that s ∈ ER(a*) ⇔ s -a->. The quiescent region (denoted by QR(a*)) of a transition a*, with excitation region ER(a*), is the set of states in which a is stable and keeps the same value, i.e. a is equal to 1 (0) in QR(a+) (QR(a−)). Examples of ER and QR are shown in Figure 2,b.
2.2 Property-preserving event insertion
Our decomposition method is essentially behavioral - the extraction of new signals at the structural
(logic) level must be matched by an insertion of their transitions at the behavioral (SG) level. Event
insertion is an operation on an SG which selects a subset of states, splits each of them into two states and
creates, on the basis of these new states, an excitation region for a new event. Figure 3 shows the chosen
insertion scheme, analogous to that used by most authors in the area [16]. We shall say that an inserted signal a is acknowledged by a signal b if b is one of the signals delayed by the insertion of a (the same terminology will be used for the corresponding transitions). For example, d acknowledges x in Figure 3. State signal insertion must preserve the speed-independence of the original specification. The events corresponding to an inserted signal x are denoted x*, x+, x−, or, if no confusion occurs, simply by x.
Let A be a deterministic, commutative SG and let A' be the SG obtained from A by inserting event x. We say that an insertion state set ER(x) in A is a speed-independence preserving set (SIP-set) iff: (1) for each event a in A, if a is persistent in A, then it remains persistent in A'; and (2) A' is deterministic and commutative. The formal conditions for a set of states r to be a SIP-set can be given in terms of intersections of r with the so-called state diamonds of the SG [5]. These conditions are illustrated by Figure 4, where all possible cases of illegal intersections of r with state diamonds are shown. The first (rather
Figure 3: Event insertion scheme: (a) before insertion, (b) after insertion
inefficient) method for finding SIP-sets based on a reduction to the satisfiability problem was proposed
in [16]. An efficient method based on the theory of regions has been described in [5].
Figure 4: Possible violations of SIP conditions
Assume that the set of states S in an SG is partitioned into two subsets which are to be encoded by
means of an additional signal. This new signal can be added either in order to satisfy the CSC condition,
or to break up a complex gate into a set of smaller gates. In the latter case, a new signal represents the
output of the intermediate gate added to the circuit. Let r and r̄ denote the blocks of such a partition. For implementing such a partition we need to insert transitions of the new signal in the border states between r and r̄.
The input border of a partition block r, denoted by IB(r), is informally the subset of states of r through which r is entered. We call IB(r) well-formed if there are no arcs leading from states in r − IB(r) to states in IB(r). If a new signal is inserted using an input border which is not well-formed, then the consistency property is violated. Therefore, if an input border is not well-formed, its well-formed speed-independence preserving closure is constructed, as described by an algorithm presented in [7].
The insertion of a new signal can be formalized with the notion of I-partition ([16] used a similar definition). Given an SG A with a set of states S, an I-partition is a partition of S into four blocks: {ER(x+), QR(x+), ER(x−), QR(x−)}. QR(x−) (QR(x+)) defines the states in which x will have the stable value 0 (1). ER(x+) (ER(x−)) defines the excitation region of x* in the new SG A'. To distinguish between the sets of states for the excitation (quiescent) regions of the inserted signal x in the original SG A and the new SG A' we will refer to them as ER_A(x*) and ER_A'(x*), respectively. If the insertion of x preserves consistency and persistency, then the only transitions crossing boundaries of the blocks are the following: QR(x−) → ER(x+) → QR(x+) → ER(x−) → QR(x−).
Example 2.1 Figure 5 shows three different cases of the insertion of a new signal x into the SG for the hazard.g example. The insertion using ER_A(x+) and ER_A(x−) of Figure 5,a does not preserve speed-independence, as the SIP-set conditions are violated for ER_A(x+) (a violation of the type in Figure 4,b). When signal x is inserted by the excitation regions in Figure 5,b, then its positive switching is acknowledged by transitions a−, d+, while its negative switching is acknowledged by transition z−. The corresponding excitation regions satisfy the SIP conditions, and the new SG A', obtained after insertion of signal x, is shown in Figure 5,b. Note that the acknowledgment of x+ by transitions a−, d+ results in delaying some input signal transitions in A' until x+ fires. This changes the original I/O interface for SG A, because it requires the environment to look at a new signal before it can change a and d. This is generally incorrect (unless we are also separately finding an implementation for the environment or we are working under appropriate timing assumptions), and hence this insertion is rejected.
Figure 5: Different cases of signal insertion for benchmark hazard.g: violating the SIP-condition (a), changing the I/O interface (b), correct insertion (c)
The excitation regions ER_A(x+) and ER_A(x−) shown in Figure 5,c are SIP sets. They are well-formed
and comply with the original I/O interface because positive and negative transitions of signal x
are acknowledged only by output signal z. This insertion scheme is valid.
2.3 Basic definitions about Boolean Functions and Relations
An important part of our decomposition method is finding appropriate candidates for characterization
(by means of Boolean covers) of the sets of states ERA (x+) and ERA (x\Gamma) for the inserted signal x. For
this, we need to reference here several important concepts about Boolean functions and relations [11].
An incompletely specified (scalar) Boolean function is a functional mapping F: B^n → {0, 1, −}, where B = {0, 1} and '−' is a don't care value. The subsets of the domain B^n in which F takes the 0, 1 and don't care values are respectively called the OFF-set, ON-set and DC-set. F is completely specified if its DC-set is empty. We shall further always assume that F is a completely specified Boolean function unless specifically said otherwise.
Let F(X), X = {x1, ..., xn}, be a Boolean function of n Boolean variables. The set X is called the support of the function F. In this paper we shall mostly be using the notion of true support, which is defined as follows. A point (i.e. binary vector of values) in the domain B^n of a function F is called a minterm. A variable x ∈ X is essential for function F (or F is dependent on x) if there exist at least two minterms v1, v2 different only in the value of x, such that F(v1) ≠ F(v2). The set of essential variables for a Boolean function F is called the true support of F and is denoted by sup(F). It is clear that for an arbitrary Boolean function its support may not be the same as the true support. E.g., for F(a, b, c) = ab + ab' the true support is sup(F) = {a}, a proper subset of X.
Let F(X) be a Boolean function with support X = {x1, ..., xn}. The cofactor of F(X) with respect to x_i (x_i') is defined as F_{x_i} = F(x1, ..., x_i = 1, ..., xn) (respectively F_{x_i'} = F(x1, ..., x_i = 0, ..., xn)). The well-known Shannon expansion of a Boolean function F(X) is based on its cofactors: F = x_i · F_{x_i} + x_i' · F_{x_i'}. The Boolean difference, or Boolean derivative, of F(X) with respect to x_i is defined as ∂F/∂x_i = F_{x_i} ⊕ F_{x_i'}.
do
  1:  foreach non-input signal x do
        solutions(x) := ∅;
  2:    foreach gate G ∈ {latches, and2, or2} do
          add to solutions(x) the decompositions of F(x) with G;
        endfor
  3:    best H(x) := best SIP candidate from solutions(x);
      endfor
  4:  if foreach x, best H(x) is implementable
        or foreach x, best H(x) is empty then exit loop;
  5:  Let H be the most complex best H(x);
  6:  Insert new signal z implementing H and derive new SG;
forever
  7:  Library matching;

Figure 6: Algorithm for logic decomposition and technology mapping.
A function F(X) is unate in x_i if F_{x_i'} ≤ F_{x_i} or F_{x_i} ≤ F_{x_i'} under the ordering 0 < 1. In the former case it is called positive unate in x_i, in the latter case negative unate in x_i. A function that is not unate in x_i is called binate in x_i. A function is (positive/negative) unate if it is (positive/negative) unate in all support variables. Otherwise it is binate. For example, the function F = a + bc is positive unate in variable a because F_{a'} = bc ≤ F_a = 1.
For an incompletely specified function F(X) with a DC-set, let us define the DC function F_DC: B^n → B, which takes the value 1 exactly on the DC-set of F. We will say that a function F~ is an implementation of F if F~ agrees with F everywhere outside the DC-set, i.e. ON(F) ⊆ ON(F~) ⊆ ON(F) ∪ DC(F).
A Boolean relation is a relation between Boolean spaces [2, 11]; it can be seen as a generalization of a Boolean function, where a point in the domain B^n can be associated with several points in the codomain. More formally, a Boolean relation R is a subset R ⊆ B^n × {0, 1}^m. Sometimes we shall also use the '−' symbol as a shorthand in denoting elements of the codomain vector, e.g. 10 and 00 will be represented as one vector −0. Boolean relations play an important role in multi-level logic synthesis [11], and we shall use them in our decomposition method.
Consider a set of Boolean functions H = {H_1, ..., H_m} with the same domain B^n. Let R ⊆ B^n × {0, 1}^m be a Boolean relation with the same domain as the functions from H. We will say that H is compatible with R if for every point v in the domain of R the vector of values (v, H_1(v), ..., H_m(v)) is an element of R. An example of compatible functions will be given in Section 4.
3 Overview of the method
In this section we describe our proposed method for sequential decomposition of speed-independent
circuits aimed at technology mapping. It consists of three main steps:
1. Synthesis via decomposition based on Boolean relations;
2. Signal insertion and generation of a new SG;
3. Library matching
The first two steps are iterated until all functions are decomposed into implementable gates or no
further progress can be made. Each time a new signal is inserted (step 2), resynthesis is performed for all
output signals (step 1). Finally, step 3 collapses decomposed gates and matches them with library gates.
The pseudo-code for the technology mapping algorithm is given in Figure 6.
By using a speed-independent initial SG specification, a complex gate implementation of the Boolean
function for each SG signal is guaranteed to be speed-independent. Unfortunately this gate may be too
large to be implemented in a semi-custom library or even in full custom CMOS, e.g. because it requires
too many stacked transistors. The goal of the proposed method is to break this gate starting from its
output by using sequential (if its function is self-dependent, i.e. it has internal feedback) or combinational
gates.
Given a vector X of SG signals and one non-input signal y ∈ X (in general the function F(X) for y may be self-dependent), we try to decompose the function F(X) into (line 2 of the algorithm in Figure 6):
• a combinational or sequential gate with function G(Z, y), where Z is a vector of newly introduced signals,
• a vector of combinational 2 functions H(X) for the signals Z,
so that G(H(X)) implements F(X). Moreover, we require the newly introduced signals to be speed-independent (line 3). We are careful not to introduce any unnecessary fanouts due to non-local acknowledgment, since they would hinder successive area recovery by gate merging (when allowed by the library).
The problem of representing the flexibility in the choice of the H functions has been explored, in the
context of combinational logic minimization, by [19] among others. Here we extend its formulation to
cover also sequential gates (in Sections 4.1 and 4.3). This is essential in order to overcome the limitations
of previous methods for speed-independent circuit synthesis that were based on a specific architecture.
Now we are able to use a broad range of sequential elements, like set and reset dominant SR latches,
transparent D latches, and so on. We believe that overcoming this limitation of previous methods (that
could only use C elements and dual-rail SR-latches) is one of the major strengths of this work. Apart
from dramatically improving some experimental results, it allows one to use a "generic" standard-cell
library (that generally includes SR and D latches, but not C elements) without the need to design and
characterize any new asynchronous-specific gates.
The algorithm proceeds as follows. We start from an SG and derive a logic function for all its non-input
signals (line 1). We then perform an implementability check for each such function as a library
gate. The largest non-implementable function is selected for decomposition. In order to limit the search
space, we currently try as candidates for G (line 2):
• all the sequential elements in the library (assumed to have two inputs at most, again in order to limit the search space),
• two-input AND, OR gates with all possible input inversions.
The flexibility in the choice of the functions H_1 and H_2 (the two inputs of G) is represented as a Boolean relation that describes the solution space of F, as discussed in Section 4.1. The set of function pairs (H_1, H_2) compatible with the Boolean relation is then checked for speed-independence (line 3), as described in Section 2.2. This additional requirement has forced us to implement a new Boolean relation minimizer that returns all compatible functions, as outlined in Section 5.1. If neither function of a pair is speed-independent, the pair is immediately rejected.
Then, both H 1 and H 2 are checked for approximate (as discussed above) implementability in the
library, in increasing order of estimated cost. We have two cases:
1. both are speed-independent and implementable: in this case the decomposition is accepted,
2. otherwise, the most complex implementable H i is selected, and the other one is merged with G.
2 The restriction that H(X) be combinational will be partially lifted in Section 4.3.
The latter is a heuristic technique aimed at keeping the decomposition balanced. Note that at this stage
we can also implement H 1 or H 2 as a sequential gate if the sufficient conditions described in Section 4.3
are met.
The procedure is iterated as long as there is progress or until everything has been decomposed (line 4). Each time a new function H_i is selected to be implemented as a new signal, it is inserted into the SG (line 6) and resynthesis is performed in the next iteration.
The incompleteness of the method is essentially due to the greedy heuristic search that accepts
the smallest implementable or non-implementable but speed-independent solution. We believe that an
exhaustive enumeration with backtracking would be complete even for non-autonomous circuits, by a
relatively straightforward extension of the results in [17].
At the end, we perform a Boolean matching step ([10]) to recover area and delay (line 7). This step
can merge together the simple 2-input combinational gates that we have (conservatively) used in the
decomposition into a larger library gate. It is guaranteed not to introduce any hazards if the matched
gates are atomic.
4 Logic decomposition using Boolean relations
4.1 Specifying permissible decompositions with BRs
In this paper we apply BRs to the following problem.
Given an incompletely specified Boolean function F(X) for signal y, y ∈ X, decompose it into two levels G and H = {H_1, ..., H_n}, such that G(H(X), y) implements F(X) and the functions G and H have a simpler implementation than F (any such H will be called permissible).
Note that the first-level function H(X) = {H_1(X), ..., H_n(X)} is a multi-output logic function specifying the behavior of the internal nodes of the decomposition, z_i = H_i(X).
The final goal is a function decomposition into a form that is easily mappable to a given library. Hence
only functions available in the library are selected as candidates for G. Then at each step of decomposition
a small mappable piece (function G) is cut from the potentially complex and unmappable function F .
For a selected G all permissible implementations of function H are specified with a BR and then via
minimization of BRs a few best compatible functions are obtained. All of them are verified for speed-
independence by checking SIP-sets. The one which is speed-independent and has the best estimated cost
is selected.
Since the support of function F can include the output variable y, it can specify sequential behavior.
In the most general case we perform two-level sequential decomposition such that both function G and
function H can be sequential, i.e., contain their own output variables in the supports. The second level
of the decomposition is made sequential by selecting a latch from the library as a candidate gate, G. The
technique for deriving a sequential solution for the first level H is described in Section 4.3.
We next show by example how all permissible implementations of decomposition can be expressed
with BRs.
Example 4.1 Consider the STG in Figure 7,a, whose SG appears in Figure 8,a. Signals a, c and d are inputs and y is an output. A possible implementation of the logic function for y is F(a, c, d, y), whose values are given in the table in Figure 8,b. Let us decompose this function using as G a reset-dominant Rs-latch, represented by the equation y = R'(S + y) (see Figure 7,b). At the first step we specify the permissible implementations for the first-level functions by using the BR specified in the table in Figure 8,b. Consider, for example, the vector (a, c, d, y) = 0000. It is easy to check that F(0, 0, 0, 0) = 0. Hence, for vector 0000 the table specifies that (R, S) ∈ {1−, −0}: any implementation of R and S must keep, for this input vector, either 1 at R or 0 at S, since these are the necessary and
3 For simplicity we consider the decomposition problem for a single-output binary function F, although the generalization to multi-output and multi-valued functions is straightforward.
Figure 7: Sequential decomposition for function F(a, c, d, y)
Region        C-element    D-latch      Rs          Sr    AND         OR
QR(y−)        {0−, −0}     {0−, −0}     {1−, −0}    0−    {0−, −0}    00
unreachable   −−           −−           −−          −−    −−          −−

Table 1: Boolean relations for different gates
sufficient conditions for the Rs-latch to keep the value 0 at the output y, as required by the specification. On the other hand, only one solution, (R, S) = 01, is possible for the input vector 1100, which corresponds to setting the output of the Rs-latch to 1. The Boolean relation solver will find, among others, the two solutions illustrated in Figure 7,c,d: (1) R = cd, S = acd and (2) R = cd + yc, S = a. Any of these solutions can be chosen depending on the cost function.
Table 1 specifies compatible values of BRs for different types of gates: a C-element, a D-latch, a reset-dominant Rs-latch, a set-dominant Sr-latch, a two-input AND gate and a two-input OR gate. The states of an SG are partitioned into four subsets, ER(y+), QR(y+), ER(y−) and QR(y−), with respect to the signal y with function F(X) for which decomposition is performed. All states that are not reachable in the SG form a DC-set for the BR. E.g., for each state s from ER(y+) only one compatible solution, 11, is allowed for the input functions H of a C-element. This is because the output of a C-element in all states s ∈ ER(y+) is at 0 and F(s) = 1; under these conditions the combination 11 is the only possible input combination that implies 1 at the output of a C-element. On the other hand, for each state s ∈ QR(y+), the output is at 1 and F(s) = 1; hence it is enough to keep at least one input of the C-element at 1. This is expressed by the values {1−, −1} in the second line of the table. Similarly all other compatible values are derived.
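The table rows can be reproduced mechanically: for a candidate gate G and a region of the SG, the compatible input vectors are exactly those for which G, given the current value of y, produces the value required by F. A small sketch (the gate equation matches the C-element described in the text; the helper name is ours):

```python
from itertools import product

def compatible_inputs(G, n, y_now, y_required):
    """All n-bit input vectors z with G(z, y_now) == y_required,
    i.e. one cell of the Boolean-relation table for gate G."""
    return {z for z in product((0, 1), repeat=n) if G(z, y_now) == y_required}

# C-element: next output q' = z1·z2 + q·(z1 + z2)
c_element = lambda z, q: (z[0] & z[1]) | (q & (z[0] | z[1]))
```

For ER(y+) (output currently 0, required 1) this yields the single vector 11, and for QR(y+) the set {1−, −1}, exactly as argued above for the C-element column.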
4.2 Functional representation of Boolean relations
Given an SG satisfying CSC requirement, each output signal y 2 X is associated with a unique incompletely
specified function F (X), whose DC-set represents the set of unreachable states. F (X) can
be represented by three completely specified functions, denoted ON(y)(X), OFF (y)(X) and DC(y)(X)
representing the ON-, OFF-, and DC-set of F (X), such that they are pairwise disjoint and their union
is a tautology.
Figure 8: (a) State graph, (b) Decomposition of signal y by an RS latch
Let a generic n-input gate be represented by a Boolean equation q = G(Z, q), where Z = {z_1, ..., z_n} are the inputs of the gate, and q is its output 4. The gate is sequential if q belongs to the true support of G(Z, q).
We now give the characteristic function of the Boolean relation for the implementation of F(X) with gate G. This characteristic function represents all permissible implementations of z_i = H_i(X) that allow F to be decomposed by G:

BR(y)(X, Z) = ON(y)(X) · G(Z, y) + OFF(y)(X) · (G(Z, y))' + DC(y)(X)    (1)

Given the characteristic function (1), the corresponding table describing the Boolean relation can be derived using cofactors. For each minterm m with support in X, the cofactor BR(y)_m gives the characteristic function of all compatible values for Z (see the example below). Finding a decomposition of F with gate G is thus reduced to finding a set of n functions z_i = H_i(X) compatible with BR(y).
Example 4.2 (Example 4.1 continued.) The SG shown in Figure 8,a corresponds to the STG in Figure 7,a. Let us consider how the implementation of signal y with a reset-dominant Rs-latch can be expressed using the characteristic function of the BR. Recall that the table shown in Figure 8,b represents the function F(a, c, d, y) and the permissible values for the inputs R and S of the Rs-latch. The ON-, OFF- and DC-sets of the function F(a, c, d, y) are defined by the corresponding completely specified functions ON(y), OFF(y) and DC(y).
4 In the context of Boolean equations representing gates we shall liberally use the "=" sign to denote "assignment", rather than mathematical equality. Hence q in the left-hand side of this equation stands for the next value of signal q, while the one in the right-hand side corresponds to its previous value.
The set of permissible implementations for R and S is characterized by the characteristic function BR(y)(a, c, d, y, R, S) of the BR specified in the table. It can be obtained using equation (1) by substituting the expressions for ON(y), OFF(y), DC(y) and the function of the Rs-latch, y = R'(S + y). This function has value 1 for all combinations represented in the table and value 0 for all combinations that are not in the table. For example, the set of compatible values for the minterm 0000 is given by the cofactor BR(y)_0000(R, S) = R + S', which corresponds to the terms 1− and −0 given by the Boolean relation for that minterm.
Two possible solutions of the equation BR(y) = 1, corresponding to Figure 7,c and d, are: (1) R = cd, S = acd and (2) R = cd + yc, S = a.
4.3 Two-level sequential decomposition
Accurate estimation of the cost of each solution produced by the Boolean relation minimizer is essential
in order to ensure the quality of the final result. The minimizer itself can only handle combinational logic,
but often (as shown below) the best solution can be obtained by replacing a combinational gate with a
sequential one. This section discusses some heuristic techniques that can be used to identify when such a
replacement is possible without altering the asynchronous circuit behavior, and without undergoing the
cost of a full-blown sequential optimization step. Let us consider our example again.
Example 4.3 (Example 4.1 continued.) Let us assume that the considered library contains three-input AND and OR gates and Rs-, Sr- and D-latches. Implementation (1) of signal y by an Rs-latch with inputs R = cd and S = acd matches the library and requires two AND gates (one with two and one with three inputs) and one Rs-latch. Implementation (2) of y by an Rs-latch with inputs R = cd + yc and S = a would be rejected, as it requires a complex AND-OR gate which is not in the library. However, when input y in the function cd + yc is replaced by signal R, the output behavior of R does not change, i.e. the function R = cd + yc can be safely replaced by R = cd + Rc. The latter equation corresponds to the function of a D-latch and gives the valid implementation shown in Figure 7,e.
Our technique to improve the precision of the cost estimation step, by partially considering sequential gates, is as follows:
1. Produce permissible functions z_1 = H_1(X) and z_2 = H_2(X) via the minimization of Boolean relations (z_1 and z_2 are always combinational as z_1, z_2 ∉ X).
2. Estimate the complexity of H_1 and H_2: if H_i matches the library then Complexity = cost of the gate, else Complexity = literal count.
3. Estimate the possible simplification of H_1 and H_2 due to adding signals z_1 and z_2 to their supports, i.e. estimate the complexity of the new pair {H'_1, H'_2} of permissible functions z_i = H'_i(X, z_1, z_2).
4. Choose the best complexity between {H_1, H_2} and {H'_1, H'_2}.
Let us consider the task of determining H'_1 and H'_2 as in step 3. Let A be an SG encoded by variables from the set V and let z = H(X, y), with X ⊆ V, y ∈ V, be an equation for the new variable z which is to be inserted in A. The resulting SG is denoted A' = Ins(A, z = H(X, y)) (sometimes we will simply write A' = Ins(A, z), or A' = Ins(A, z_1, z_2) when more than one signal is inserted).
A solution for step 3 of the above procedure can be obtained by minimizing the functions for signals z_1 and z_2 in the SG A' = Ins(A, z_1, z_2). However, this is rather inefficient because the creation of SG A' is computationally expensive. Hence, instead of looking for an exact estimation of the complexity of signals z_1 and z_2, we rely on a heuristic solution, following the ideas on input resubstitution presented in Example 4.3. For computational efficiency, the formal conditions on input resubstitution should be formulated in terms of the original SG A rather than in terms of the SG A' obtained after the insertion of the new signals 5.
Lemma 4.1 Let Boolean function H(X, y) implement the inserted signal z and be positive (negative)
unate in y. Let H'(X, z) be the function obtained from H(X, y) by replacing each literal y (or y') by
literal z. The SGs A' = Ins(A, z=H(X, y)) and A'' = Ins(A, z=H'(X, z)) are isomorphic iff the following
condition is satisfied: (H(X, 1) ⊕ H(X, 0)) · S = 0,
where S is the characteristic function describing the set of states ERA (z+) ∪ ERA (z-) in A.
Informally Lemma 4.1 states that resubstitution of input y by z is permissible if in all states where
the value of function H(X; y) depends on y, the inserted signal z has a stable value.
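This check can be sketched with truth tables (the function and excitation states below are illustrative, not those of the running example): compute the states where H actually depends on y — the Boolean difference between the y=1 and y=0 cofactors — and verify that none of them is a state where z is excited.

```python
from itertools import product

def depends_on_y(H, n_x):
    """X-minterms where H(X, y) changes with y:
    the Boolean difference H|y=1 XOR H|y=0."""
    return {x for x in product((0, 1), repeat=n_x)
            if H(*x, 1) != H(*x, 0)}

def resubstitution_ok(H, n_x, excited_states):
    """Lemma 4.1-style check: y may be replaced by the inserted signal z
    iff no state where H depends on y is an excitation state of z."""
    return depends_on_y(H, n_x).isdisjoint(excited_states)

# Illustrative function H(c, d, y) = c*d + y'*c (negative unate in y);
# it depends on y exactly when c=1 and d=0.
H = lambda c, d, y: (c & d) | ((1 - y) & c)

# Hypothetical excitation-region states, given as (c, d) combinations.
print(resubstitution_ok(H, 2, excited_states={(0, 1)}))  # True
print(resubstitution_ok(H, 2, excited_states={(1, 0)}))  # False
```

In a real implementation these sets would be represented by BDDs, as the paper notes; the truth-table enumeration is only for exposition.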
Example 4.4 (Example 4.1 continued.) Let input R of the RS-latch be implemented as R = cd + y'c (Figure
7,d). The ON-set of function H = cd + y'c is shown by the dashed line in Figure 8,a. The input
border of H is the set of states by which its ON-set is entered in the original SG A.
By similar considerations we have that IB(H') = {0100}. These input borders satisfy the SIP conditions
and hence IB(H) can be taken as ERA (R+), while ERA (R-) must be expanded beyond IB(H') by state
1100 so as not to delay the input transition a+ (ERA (R-) = {0100, 1100}).
The set of states where the value of function H essentially depends on signal y is given by the
cube ac: H is negative unate in y, and cube ac has no intersection with ERA (R+) ∪ ERA (R-).
Therefore, by the condition of Lemma 4.1, literal y' can be replaced by literal R, thus producing a new
permissible function R = cd + Rc.
This result can be generalized for binate functions, as follows.
Lemma 4.2 Let Boolean function H(X, y) implement the inserted signal z and be binate in y. Function
H can be represented as H(X, y) = y · H a (X) + y' · H b (X), where H a and H b are Boolean functions not depending
on y. Let H'(X, z) = z · H a (X) + z' · H b (X). The SGs A' = Ins(A, z=H(X, y)) and A'' = Ins(A, z=H'(X, z)) are
isomorphic iff the following conditions are satisfied, where S 1 and S 2
are characteristic Boolean functions describing the sets of states ERA (z+) ∪ ERA (z-) and
ERA (z+) in A, respectively.
The proof is given in the Appendix.
The conditions of Lemma 4.2 can be efficiently checked within our BDD-based framework. They
require checking two tautologies involving functions defined over the states of the original SG A. This
heuristic solution is a trade-off between computational efficiency and optimality. Even though the estimation
is still not completely exact (the exact solution requires the creation of A' = Ins(A, z)), it allows
us to discover and possibly use the implementation of Figure 7,e.
5 Note that this heuristic estimation covers only the cases when one of the input signals of a combinational permissible
function H i is replaced by the feedback z i from the output of H i itself. Other cases can also be investigated, but checking
them would be too complex.
5 Implementation aspects
The method for logic decomposition presented in the previous section has been implemented in a synthesis
tool for speed-independent circuits. The main purpose of such implementation was to evaluate the
potential improvements that could be obtained in the synthesis of speed-independent circuits by using a
Boolean-relation-based decomposition approach. Efficiency of the current implementation was considered
to be a secondary goal at this stage of the research.
5.1 Solving Boolean relations
In the overall approach, it is required to solve BRs for each output signal and for each gate and latch
used for decomposition. Furthermore, for each signal and for each gate, several solutions are desirable in
order to increase the chances to find SIP functions.
Previous approaches to solve BRs [2, 18] do not satisfy the needs of our synthesis method, since (1)
they minimize the number of terms of a multiple-output function and (2) they deliver (without significant
modifications to the algorithms and their implementation) only one solution for each BR. In our case we
need to obtain several compatible solutions with the primary goal of minimizing the complexity of each
function individually. Term sharing is not significant because two-level decomposition of a function is not
speed-independent in general, and hence each minimized function must be treated as an atomic object.
Sharing can be exploited, on the other hand, when re-synthesizing the circuit after insertion of each new
signal. For this reason we devised a heuristic approach to solve BRs. We next briefly sketch it.
Given a BR over inputs X and outputs Z, each function H i for z i is individually minimized by assuming that all the other
functions will be defined in such a way that the combined H(X) is a compatible solution for the BR. In
general, an incompatible solution may be generated when combining all the H i 's. Taking the example of
Figure 8, an individual minimization of R and S could generate an incompatible solution.
A minterm with incompatible values is then selected, e.g. a'c'd'y', for which only the
compatible values 1- or -0 are acceptable. New BRs are derived by freezing different compatible values
for the selected minterm. In this case, two new BRs will be produced with the values 1- and -0,
respectively, for the minterm a'c'd'y'. Next, each BR is again minimized individually for each output
function, and new minterms are frozen until a compatible solution is obtained.
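The freezing step can be sketched as follows; the toy relation, the don't-care encoding with None, and the function names are our own illustration, not the paper's data structures:

```python
def incompatible_minterms(relation, H1, H2):
    """Return minterms where the pair (H1, H2) falls outside the relation.
    `relation` maps each input minterm to a set of allowed (z1, z2)
    templates, where None in a template means don't-care."""
    def allowed(pair, templates):
        return any(all(t is None or t == v for t, v in zip(tmpl, pair))
                   for tmpl in templates)
    return [m for m, templates in relation.items()
            if not allowed((H1(m), H2(m)), templates)]

# Toy 2-output relation over 2-bit input minterms.
relation = {
    (0, 0): {(1, None), (None, 0)},   # z1z2 must be 1- or -0
    (0, 1): {(None, None)},           # unconstrained
    (1, 0): {(1, 0)},
    (1, 1): {(None, 0)},
}
# Candidates produced by minimizing each output independently.
H1 = lambda m: m[0] | m[1]
H2 = lambda m: 1 - m[0]
print(incompatible_minterms(relation, H1, H2))
# [(0, 0)]: here (H1,H2)=(0,1), outside the relation -> freeze 1- or -0
# at this minterm, producing two new BRs to minimize again.
```

A branch-and-bound search over the resulting tree of frozen BRs then yields several compatible solutions, as described above.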
This approach generates a tree of BRs to be solved. This provides a way of obtaining several compatible
solutions for the same BR. However, the exploration may become prohibitively expensive if the search tree
is not pruned. In our implementation, a branch-and-bound-like pruning strategy has been incorporated
for such purpose. Still, the time required by the BR solver dominates the computational cost of the
overall method in our current implementation. Ongoing research on solving BRs for our framework
is being carried out. We believe that the fact that we pursue to minimize functions individually, i.e.
without caring about term sharing among different output functions, and that we only deal with 2-output
decompositions, may be crucial to derive algorithms much more efficient than the existing approaches.
5.2 Selection of the best decomposition
Once a set of compatible solutions has been generated for each output signal, the best candidate is
selected according to the following criteria (in priority order):
1. At least one of the decomposed functions must be speed-independent.
2. The acknowledgment of the decomposed functions must not increase the complexity of the implementation
of other signals (see section 5.3).
3. Solutions in which all decomposable functions are implementable in the library are preferred.
4. Solutions in which the complexity of the largest non-implementable function is minimized are
preferred. This criterion helps to balance the complexity of the decomposed functions and derive
balanced tree-like structures rather than linear ones 6 .
5. The estimated savings obtained by sharing a function for the implementation of several output
signals is also considered as a second order priority criterion.
Among the best candidate solutions for all output signals, the function with the largest complexity,
i.e. the farthest from implementability, is selected to be implemented as a new output signal of the SG.
The complexity of a function is calculated as the number of literals in factored form. In case it is
a sequential function and it matches some of the latches of the gate library, the implementation cost is
directly obtained from the information provided by the library.
5.3 Signal acknowledgment and insertion
For each function delivered by the BR solver, an efficient SIP insertion must be found. This reduces
to finding a partition {ERA (x+), QRA (x+), ERA (x-), QRA (x-)} of the SG A such that ERA (x+) and
ERA (x-) (which are restricted to be SIP sets, Section 2.2) become the positive and negative ERs of the
new signal x. QRA (x+) and QRA (x-) stand for the corresponding state sets where x will be stable and
equal to 1 and 0, respectively.
In general, each function may have several ERA (x+) and ERA (x-) sets acceptable as ERs. Each one
corresponds to a signal insertion with different acknowledging output signals for its transitions. In our
approach, we perform a heuristic exploration seeking different ERA (x+) and ERA (x-) sets for each
function. We finally select one according to the following criteria:
- Sets that are acknowledged only by the signal that is being decomposed (i.e. local acknowledgment)
are preferred.
- If no set with local acknowledgment is found, the one with the least acknowledgment cost is selected.
The selection of the ERA (x+) and ERA (x-) sets is done independently. The cost of acknowledgment
is estimated by considering the influence of the inserted signal x on the implementability of the other
signals. The cost can be either increased or decreased depending on how ERA (x+) and ERA (x-) are
selected, and is calculated by incrementally deriving the new SG after signal insertion.
As an example, consider the SG of Figure 5,c and the insertion of a new signal x for the function
d. A valid SIP set for ERA (x+) would be the set of states {1100, 0100, 1110, 0110}, where
state 1100 is the input border for the inserted function. A valid SIP set for ERA (x-) would be the
set of states {1001, 0001}. With such an insertion, ERA (x+) will be acknowledged by the transition z+ and
ERA (x-) by z-. However, this insertion is not unique. For the sake of simplicity, let us assume that a
and d are also output signals. Then an insertion with a different ERA (x+) would also be valid. In that
case, the transition x+ would be acknowledged by the transitions a- and d+.
5.4 Library mapping
The logic decomposition of the non-input signals is completed by a technology mapping step aimed
at recovering area and delay based on a technology-dependent library of gates. These reductions are
achieved by collapsing small fanin gates into complex gates, provided that the gates are available in the
library. The collapsing process is based on the Boolean matching techniques proposed by Mailhot et al.
[10], adapted to the existence of asynchronous memory elements and combinational feedback in speed-
independent circuits. The overall technology mapping process has been efficiently implemented based on
the utilization of BDDs.
6 Different criteria, of course, may be used when we also consider the delay of the resulting implementation, since then
keeping late arriving signals close to the output is generally useful and can require unbalanced trees.
6 Experimental results
6.1 Results in decomposition and technology mapping
The method for logic decomposition presented in the previous sections has been implemented and applied
to a set of benchmarks. The results are shown in Table 2.
Circuit | signals I/O | literals/latches old | literals/latches new | CPU (secs) | Area non-SI: lib 1, lib 2, best | Area SI: 2 inp, map, best
chu150 3/3 14/2 10/1
converta 2/3 12/3 16/4 252 352 312 312 338 296 296
drs
ebergen 2/3 20/3 6/2 4 184 160 160 160 144 144
hazard 2/2 12/2 0/2 1 144 120 120 104 104 104
nak-pa 4/6 20/4 18/2 441 256 248 248 250 344 250
nowick 3/3 16/1 16/1 170 248 248 248 232 256 232
sbuf-ram-write 5/7 22/6 20/2 696 296 296 296 360 338 338
trimos-send 3/6 36/8 14/10 2071 576 480 480 786 684 684
Total 252/52 180/37 4288 3984 3976 4180 4662 3982
Table 2: Experimental results.
The columns "literals/latches" report the complexity of the circuits derived after logic decomposition
into 2-input gates. The results obtained by the method presented in this paper ("new") are significantly
better than those obtained by the method presented in [7] ("old"). Note that the library used for the
"new" experiments was deliberately restricted to D, Sr and Rs latches (i.e. without C-elements, since
they are generally not part of standard cell libraries). This improvement is mainly achieved because of
two reasons:
- The superiority of Boolean methods versus algebraic methods for logic decomposition.
- The intensive use of different types of latches to implement sequential functions, compared to the
C-element-based implementation in [7].
However, the improved results obtained by using Boolean methods are paid for by a significant
increase in CPU time. This is the reason why some of the examples presented in [7] have not
been decomposed. We are currently exploring ways to alleviate this problem by finding new heuristics to
solve Boolean relations efficiently.
6.2 The cost of speed independence
The second part of Table 2 is an attempt to evaluate the cost of implementing an asynchronous specification
as a speed-independent circuit. The experiments have been done as follows. For each bench-
mark, the following script has been run in SIS, using the library asynch.genlib: astg to f; source
script.rugged; map. The resulting netlists could be considered a lower bound on the area of the circuit
regardless of its hazardous behavior (i.e. the circuit only implements the correct function for each output
signal, without regard to hazards). script.rugged is the best known general-purpose optimization script
for combinational logic. The columns labeled "lib 1" and "lib 2" refer to two different libraries, one biased
towards using latches instead of combinational feedback 7 , the other one without any such bias.
The columns labeled SI report the results obtained by the method proposed in this paper. Two
decomposition strategies have been experimented before mapping the circuit onto the library:
- Decompose all gates into 2-input gates (2 inp).
- Decompose only those gates that are not directly mappable into gates of the library (map).
In both cases, decomposition and mapping preserve speed independence, since we do not use gates (such
as MUXes) that may have a hazardous behavior when the select input changes. There is no clear evidence
that performing an aggressive decomposition into 2-input gates is always the best approach for technology
mapping. The insertion of multiple-fanout signals offers opportunities to share logic in the circuit, but
also precludes the mapper from taking advantage of the flexibility of mapping tree-like structures. This
trade-off must be better explored in forthcoming work.
Looking at the best results for non-SI/SI implementations, we can conclude that preserving speed
independence does not involve a significant overhead. In our experiments we have shown that the reported
area is similar. Some benchmarks were even more efficiently implemented by using the SI-preserving
decomposition. We attribute these improvements to the efficient mapping of functions into latches by
using Boolean relations.
7 Conclusions and future work
In this paper we have shown a new solution to the problem of multi-level logic synthesis and technology
mapping for asynchronous speed-independent circuits. The method consists of three major parts. Part 1
uses Boolean relations to compute a set of candidates for logic decomposition of the initial complex gate
circuit implementation. Thus each complex gate F is iteratively split into a two-input combinational
or sequential gate G available in the library and two gates H 1 and H 2 that are simpler than F , while
preserving the original behavior and speed-independence of the circuit. The best candidates for H 1 and
H 2 are selected for the next step, providing the lowest cost in terms of implementability and new signal
insertion overhead. Part 2 of the method performs the actual insertion of new signals for H 1 and/or H 2
into the state graph specification, and re-synthesizes logic from the latter. Thus parts 1 and 2 are applied
to each complex gate that cannot be mapped into the library. Finally, Part 3 does library matching to
recover area and delay. This step can collapse into a larger library gate the simple 2-input combinational
gates (denoted above by G) that have been (conservatively) used in decomposing complex gates. No
violations of speed-independence can arise if the matched gates are atomic.
This method improves significantly over previously known techniques [1, 8, 7]. This is due to the
significantly larger optimization space exploited by using (1) Boolean relations for decomposition and (2)
a broader class of latches 8 . Furthermore, the ability to implement sequential functions with SR and D
latches significantly improves the practicality of the method. Indeed one should not completely rely, as
earlier methods did, on the availability of C-elements in a conventional library.
In the future we are planning to improve the Boolean relation solution algorithm, aimed at finding
a set of optimal functions compatible with a Boolean relation. This is essential in order to improve the
CPU times and synthesize successfully more complex specifications.
--R
Automatic gate-level synthesis of speed-independent circuits
An exact minimizer for boolean relations.
General conditions for the decomposition of state holding elements.
Synthesis of Self-timed VLSI Circuits from Graph-theoretic Specifications
Complete state encoding based on the theory of regions.
Concurrent Hardware.
Technology mapping for speed- independent circuits: decomposition and resynthesis
Basic gate implementation of speed-independent circuits
Algorithms for synthesis and testing of asynchronous circuits.
Algorithms for technology mapping based on binary decision diagrams and on boolean operations.
Synthesis and Optimization of Digital Circuits.
A theory of asynchronous circuits.
Structural methods for the synthesis of speed-independent circuits
Decomposition methods for library binding of speed-independent asynchronous designs
Automatic technology mapping for generalized fundamental mode asynchronous designs.
A generalized state assignment theory for transformations on Signal Transition Graphs.
Heuristic minimization of multiple-valued relations
Permissible functions for multioutput components in combinational logic optimization.
--TR
Automatic technology mapping for generalized fundamental-mode asynchronous designs
Decomposition methods for library binding of speed-independent asynchronous designs
Basic gate implementation of speed-independent circuits
A generalized state assignment theory for transformation on signal transition graphs
Automatic gate-level synthesis of speed-independent circuits
Synthesis and Optimization of Digital Circuits
Algorithms for Synthesis and Testing of Asynchronous Circuits
General Conditions for the Decomposition of State-Holding Elements
Complete State Encoding Based on the Theory of Regions
Technology Mapping for Speed-Independent Circuits
Structural Methods for the Synthesis of Speed-Independent Circuits
--CTR
Jordi Cortadella , Michael Kishinevsky , Alex Kondratyev , Luciano Lavagno , Alexander Taubin , Alex Yakovlev, Lazy transition systems: application to timing optimization of asynchronous circuits, Proceedings of the 1998 IEEE/ACM international conference on Computer-aided design, p.324-331, November 08-12, 1998, San Jose, California, United States
Michael Kishinevsky , Jordi Cortadella , Alex Kondratyev, Asynchronous interface specification, analysis and synthesis, Proceedings of the 35th annual conference on Design automation, p.2-7, June 15-19, 1998, San Francisco, California, United States | technology mapping;two-input sequential gate;complex gates;decomposed logic sharing;netlist;speed-independent circuits;two-input combinational gate;signal insertion;optimization;library matching;boolean decomposition;boolean relations;circuit CAD;logic resynthesis;design library;logic decomposition |
266552 | A deductive technique for diagnosis of bridging faults. | A deductive technique is presented that uses voltage testing for the diagnosis of single bridging faults between two gate input or output lines and is applicable to combinational or full-scan sequential circuits. For defects in this class of faults the method is accurate by construction while making no assumptions about the logic-level wired-AND/OR behavior. A path-trace procedure starting from failing outputs deduces potential lines associated with the bridge and eliminates certain faults. The information obtained from the path-trace from failing outputs is combined using an intersection graph to make further deductions. The intersection graph implicitly represents all candidate faults, thereby obviating the need to enumerate faults and hence allowing the exploration of the space of all faults. The above procedures are performed dynamically and a reduced intersection graph is maintained to reduce memory and simulation time. No dictionary or fault simulation is required. Results are provided for all large ISCAS89 benchmark circuits. For the largest benchmark circuit, the procedure reduces the space of all bridging faults, which is of the order of 10^9 to a few hundred faults on the average in about 30 seconds of execution time. | Introduction
A bridging fault [1] between two lines A and B in a circuit occurs
when the two lines are unintentionally shorted. When the lines A
and B have different logic values, the gates driving the lines will be
engaged in a drive fight (logic contention). Depending on the gates
driving the lines A and B, their input values, and the resistance of
the bridge, the bridged lines can have intermediate voltage values
VA and VB (not well defined logic values of 1 or 0). This is interpreted
by the logic that fans out from the bridge as shown in the
shaded region in Figure 1 (a).
The logic gates downstream from the bridged nodes can have variable
input logic thresholds. Thus the intermediate voltage at a
bridged node may be interpreted differently by different gates. This
is known as the Byzantine Generals Problem [2, 3] and is illustrated
in
Figure
(b). The voltage at the node A ( VA ) is interpreted as a
faulty value (0) by gate d and a good value (1) by gate c. Thus, different
branches from a single fanout stem can have different logic
values.
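This threshold-dependent interpretation can be sketched as a simple comparison; the voltage and threshold numbers below mirror the figure's illustrative values, not measurements:

```python
def interpret(voltage, threshold):
    """A gate input reads logic 1 iff the node voltage exceeds its
    input threshold."""
    return 1 if voltage > threshold else 0

# An intermediate voltage at a bridged node, e.g. 2.5 V on a 5 V supply.
v_bridged = 2.5

# Two downstream gates with different input thresholds disagree:
# the same node is simultaneously seen as 1 and as 0.
gate_c = interpret(v_bridged, threshold=2.4)  # reads 1 (good value)
gate_d = interpret(v_bridged, threshold=2.6)  # reads 0 (faulty value)
print(gate_c, gate_d)
```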
The feasibility of any diagnosis scheme can be evaluated using the
parameters: accuracy, precision, storage requirements and computational
complexity. Accurate simulation of bridging faults [4, 3]
is computationally expensive. Thus, it may not be feasible to perform
bridging fault simulation during diagnosis. Further, the space
of all bridging faults is extremely large. For example, for the large
ISCAS89 benchmark circuits, it is of the order of 10^9 faults.
Several techniques have been proposed for the diagnosis of bridging
faults in combinational circuits using voltage testing. (This
research was supported in part by Defense Advanced Research
Projects Agency (DARPA) under contract DABT 63-96-C-0069, by the
Semiconductor Research Corporation (SRC) under grant 95-DP-109, by
the Office of Naval Research (ONR) under grant N00014-95-1-1049, and
by an equipment grant from Hewlett-Packard.) Millman,
McCluskey and Acken [5] presented an approach to diagnose
bridging faults using stuck-at dictionaries. Chess et al. [6],
and Lavo et al. [7] improved on this technique. These techniques
enumerate bridging faults and are hence constrained to using a reduced
set of bridging faults extracted from the layout. Further, the
construction and storage requirements of fault dictionaries may be
prohibitive. Chakravarty and Gong [8] describe a voltage-based
algorithm that uses the wired-AND (wired-OR) model and stuck-at
fault dictionaries. The wired-AND and wired-OR models work
only for technologies for which one logic value is always more
strongly driven than the other.
Figure 1: Bridging Fault Effect Propagation. (a) A bridging fault and its effect propagation from
primary inputs to primary outputs. (b) Gates with input thresholds of 2.3 V, 2.4 V and 2.6 V
interpreting the voltage at the bridged node A differently.
In this paper we present a deductive technique that does not require
fault dictionaries and does not explicitly simulate faults, either
stuck-at or bridging. Further, no model such as wired-AND
or wired-OR is assumed at the logic-level. The class of bridging
faults considered consists of all single bridging faults between two lines in
the circuit. The lines could be gate outputs, gate inputs, or primary
inputs. For defects in this class of faults, the method is accurate in
that the defect is guaranteed to be in the candidate list. In the fol-
lowing, a failing vector and a failing output refer to a vector and a
primary output that fail the test on a tester (not during simulation).
The deductive technique consists of two deductive procedures. The
first is a path-trace procedure that starts from failing outputs and
uses the logic values obtained by the logic simulation of the good
circuit for each failing vector. This is used to deduce lines potentially
associated with the bridging faults. The second procedure
is an intersection graph constructed from the information obtained
through path-tracing from failing outputs. The path-trace and the
intersection graph are constructed and processed dynamically during
diagnosis. The intersection graph implicitly represents all candidate
bridging faults under consideration, thereby allowing processing
of the entire space of all bridging faults in an implicit man-
ner. During diagnosis, a reduced version of the graph is maintained
that retains all diagnostic information. This reduces memory usage
and simulation time. Since the technique uses only logic simulation
and does not explicitly simulate faults, it is fast. The technique
outputs a list of candidate faults. If the resolution (the size of the
candidate list) is adequate, the diagnosis is complete. Otherwise,
either the candidate list can be simulated with a bridging fault simulator
or other techniques [5, 6, 7, 8] can be used to improve the
resolution.
2 The Path-Trace Procedure
The path-trace procedure deduces lines in the circuit that are potentially
associated with a bridging fault. A potential source of error
with respect to a failing output is defined as follows.
Definition 1 A potential source of error, with respect to a failing
output, is a line in the circuit from which there exists a sensitized
path to that failing output on the application of the corresponding
failing vector.
Note that there is a distinction between potential sources of error
and the actual source(s) of error associated with the defect. In the
following, the actual source(s) of error are simply referred to as
the source(s) of error. The path-trace procedure is similar to critical
path tracing [9] and the star algorithm [10]. However, there are
some important differences. The above procedures were developed
for single stuck-at faults. Hence, only one line in the circuit is assumed
to be faulty. However, for bridging faults, due to the Byzantine
Generals Problem, both lines could be sources of fault effects.
Further, these effects may reconverge, leading to effects such as
multiple-path sensitization as shown in Figure 1 (b). The voltages
at lines A (VA ) and B (VB ) are both interpreted as faulty by gates
d and e, and the fault effect reconverges at gate f . However, the assumption
of a single bridging fault between two lines ensures that
at most two lines in the circuit can be sources of error.
The logic-value of a gate input is said to be controlling if it determines
the gate's output value regardless of other input values [11].
The path-trace procedure proceeds as follows. Start from a failing
output and process the lines of the circuit in a reverse topological
order up to the inputs. When a gate output is reached, observe
the input values. If all inputs have noncontrolling values, continue
the trace from all inputs. If one or more inputs have controlling
values, continue the trace from any one controlling input. When
a fanout branch is reached, continue tracing from the stem. The
choice in selecting the controlling value can be exploited, as will
be explained later.
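The backward trace described above can be sketched as follows. The netlist encoding and the "pick the first controlling input" policy are our own simplifications; fanout branches are implicit in this net-level representation, so continuing from a stem happens automatically:

```python
CONTROLLING = {'AND': 0, 'NAND': 0, 'OR': 1, 'NOR': 1}

def path_trace(gates, values, failing_output):
    """gates: line -> (gate_type, [input lines]); primary inputs absent.
    values: good-circuit logic value of every line for the failing vector.
    Returns the node set: lines reached by the backward trace."""
    node_set, stack = set(), [failing_output]
    while stack:
        line = stack.pop()
        if line in node_set:
            continue
        node_set.add(line)
        if line not in gates:          # primary input: stop
            continue
        gtype, ins = gates[line]
        if gtype in ('NOT', 'BUF'):
            stack.extend(ins)
            continue
        ctrl = [i for i in ins if values[i] == CONTROLLING[gtype]]
        if ctrl:
            stack.append(ctrl[0])      # any one controlling input
        else:
            stack.extend(ins)          # all inputs non-controlling
    return node_set

# Tiny example: f = AND(a, b), g = OR(f, c); output g fails on a vector
# giving a=1, b=0, c=0 (so f=0, g=0 in the good circuit).
gates = {'f': ('AND', ['a', 'b']), 'g': ('OR', ['f', 'c'])}
values = {'a': 1, 'b': 0, 'c': 0, 'f': 0, 'g': 0}
print(sorted(path_trace(gates, values, 'g')))  # ['b', 'c', 'f', 'g']
```

At g both inputs are noncontrolling for OR, so both are traced; at f the controlling input b is picked, so a is excluded from the node set.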
We first consider the case of a single line being the source of error
for a failing output, and then consider the case where both lines of
a bridging fault are sources of error for a failing output. Consider a
single line being the source of errors on a failing vector. When reconvergent
fanout exists, the following situations could occur. In
Figure
2 (a), the effects of an error from the stem c propagate to
the output. However, if the paths have different parities, they will
cancel each other when they reconverge. This is referred to as self-
masking [9]. Figure 2 (b) shows an example of multiple path sensitization
[9]. The bold lines indicate error propagation. The error
from line c propagates through two paths before reconverging and
propagating to an output.
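Self-masking can be reproduced with a tiny simulation sketch; the three-gate circuit below is illustrative, not the circuit of Figure 2:

```python
def output(c):
    """Stem c fans out to an even-parity path (buffer) and an odd-parity
    path (inverter) that reconverge at an AND gate."""
    d = c         # even-parity path
    e = 1 - c     # odd-parity path
    return d & e  # reconvergence

# An error flipping stem c never changes the output: the two fault
# effects arrive with opposite values and cancel (self-masking).
print(output(0), output(1))  # 0 0
```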
Figure 2: Reconvergent Fanout with Single Source of Error. (a) Error from stem c propagates.
(b) Multiple-path sensitization from c.
Lemma 1 On any failing vector, the path-trace procedure includes
all potential sources of error with respect to the failing out-
puts, assuming there is a single source of error.
Proof: When the path-trace reaches a fanout branch, it continues
from the stem. Hence, if the stem were a source of error, it would
be included. If a gate has multiple controlling values on its inputs,
then fault effects can propagate through this gate only if there exists
a stem from which errors reconverge at this gate to collectively
change all the controlling values. When the path-trace reaches this
gate, it will continue along one of the lines having controlling val-
ues. Hence it will include the stem.
Lemma 1 can be interpreted as follows. If the defect causes a single
line to be faulty on some failing vector and this fault effect propagates
to some failing output, then the path-trace includes all lines
that are sensitized to that failing output. The path-trace procedure
is conservative with respect to single sources of error. Not all lines
in the path-trace may be potential sources of error. For example,
line h in Figure 2 (b) is not a potential source of error but would
be included in the path-trace. However, this conservative approach
is necessary when both lines of a bridging fault could be sources
of error with respect to some failing output. Note that for a single
source of error, the potential sources of error are the same as critical
lines [9] in the circuit. Next, we consider the case where both
lines of a bridging fault are sources of error on some failing vector.
If there exists at least one path between the lines of a bridging fault,
then the bridging fault creates one or more feedback loops. Such
a fault is referred to as a feedback bridging fault [11]. If no paths
exist between the lines of a bridging fault, then it is called a non-feedback
bridging fault. A feedback bridging fault may cause oscillations
to occur if the input vector creates a sensitized path from
one line of the bridging fault to the other and this path has odd inversion
parity. If such oscillations are detectable by the tester, then
they can be used as additional failing outputs for the path-trace pro-
cedure. The following Lemma, Theorem and Corollary are applicable
to both feedback and nonfeedback bridging faults. The symbol
A@B is used to represent a bridging fault.
Lemma 2 If a bridging fault A@B causes fault effect propagation
to an output due to reconvergence of bridging fault effects from
both lines of the bridging fault, then the path-trace procedure starting
from that failing output will include at least one of the lines of
the bridging fault.
Proof: At the reconvergent gate, there exist one or more controlling
input values. The path-trace continues from one of the lines
with controlling input value. Thus, one of the lines of the bridging
fault is covered by the path-trace.
A case of Lemma 2 is illustrated in Figure 3. The output of gate e
fails. Path-trace starts from this output and proceeds to the inputs.
Since gate e has two controlling inputs, the trace continues from
one of them. Node B, which is part of the bridging fault A@B, is
covered by the path-trace.
Figure 3: Path-Trace and Node Set.
Definition 2 The node set N_ij is defined as the set of lines that lie on the path-trace starting from failing output PO_i under the application of test-vector t_j.
Theorem 1 If neither line A nor line B of a bridging fault A@B is in a node set N_ij, then the fault A@B could not have caused output PO_i to fail under test vector t_j.
Proof: [By contradiction] Assume that the bridging fault A@B caused an output PO_i to fail on some test-vector t_j. This implies that there exists a sensitized path from A, or B, or the interaction of fault effects from both A and B, to the primary output PO_i under the application of test-vector t_j. If neither line A nor line B is in N_ij, then due to Lemmas 1 and 2, there exists no sensitized path to PO_i. This leads to a contradiction.
Corollary 1 If the defect is a single bridging fault A@B, then a node set N_ij must contain at least one of the lines A and B.
Proof: Follows directly from Theorem 1.
Note that Theorem 1 and Corollary 1 are conservative in that they
make no assumptions about the resistance of the bridging fault,
the gates feeding the bridging fault and their input values, and the
logic input thresholds of the gates downstream from the bridging
fault. The only assumption made is the presence of a single bridging
fault. The information from a group of node sets can be used
to make further deductions. This is performed using the concept of
an intersection graph.
3 The Intersection Graph and Its Processing
Given a group of node sets {N_ij}, the intersection graph is defined as follows.
Definition 3 The intersection graph G_I = (V_I, E_I) is a simple undirected graph (no loops or multiple edges) with V_I = {v_ij | N_ij is a node set} and edge (v_ij, v_kl) ∈ E_I iff ((i ≠ k) or (j ≠ l)) and N_ij ∩ N_kl ≠ ∅.
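A direct rendering of this definition can be sketched in a few lines (an illustration, not the authors' code; the (i, j) vertex labels are an assumption):

```python
# Build the intersection graph: one vertex per node set N_ij, and an
# edge between two distinct vertices whenever their node sets share
# at least one line.

from itertools import combinations

def build_intersection_graph(node_sets):
    """node_sets: dict mapping a vertex label (i, j) to a set of lines."""
    vertices = set(node_sets)
    edges = {frozenset(pair)
             for pair in combinations(vertices, 2)
             if node_sets[pair[0]] & node_sets[pair[1]]}
    return vertices, edges
```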
Figure 4 shows an intersection graph with 7 vertices. The corresponding
node sets are shown within the curly brackets. The intersection
graph has similar structure to the initialization graph proposed
by Chakravarty and Gong [8]. However, there are important
differences. The initialization graph is constructed using only
structural information while the intersection graph is constructed
using logic information exploited by the path-trace procedure. The
initialization graph is created statically once before diagnosis and
processed. However, the intersection graph is updated and processed
dynamically during diagnosis. A reduction procedure maintains
a reduced version of the graph without losing diagnostic in-
formation. The intersection graph has interesting structural properties
that are useful for performing deduction and for maintaining
reduced graphs to help reduce memory requirements and simulation
time.
(Figure 4: Intersection Graph and Its Properties)
3.1 Structural Properties
Property 1 If G_I has two vertices v1 and v2 such that (v1, v2) ∉ E_I, then the set of vertices V_I \ {v1, v2} can be partitioned into three sets V1', V2', V3' such that for every v' ∈ V1', (v', v1) ∈ E_I and (v', v2) ∉ E_I; for every v' ∈ V2', (v', v2) ∈ E_I and (v', v1) ∉ E_I; and for every v' ∈ V3', both (v', v1) ∈ E_I and (v', v2) ∈ E_I.
Proof: Let N1_ij and N2_ij be the node sets corresponding to v1 and v2. From Corollary 1, N1_ij and N2_ij each contain at least one of the lines of the bridging fault A@B. Since (v1, v2) ∉ E_I, N1_ij contains only one of the lines A and B (say A). This implies that N2_ij contains the other line (B). Consider any arbitrary vertex v3 ∈ V_I \ {v1, v2}. From Corollary 1 it follows that the node set corresponding to v3 contains at least one of the lines A and B. Thus v3 is adjacent to at least one of v1 and v2. This implies that one of the following three conditions holds: v3 is adjacent to v1 and not adjacent to v2; v3 is adjacent to v2 and not adjacent to v1; v3 is adjacent to both v1 and v2.
Property 2 If V1', V2', V3' are the three sets obtained by Property 1, then V1' ∪ {v1} and V2' ∪ {v2} are cliques.
Proof: From Corollary 1 and Property 1 it follows that the node sets corresponding to every v_i ∈ V1' ∪ {v1} contain one and only one of A and B (say A), while the node sets corresponding to every v_i ∈ V2' ∪ {v2} contain the other line (B). Thus V1' ∪ {v1} and V2' ∪ {v2} are cliques.
Figure 4 illustrates these properties. The intersection graphs can be
reduced while maintaining their properties. This reduces the number
of vertices and edges. Further, this also reduces the number of
node sets that need to be maintained and their sizes. Thus the reduction
process, which is done dynamically during diagnosis, can
help reduce memory and simulation time. The following corollary, which follows from Property 2, is used in the reduction process.
Corollary 2 If the intersection graph is not a clique, and G1 and G2 are the subgraphs induced by V1' ∪ {v1} and V2' ∪ {v2} respectively using Property 1, then all node sets in G1 contain only one of the lines A or B of the bridge, while all the node sets in G2 contain the other line.
3.2 Intersection Graph Processing
Corollary 2 is used by the procedure shown in Figure 5 to reduce
the intersection graph. An irreducible intersection graph is either
a complete graph or has the following characteristic: for each pair of vertices (v1, v2) ∉ E_I, the sets V1' and V2' are empty. An example of the
reduction procedure is shown in Figure 6. The initial intersection
(Figure 5: Procedure for Reducing the Intersection Graph)
graph is reduced two times to obtain an irreducible graph with two
disjoint vertices. The dynamic processing of GI proceeds as fol-
lows. After each node set N ij is obtained, update GI . Reduce the
intersection graph until an irreducible graph is obtained. After all
node sets are processed, the irreducible intersection graph obtained
contains the candidate bridging faults. The candidate list (C) is obtained
from the irreducible graph (GIR ) using the following rules.
1. If G_IR has two disconnected components, each of which has one vertex, then let N1_ij and N2_ij be the node sets associated with the two vertices. C = {A@B | A ∈ N1_ij, B ∈ N2_ij}.
2. If G_IR has one component that is not a complete graph, then for each (v1, v2) ∉ E_IR, let N1_ij and N2_ij be the node sets associated with v1 and v2. C = C ∪ {A@B | A ∈ N1_ij, B ∈ N2_ij}.
3. If G_IR is a complete graph, then let N be the intersection of all node sets; C = {A@B | A ∈ N, B is any other line in the circuit}.
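The reduction step and the candidate extraction can be sketched as follows. This is an illustrative reconstruction under the single-bridging-fault assumption, not the authors' code; the merge-by-intersection step follows Corollary 2, and rule 3 (partial diagnosis) is omitted here.

```python
from itertools import combinations

def reduce_graph(node_sets):
    """Corollary-2 style reduction: for a non-adjacent pair (a, b),
    merge each side clique into a single vertex by intersecting its
    node sets; repeat until the graph is irreducible."""
    node_sets = {v: set(s) for v, s in node_sets.items()}
    changed = True
    while changed:
        changed = False
        for a, b in combinations(list(node_sets), 2):
            if node_sets[a] & node_sets[b]:
                continue                       # adjacent pair: skip
            side_a = [v for v in node_sets if v not in (a, b)
                      and node_sets[v] & node_sets[a]
                      and not node_sets[v] & node_sets[b]]
            side_b = [v for v in node_sets if v not in (a, b)
                      and node_sets[v] & node_sets[b]
                      and not node_sets[v] & node_sets[a]]
            if side_a or side_b:
                for v in side_a:
                    node_sets[a] = node_sets[a] & node_sets.pop(v)
                for v in side_b:
                    node_sets[b] = node_sets[b] & node_sets.pop(v)
                changed = True
                break
    return node_sets

def candidates(node_sets):
    """Rules 1 and 2: pair lines across every non-adjacent vertex pair."""
    bridges = set()
    for u, v in combinations(node_sets, 2):
        if not node_sets[u] & node_sets[v]:
            bridges |= {frozenset((a, b))
                        for a in node_sets[u] for b in node_sets[v]}
    return bridges
```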
The reduced intersection graph is a compact way to implicitly represent
the space of candidate bridging faults. Further, the reduction
procedure prunes the space of candidate bridging faults without
losing diagnostic information. The defect is guaranteed to be
in the candidate list by construction. The candidate list will include
other faults which are logically equivalent or diagnostically equivalent
with respect to the test set. A better test set may distinguish
between some of these faults, thereby increasing the diagnostic res-
olution. When G_IR is a complete graph, only one of the lines of the
bridge can be determined with certainty. This results in a partial
diagnosis. The experimental results indicate that partial diagnosis
does not occur often.
3.3 Implementation Issues and Complexity
The major operation performed during GI processing is its reduc-
tion. The basic operation needed by the reduction procedure is set
intersection. Further, the node sets need to be stored for each vertex
(Figure 6: An Example of the Intersection Graph Reduction: (a) initial intersection graph; (b) reduced intersection graph)
of G_I. The node sets are represented as bit-vectors, with a value of 1 indicating the presence of a node and a 0 indicating the absence of one. If there are n lines in the circuit, the size of a node set is ⌈n/8⌉ bytes. The bit-vector representation allows for efficient set intersection using the bitwise AND operator. As a result of the dynamic processing of G_I, its size grows and shrinks. Hence, the data structure chosen to represent G_I is a two-dimensional linked list. G_I has |V_I| vertices. Assuming 4 bytes each for pointers and vertex indices, the worst-case memory requirement for G_I and its associated node sets is (|V_I| × ⌈n/8⌉ + 8|V_I| + 8|V_I|^2) bytes. Since |V_I| is typically much smaller than n, the worst-case space complexity is O(|V_I| × n). The worst-case size of |V_I| is n_fail, where n_fail is the total number of failing outputs on all failing vectors. The reduction procedure results in |V_I| being much smaller than n_fail, thereby reducing the memory requirements.
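The bit-vector representation can be sketched in a few lines (illustrative only; Python integers stand in for the ⌈n/8⌉-byte arrays of the C++ implementation):

```python
# Line k of the circuit maps to bit k, so a node set over n lines fits
# in ceil(n/8) bytes and set intersection is one bitwise AND.

def to_bitvector(node_set, line_index):
    """Pack a set of line names into an integer bit-vector."""
    bv = 0
    for line in node_set:
        bv |= 1 << line_index[line]
    return bv

def intersect(bv1, bv2):
    return bv1 & bv2            # bitwise AND implements set intersection
```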
The reduction procedure computes the V1' and V2' sets based on Corollary 2 by exploring the edges of E_I. Typically, |E_I| is small. For each edge in E_I the reduction procedure computes V1' and V2'. Each intersection operation between the node sets of two vertices of G_I reduces the number of vertices of G_I by 1. Thus the maximum number of intersections possible in the procedure reduce_intersection_graph() is |V_I| - 2, and the worst-case time complexity of the procedure is O(|V_I|) intersections. Here again, the reduction procedure results in small |V_I| values, thereby reducing the simulation time.
4 Heuristics to Improve Resolution
When the path-tracing procedure reaches a gate with multiple controlling
inputs, one of them is chosen. The choice of input impacts
the size of the resultant node set, its elements, and hence, impacts
the diagnostic resolution. The smaller the size of the node set, the
smaller is the intersection with other node sets, and the greater is
the likelihood of reducing the intersection graph. Two conditions
are checked to select the controlling input in such a manner that
the size of the resultant node set is reduced. The first is based on
fanout. When the path-trace reaches a stem, it continues from the
stem unconditionally. When a controlling input is the branch of a
stem, one of whose other branches has been chosen, then this input
should be selected, since the stem has to be selected anyway
[10]. The second condition involves checking the controllability of
the line. SCOAP controllability measures are used. The most easily
controllable input (check for 0-controllability if the logic value
of the line is 0 and vice-versa) is likely to give the smallest node
set. If the same gate is reached in two different applications of the
path-trace and the same choice of controlling inputs exists, then selecting
different inputs for the two runs can potentially result in a
smaller intersection between the two resultant node sets. A dirty bit
is set when the path-trace chooses a controlling input. This input
is avoided in future invocations of the path-trace procedure.
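The heuristic selection of a controlling input can be sketched as a key function over the candidates (a sketch only; the data structures dirty, chosen_stems, stem_of, and scoap are assumptions used for illustration, not the authors' interfaces):

```python
# Pick a controlling input: avoid inputs whose dirty bit is set, prefer
# branches of a stem that has already been chosen, and break remaining
# ties with the smaller SCOAP controllability of the required value.

def choose_controlling_input(ctrl_inputs, dirty, chosen_stems,
                             stem_of, scoap):
    def key(line):
        return (dirty.get(line, False),                 # clean inputs first
                stem_of.get(line) not in chosen_stems,  # chosen stems first
                scoap[line])                            # easiest to control
    pick = min(ctrl_inputs, key=key)
    dirty[pick] = True          # avoid this input in later invocations
    return pick
```

Because the dirty bit is set on the chosen input, a later invocation at the same gate picks a different input, which tends to shrink the intersection of the two resulting node sets.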
Based on the above conditions, three heuristics are defined below. Heuristic 1 chooses the controlling input randomly. Heuristic 2 chooses the controlling input by checking for fanout followed by controllability. Heuristic 3 chooses the controlling input by checking for the dirty bit followed by fanout and then controllability. The overall diagnosis procedure is shown in Figure 7.
for each test vector t_i with failing outputs:
    perform logic simulation with t_i
    for each failing output PO_j:
        path-trace from failing output PO_j
(Figure 7: The Diagnosis Procedure)
5 Experimental Results
The diagnosis procedure was implemented in C++. All experiments
were performed on a SUN SPARCStation 20 with 64MB
of memory for the full-scan versions of the ISCAS89 sequential
benchmark circuits [12]. In practice, the failing responses used as
input for the diagnosis procedure would be obtained by testing the
failing circuit on a tester. For our diagnosis experiments, the failing
responses were generated using the accurate bridging fault simulator
E-PROOFS [4] to ensure that the diagnostic experiments were
as realistic as possible. The cell libraries for the circuits were generated
manually [4]. The test vectors used were compact tests generated
to target stuck-at faults [13]. Ideally, diagnostic test sets for
bridging faults would be the best choice. All large ISCAS89 benchmark
circuits were considered.
For each of the benchmark circuits, a random sample of 500 single two-line bridging faults was injected one at a time. For each one
of these faults, the failing responses were obtained by performing
bridging fault simulation on the given test set using E-PROOFS.
Faults that do not produce any failing outputs were dropped. For
the rest of the faults, the failing responses were used to perform di-
agnosis. The diagnosis results are summarized in Tables 1 and 2.
The average, minimum and maximum sizes of the candidate lists
are shown in Table 1 for the three different heuristics. The average
size of the candidate list is a few hundred faults, which is a significant
reduction from the space of all faults. Further, as expected,
heuristics 2 and 3 improve the diagnostic resolution over heuristic
1. The reduction can be significant. For example, for s38584, the
average size of the candidate list is reduced by a factor of 4. Note
that in some cases, the method uniquely identifies the fault (reso-
lution of 1). The best resolutions are indicated in bold.
The average sizes of the node sets and the intersection graph are
shown in Table 2. As expected, heuristic 2 does the best in terms
of node set sizes. Both heuristic 2 and heuristic 3 do better than
heuristic 1 in terms of the average size of the intersection graph.
The average values of the execution time, number of failing outputs
and percentage of faults that were partially diagnosed is given
for heuristic 2. Other interesting observations can be made from Table 2. Note that the average size of the node sets is very small and
appears to be independent of the circuit size. Further, it is about
2-3 orders of magnitude smaller than the total number of lines in
the circuit, thereby suggesting that the path-trace procedure is ef-
ficient. The average size of the intersection graph (jVI j) is about
a quarter of the total number of failing outputs, indicating that the
graph reduction procedure is useful. As expected heuristic 2 does
the best in terms of the average size of the node set and heuristic 3
does the best in terms of the average size of the intersection graph
(jVI j).
Note that the procedure is accurate by construction; that is, the defect
is guaranteed to be in the candidate list. The distribution of
the sizes of the candidate lists is shown in Figure 8 for s13207 and
s38417. This trend is observed for the other circuits as well. For
about 10% of the faults, the resolution is adequate (less than 20
candidates) to consider the diagnosis complete. For about 80% of
the faults, the resolution is such that the
candidate list is small enough to be accurately simulated using a
bridging fault simulator as a post-processing step. In about 25%
of the cases, the diagnosis is partial; that is, only one of the lines
of the bridge can be determined with certainty. In such cases, and
if the resolution is so large that bridging fault simulation cannot be
performed, then the diagnosis procedure can be followed with other
techniques [5, 6, 7, 8] using the candidate list to improve the resolu-
tion. Note that these resolutions were obtained using a compacted
stuck-at test set. We expect that there would be better resolution
with better test sets.
Table 1: Diagnostic Resolution (for each circuit: candidate list size, Ave./Min./Max., under Heuristics 1, 2, and 3)
The diagnosis procedure requires very small execution times, as
seen in column 8 of Table 2. The procedure requires only the logic
simulation of failing vectors and the path-trace procedure from failing
outputs. Both of these procedures are linear in the size of the
circuit. Further, the graph reduction procedure is linear in the size
of its vertex set (jVI j). Techniques such as those used in [5, 6, 7, 8]
require either the storage of stuck-at fault dictionaries or the simulation
of stuck-at faults during diagnosis. As seen in columns 12
and 13 of Table 2, the storage requirements for dictionaries can
be very large, and the simulation time is about an order of magnitude
larger than that required for the diagnosis procedure. This
is expected since fault simulation has greater than linear complexity
in the size of the circuit. Further, fault simulation without fault
dropping needs to be performed. Techniques such as those used in
[5, 6, 7] also need to enumerate bridging faults and are hence constrained
to use a small set of realistic faults. This trade-off between
resolution and complexity suggests that our diagnosis procedure,
which is both space- and time-efficient, could be attempted first,
and then be complemented by other procedures if greater resolution
is required.
Table 2: Diagnosis Results and Comparison with Techniques Using Stuck-at Fault Information

Circuit | Avg. Node Set Size (Heu.1 / Heu.2 / Heu.3) | Avg. |V_I| (Heu.1 / Heu.2 / Heu.3) | Avg. Values, Heuristic 2 (Exec. Time / # Fail / Partial) | Stuck-at Fault Information (# of Faults / Storage† / Exec. Time‡)
s9234f  | 40.2 / 36.3 / 37.0 | 18.7 / 19.3 / 19.3 | 2.32 / 65.4 / 0.35 | 6927 / 47.1 M / 34.61
s13207f | 29.6 / 28.2 / 27.4 | 62.5 / 58.2 / 27.4 | 31.1 / 298.3 / 0.16 | 12311 / 0.51 G / 131.55
s38584f | 38.9 / 28.7 / 29.4 | 31.3 / 28.7 / 29.4 | 30.4 / 64.9 / 0.38 | 36303 / 1.56 G / 214.51
† Full fault dictionary in matrix format
‡ Without fault dropping

(Figure 8: Distribution of Candidate List Size; normalized ratio vs. candidate list size)
6 Conclusions and Future Work
A deductive procedure for the diagnosis of bridging faults, which
is accurate and experimentally shown to be both space- and time-
efficient, has been described. The information obtained from a
path-trace procedure from failing outputs is combined using an intersection
graph, which is constructed and processed dynamically,
to make the deduction. The intersection graph provides an implicit
means of representing and processing the space of candidate bridging
faults without using dictionaries or explicit fault simulation.
The procedure assumes a single bridging fault between two lines. If
the defect involves multiple faults or shorts between multiple lines,
then the properties of GI may be violated. Extensions to multiple
faults or shorts between multiple lines require looking for larger
sized cliques (K_n, n ≥ 3) in the graph G_I. An interesting application
of this work is in the area of design error location. For
design errors of multiplicity 2, the diagnosis procedure can be used
without any modification. Higher multiplicity errors require extensions
--R
"Bridging and Stuck-at Faults,"
"Fault Model Evolution for Diagno- sis: Accuracy vs. Precision,"
"Biased Voting: A Method for Simulating CMOS Bridging Faults in the Presence of Variable Gate Logic Thresholds,"
"E-PROOFS: A CMOS Bridging Fault Simulator,"
"Diagnosing CMOS Bridging Faults with Stuck-at Fault Dictionaries,"
"Diagnosing of Realistic Bridging Faults with Stuck-at Information,"
"Beyond the Byzantine Gen- erals: Unexpected Behavior and Bridging Faults Diagnosis,"
"An Algorithm for Diagnosing Two-Line Bridging Faults in CMOS Combinational Circuits,"
"SCRIPT: A Critical Path Tracing Algorithm for Synchronous Sequential Circuits,"
"Why is Less Information From Logic Simulation More Useful in Fault Simulation?,"
Digital System Testing and Testable Design.
"Combinational Profiles of Sequential Benchmark Circuits,"
"Cost- Effective Generation of Minimal Test Sets for Stuck-at Faults in Combinational Logic Circuits,"
--TR
An algorithm for diagnosing two-line bridging faults in combinational circuits
Diagnosis of realistic bridging faults with single stuck-at information
E-PROOFS
Beyond the Byzantine Generals
Biased Voting
--CTR
Srikanth Venkataraman , Scott Brady Drummonds, Poirot: Applications of a Logic Fault Diagnosis Tool, IEEE Design & Test, v.18 n.1, p.19-30, January 2001
Yu-Shen Yang , Andreas Veneris , Paul Thadikaran , Srikanth Venkataraman, Extraction Error Modeling and Automated Model Debugging in High-Performance Low Power Custom Designs, Proceedings of the conference on Design, Automation and Test in Europe, p.996-1001, March 07-11, 2005
Andreas Veneris , Jiang Brandon Liu, Incremental Design Debugging in a Logic Synthesis Environment, Journal of Electronic Testing: Theory and Applications, v.21 n.5, p.485-494, October 2005 | bridging faults;deduction;diagnosis |
266569 | A SAT-based implication engine for efficient ATPG, equivalence checking, and optimization of netlists. | The paper presents a flexible and efficient approach to evaluating implications as well as deriving indirect implications in logic circuits. Evaluation and derivation of implications are essential in ATPG, equivalence checking, and netlist optimization. Contrary to other methods, the approach is based on a graph model of a circuit's clause description called implication graph. It combines both the flexibility of SAT-based techniques and high efficiency of structure based methods. As the proposed algorithms operate only on the implication graph, they are independent of the chosen logic. Evaluation of implications and computation of indirect implications are performed by simple and efficient graph algorithms. Experimental results for various applications relying on implication demonstrate the effectiveness of the approach. | Introduction
Recently, substantial progress has been achieved in the fields of
Boolean equivalence checking and optimization of netlists. Techniques
for deriving indirect implications, which were originally developed
for ATPG tools, play a key role in this development.
Indirect implications have been successfully applied in algorithms
for optimizing netlists. For this task, either a set of permissible
transformations is derived [1, 2, 3] or promising transformations
are applied and their permissibility is later verified by an
ATPG tool [4, 5, 6]. Furthermore, they are of great importance in
ATPG-based approaches to Boolean equivalence checking of both
combinational and sequential circuits [7, 8, 9, 10, 11] as they help
identify equivalent internal signals in the circuits to be compared.
In the late 1980s, Schulz et al. incorporated computation of indirect
implications into the ATPG tool SOCRATES[12]. Indirect
implications are indispensable when dealing with redundant faults
as they help to efficiently prune the search space of the branch-
and-bound search. In order to derive more indirect implications,
the originally static technique of SOCRATES, which the authors
refer to as (static) learning, has been extended to dynamic learning
[13, 14].
Recursive learning [7], proposed by Kunz et al. in 1992, was the
first complete algorithm for determining indirect implications. As
the problem of finding all indirect implications is NP-complete,
only small depths of recursion are feasible. Recently, it has
been shown that recursive learning can be adequately modelled
by AND-OR reasoning graphs [3]. Another complete method for
deriving indirect implications based on BDDs was suggested by
Mukherjee et al. [15]. Very recently, Zhao et al. presented an approach
that combines iterated static learning with recursive learning
constrained to recursion level one [16]. It is based on set algebra
and is similar to single pass deductive fault simulation.
Contrary to the above methods, which work on the structural
description of a circuit, other approaches use a Boolean satisfiability
based model. The SAT-model allows an elegant
problem formulation which can easily be adapted to various log-
ics. This abstraction, however, often impedes development of
efficient algorithms as structural information is lost. Larrabee
included a clause based formulation of Schulz's algorithm into
NEMESIS[17]. Her approach has been improved by the iterated
method of TEGUS [18]. The transitive closure algorithms suggested
by Chakradhar et al. rely on a relational model of binary
clauses [19]. Silva et al. proposed another form of dynamic learning
in GRASP [20] where indirect implications are determined by
a conflict analysis during the backtracking phase of a SAT-solver.
In many areas of logic synthesis and formal verification Binary
Decision Diagrams (BDD) have become the most widely used
data structure as they provide many advantageous properties, e.g.
canonicity and high flexibility. Besides their exponential memory
complexity, when used for ATPG, equivalence checking, and optimization
of large netlists, BDDs suffer from the drawback that
implications cannot be derived efficiently on this data structure.
For a given signal assignment it can only be decided if another signal
assignment is implied or not. So, finding all possible implications
from a given signal assignment is expensive because theoretically
all possible combinations of signal pairs have to be checked.
Therefore, BDD-based approachessuch as functional learning [15]
restrict their search to potential learning areas, which are identified
by non BDD-based implication. Consequently, structural or hybrid
approaches, i.e. BDDs combined with other methods, are predominant
in ATPG, equivalence checking and optimization of netlists.
Even though most of these approachesmake heavy use of implica-
tions, the data structures that are used for deriving and evaluating
implications are often suboptimal and inflexible. That is why we
propose a flexible data structure which is specifically optimized
with respect to implication.
In this paper, we introduce a framework for implication based
algorithms which inherits the advantages of structural as well as
SAT-based approaches. Our approach combines both the flexibility
and elegance of a SAT-based algorithm and the efficiency of
a structural method by working on a graph model of the clause
system, called implication graph. Its memory complexity is only
linear in the number of modules in the circuit. Due to structural
information available in the graph, fundamental problems such as
justification, propagation and particularly implication are carried
out efficiently on the graph. The search for indirect implications
reduces to graph algorithms that can be executed very fast and
are easily extended to exploit bit-parallelism. As the implication
graph can automatically be generated for any arbitrary logic, all
presented algorithms remain valid independent of the chosen logic.
This allows rapid prototyping of implication based tools for new
multi-valued logics.
The remainder of this paper is organized as follows. In Sec. 2,
we show how to derive the implication graph. Next, we discuss
how implications are evaluated and how indirect implications can
be computed in Sec. 3 and 4, respectively. In order to demonstrate
the high efficiency of our approach, experimental results for
various applications using the proposed implication engine are presented
in Sec. 5. Sec. 6 concludes the paper.
2 Implication graph
As performing implications is one of the most prominent and
time consuming tasks in ATPG, equivalence checking, and optimization
of netlists, it is of utmost importance to use a data structure
that is best suited. Unlike other graphical representations of
clause systems, our data structure represents all information contained
in both the structural netlist and the clause database. The implication
graphs used in NEMESIS[17] and TRAN[19] model only
binary clauses, clauses of a higher order are solely included in the
clause database.
Since our approach is generic in nature, any combinational circuit
can automatically be compiled into its implication graph rep-
resentation. Only information about a logic and its encoding as
well as the truth table descriptions of supported module types have
to be provided. The basic steps of compilation are given in Fig. 1.
First, all supported module types are individually compiled into
encoded
table clauses
module database
circuit
implication
subgraph
logic
encoding
module
optimization
implication graph
Figure
1: Deriving the implication graph
encoded truth tables. Then, these tables are optimized by a two-level
logic optimizer, e.g. ESPRESSO. This step is explained in
Sec. 2.1. Next, a set of clauses is extracted from the optimized ta-
ble, which is shown in Sec. 2.2. As shown in Sec. 2.3, the set of
clauses is transformed into an implication subgraph that is stored in
the module database. Then, for every module in the circuit the appropriate
generic subgraph is taken from the module database and
personalized with the input and output signals of the given module.
Finally, all identical nodes are merged into a single node resulting
in the complete implication graph.
The following sections only consider the 3-valued logic L_3 = {0, 1, X} in order to present the basic ideas of our approach. Generation
of an implication graph for an arbitrary multi-valued logic,
e.g. the 10-valued logic L 10 known from robust path delay ATPG,
is discussed in [21].
2.1 Encoding
A signal variable x ∈ L_3 requires two encoding bits c_x and c'_x for its internal representation. The complete scheme of encoding for L_3 is shown in Table 1. In order to easily detect inconsistencies,
c_x c'_x | interpretation
0 0 | signal x is unknown
1 0 | signal x is 1
0 1 | signal x is 0
1 1 | conflict at signal x
Table 1: 3-valued logic and its encoding
conflicting signal assignments are denoted by c_x = c'_x = 1. This property is expressed in the following definition:
DEFINITION 1 An assignment is called non-conflicting iff c_x ∧ c'_x = 0 holds for all signal variables x.
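The two-bit scheme can be sketched as follows; the concrete bit patterns are a reconstruction consistent with Definition 1 and are labeled as assumptions, since the rows of Table 1 did not survive extraction intact.

```python
# Two-bit encoding of L3 = {0, 1, X} (assumed patterns): x = 1 -> (1, 0),
# x = 0 -> (0, 1), x = X -> (0, 0); the pattern (1, 1) marks a conflict.

ENCODE = {1: (1, 0), 0: (0, 1), 'X': (0, 0)}

def is_conflicting(bits):
    c, c_ = bits
    return (c & c_) == 1        # non-conflicting iff c AND c' = 0

def decode(bits):
    for value, code in ENCODE.items():
        if code == bits:
            return value
    return 'conflict'
```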
Based on this encoding, the truth tables of all supported module
types are converted into encoded tables. For example, the truth
table of a 2-input AND-gate found in Table 2 is
converted into the encoded table of Table 2. This encoded table can
(Table 2: AND-gate: truth table, encoded table, and optimized table)
be interpreted as specifying the on-set as well as the off-set of two Boolean functions c_c and c'_c. Conflicting assignments belong to the don't-care-set, as they are explicitly checked for by the implication engine. Exploiting these don't-cares, the functions c_c and c'_c in the encoded table are optimized by ESPRESSO.
2.2 Clause description
The characteristic function describing the AND-gate with respect
to the given encoding can easily be given in its Conjunctive
Normal Form (CNF) by analyzing the individual rows of the optimized
table of Table 2. Every row in this table corresponds to a
clause contained in the CNF. Here, the CNF comprises the three clauses (¬c'_a ∨ c'_c), (¬c'_b ∨ c'_c), and (¬c_a ∨ ¬c_b ∨ c_c). That is, all valid
value assignments to the inputs and outputs of the AND-gate are
implicitly given by the non-conflicting satisfying assignments to
the characteristic equation:
CNF ⇔ (¬c'_a ∨ c'_c) ∧ (¬c'_b ∨ c'_c) ∧ (¬c_a ∨ ¬c_b ∨ c_c)   (1)
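The reconstructed clauses can be checked exhaustively: for fully specified signals they should be satisfied exactly when c = a AND b. The sketch below assumes the two-bit encoding reconstructed in Sec. 2.1.

```python
# Evaluate the three AND-gate clauses on the two-bit encodings and
# compare against Boolean AND for all fully specified assignments.

from itertools import product

ENC = {0: (0, 1), 1: (1, 0)}        # assumed (c_x, c'_x) bit patterns

def and_cnf(ca, ca_, cb, cb_, cc, cc_):
    return ((not ca_ or cc_) and        # a = 0 implies c = 0
            (not cb_ or cc_) and        # b = 0 implies c = 0
            (not ca or not cb or cc))   # a = 1 and b = 1 implies c = 1

def cnf_matches_and():
    for a, b, c in product((0, 1), repeat=3):
        sat = and_cnf(*ENC[a], *ENC[b], *ENC[c])
        if sat != (c == (a & b)):
            return False
    return True
```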
2.3 Building the implication graph
By exploiting the following equivalencies, the clause description of Eq. (1) is converted into the corresponding implication graph:
(x ∨ y) ⇔ ((¬x → y) ∧ (¬y → x))
(x ∨ y ∨ z) ⇔ (((¬x ∧ ¬y) → z) ∧ ((¬x ∧ ¬z) → y) ∧ ((¬y ∧ ¬z) → x))
It is sufficient to provide equivalencies for binary and ternary
clauses only, as any clause system of a higher order can be decomposed
into a system of binary and ternary clauses [21]. Having
transformed all clauses into binary and ternary clauses, the sub-graphs
shown in Fig. 2 are used for representation of these clauses.
(Figure 2: Implication subgraph for a binary and a ternary clause)
These graphs contain two types of nodes. While the first type represents the encoded signal values, the second one symbolizes the
conjunction operation. The latter type is depicted by ∧ or a shaded triangle. Every ternary clause has three associated ∧-nodes that uniquely represent the ternary clause in the implication graph.
Coming back to the 2-input AND-gate, its CNF description is transformed into the implication graph shown in Fig. 3. Every bit of the encoding for a signal x is represented by a corresponding node in the implication graph, e.g. node c_a (c'_a) in Fig. 3 gives bit c_a (c'_a) of signal a. As we require non-conflicting assignments, literals ¬c_x (¬c'_x) can be replaced by c'_x (c_x); hence, only nodes corresponding to non-negated encoding bits are contained in Fig. 3.
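Evaluating implications then amounts to a closure computation over this graph: binary clauses contribute direct edges between bit nodes, and ternary clauses contribute ∧-nodes that fire once both antecedents are set. A minimal sketch follows (illustrative, not the authors' implementation; the example graph mirrors the reconstructed AND-gate clauses):

```python
def imply(direct, and_nodes, assigned):
    """direct: node -> iterable of directly implied nodes.
    and_nodes: list of ((u, v), target): u and v together imply target.
    Returns the implication closure of the initially assigned nodes."""
    result = set(assigned)
    queue = list(assigned)
    while queue:
        n = queue.pop()
        for m in direct.get(n, ()):
            if m not in result:
                result.add(m)
                queue.append(m)
        for (u, v), target in and_nodes:
            if u in result and v in result and target not in result:
                result.add(target)
                queue.append(target)
    return result

# AND-gate c = a AND b: a = 0 or b = 0 forces c = 0 (direct edges);
# a = 1 and b = 1 together force c = 1 (one AND-node).
direct = {"c'_a": ["c'_c"], "c'_b": ["c'_c"]}
and_nodes = [(("c_a", "c_b"), "c_c")]
```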
So far, the implication graph only captures the logic functionality
of a circuit. Since structural information is indispensable for
some tasks, such as justification and propagation, we provide this
information within the implication graph by marking its edges with
three different tags f , b, and o. Edges that denote an implication
from an input to an output signal of a module are marked with f
(forward edge). Relations from output to input signals are tagged
with b (backward edge). All other edges, e.g. input to input relations
and indirect implications, are given tag o (other edge). The
tags for the 2-input AND-gate are found in Fig. 3. By means of
these tags, a directed acyclic graph (DAG) can be extracted from
the implication graph. If all edges but the forward edges are re-
moved, we obtain a DAG that forms the base of an efficient algorithm
for backtracing and justification.
For a simple circuit, the three different circuit descriptions introduced
above are presented in Ex. 2.1. Please observe that most
clause based approaches work on a CNF in L_2. Our approach operates on a CNF of variables encoded with respect to a given logic, here L_3.
(Tags o denoting other edges have been omitted in later examples.)
[Figure 3: Implication graph for 2-input AND-gate]
2.4 Advantages
Using the proposed implication graph as a core data structure in
CAD algorithms has many advantages.
(1) Important tasks such as implication and justification can be
carried out on the implication graph in the same manner for any
arbitrary logic. The peculiarities of the chosen logic are included
in the graph. Implication and derivation of indirect implications
reduce to efficient graph algorithms as will be shown in Sec. 3.3
and 4.4.
(2) Most SAT-based algorithms use a static order for variable assignments
during their search for a satisfying assignment [17, 19].
Furthermore, these algorithms assign values to internal signals during
justification. Since the introduction of PODEM, it has been well known that assigning
values only to primary input signals helps to reduce the
search space. Obviously, primary inputs are a special property of
the given instance of SAT which is not exploited by algorithms for
solving arbitrary SAT problems. The algorithm of TEGUS tries to
mimic PODEM by ordering the clauses in a special manner [18].
Our approach does not need such techniques, as structural information
is provided by edge tags.
(3) Algorithms working on the implication graph can easily exploit
bit-parallelism as the status of every node can be represented
by one bit only. For example, on a 64-bit machine 64 value assignments
can be processed in parallel, making bit-parallel implication
very efficient.
Sequential circuits are often modelled as an iterative logic array
(ILA). In this model the time domain is unfolded into multiple
copies of the combinational logic block. These logic blocks
can be compiled into the corresponding implication graphs. Using bit-parallel techniques, a 64-bit machine makes it possible to keep 64 time-frames without increasing the size of the implication graph.
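The bit-parallel idea can be illustrated with plain integers standing in for machine words; the frame numbering below is invented for illustration only.

```python
# Sketch of bit-parallel evaluation: each node's mark is one bit per
# parallel assignment (or time frame), packed into one machine word.
# A 64-bit word then processes 64 assignments at once.

pred_a = 0b1011        # node a marked in frames 0, 1, 3
pred_b = 0b1110        # node b marked in frames 1, 2, 3

# an AND-node fires only where *both* predecessors are marked,
# an ordinary signal node where *at least one* predecessor is marked:
and_node_marks = pred_a & pred_b
sig_node_marks = pred_a | pred_b
```

A single bitwise operation thus evaluates the marking rule for all frames simultaneously.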
3 How to perform implications
3.1 Structure based
Structure based implication is a special form of event-driven
simulation. Contrary to ordinary simulation, which starts at the
primary inputs, implication is started at an arbitrary signal in the
circuit. Therefore, it has to proceed towards the primary outputs
as well as the primary inputs such that implications are often categorized
into forward and backward implications. Obviously, this
technique requires many table lookups for evaluating the module
functions. This becomes particularly costly for multi-valued log-
ics, e.g. the ones used in path delay ATPG.
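The table-lookup style of structure based implication might look as follows for a 2-input AND gate in two-valued logic; the function names and the restriction to Boolean values are our simplification, not the paper's multi-valued tables.

```python
# Sketch of structure based implication: events propagate forwards
# (evaluating the gate's function table) and backwards (e.g. an AND
# output at 1 forces both inputs to 1). None means "still unknown".

AND_TABLE = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}

def forward_and(a, b):
    # forward implication via table lookup
    if a == 0 or b == 0:
        return 0
    if a is None or b is None:
        return None
    return AND_TABLE[(a, b)]

def backward_and(out):
    # backward implication from the output towards the inputs
    if out == 1:
        return 1, 1          # both inputs must be 1
    return None, None        # out = 0 leaves the inputs unresolved
```

For multi-valued logics the tables grow accordingly, which is exactly the cost the section points out.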
3.2 Clause based
Clause based implication relies on Boolean Constraint Propagation
(BCP). BCP corresponds to an iterative application of the
Example 2.1 Circuit descriptions: structural - clauses - implication graph
• [Circuit schematic with signals a through f]
• [CNF description of the circuit]
• [Implication graph of the circuit]
unit clause rule proposed by Davis et al. in 1960 [22]. In BCP,
unary clauses are used to simplify other clauses until no further
simplification is possible or some clause becomes unsatisfied. Implication
is started by adding a unary clause, which represents the
initial signal assignment, to the CNF. All unary clauses computed
by BCP correspond to implications from the initial assignment as
they force the corresponding signals to a certain logic value. The
most time consuming task in BCP is the search for clauses that can
be simplified by the unit clause rule. This search is not necessary
when working on the implication graph since clauses that share
common variables are connected in the graph.
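The unit clause rule can be sketched compactly; the clause representation and the `~`-literal convention below are our own, not the paper's.

```python
# Sketch of Boolean Constraint Propagation (BCP): unary clauses assign
# their literal, which simplifies the other clauses until fixpoint or
# until some clause becomes empty (a conflict).

def neg(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def bcp(clauses):
    """Return (assignment, conflict) after exhaustive unit propagation."""
    clauses = [set(c) for c in clauses]
    assignment = set()
    while True:
        unit = next((c for c in clauses if len(c) == 1), None)
        if unit is None:
            return assignment, False
        lit = next(iter(unit))
        assignment.add(lit)
        remaining = []
        for c in clauses:
            if lit in c:
                continue                     # clause satisfied, drop it
            c = c - {neg(lit)}               # remove falsified literal
            if not c:
                return assignment, True      # empty clause: conflict
            remaining.append(c)
        clauses = remaining

assign1, conflict1 = bcp([["a"], ["~a", "b"]])   # (a) and (~a v b) give b
assign2, conflict2 = bcp([["a"], ["~a"]])        # contradictory units
```

The `next(...)` scan over all clauses is precisely the search the text calls the most time consuming task of BCP.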
3.3 Implication graph based
Implication graph based implication is simple and efficient, as it
only requires a partial traversal of the implication graph. Implying
from a signal assignment means that first the corresponding nodes
are marked in the implication graph. Then, the implication procedure
traverses the implication graph obeying the following rule:
Starting from an initial set S_I of marked nodes, all successor nodes s_j are marked
• if node s_j is a ∧-node and all its predecessors are marked.
• if node s_j represents an encoding bit and at least one predecessor is marked.
This rule is applied until no further propagation of marks is possible. All nodes that have been marked represent signal values that can be implied from the initial assignment given by S_I. Conflicting signal assignments are easily detected during implication, since they cause both nodes c_x and c̄_x to be marked.
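The marking rule amounts to a worklist traversal of the graph. The sketch below uses illustrative node names and a plain `and_nodes` set rather than the paper's C data structures.

```python
# Sketch of implication-graph marking: an AND-node is passed only once
# all of its predecessors are marked, any other node once at least one
# predecessor is marked.

from collections import defaultdict

def imply(edges, and_nodes, initial):
    preds, succs = defaultdict(set), defaultdict(set)
    for s, d in edges:
        preds[d].add(s)
        succs[s].add(d)
    marked, work = set(initial), list(initial)
    while work:
        n = work.pop()
        for s in succs[n]:
            if s in marked:
                continue
            if s in and_nodes and not preds[s] <= marked:
                continue            # AND-node waits for all predecessors
            marked.add(s)
            work.append(s)
    return marked

# chain ~e -> ~c -> ~f plus an AND-node A with predecessors a and b
edges = [("~e", "~c"), ("~c", "~f"), ("a", "A"), ("b", "A"), ("A", "c")]
m1 = imply(edges, {"A"}, {"~e"})        # A stays unmarked: a, b missing
m2 = imply(edges, {"A"}, {"a", "b"})    # both preds marked: A and c fire
```

Each node is visited at most once, so the traversal is linear in the size of the reached subgraph.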
Let us use the circuit of Ex. 2.1 for the sake of explanation. Assigning logical value 0 to signal e corresponds to marking node c̄_e in the implication graph. After running the implication procedure, the following nodes are marked: c̄_c and c̄_f. To finally obtain the implied signal values with respect to the given logic, the marked nodes are decoded according to the given encoding, i.e. we determine the implied values of signals c and f.
4 Deriving indirect implications
Contrary to direct implications, detection of indirect implications
requires a special analysis of the logic function of a circuit as
they represent information on the circuit that is not obvious from
its description. Most methods for computation of indirect implications
are subject to order dependency. That is, some indirect implications
can only be found if certain other indirect implications
have already been discovered. In order to avoid this problem, it has
been suggested to iterate their computation [18].
4.1 Structure based
The SOCRATES algorithm [12] was the first to introduce computation
of indirect implications using the following tautologies:
(a → b) ⇔ (¬b → ¬a)    (4)
((a → b) ∧ (¬a → b)) → b    (5)
While Eq. (4) (law of contraposition) may generate a candidate for an indirect implication, Eq. (5) identifies a fixed value.
Indirect implications are primarily computed in a pre-processing
phase. The idea is to temporarily set a given signal to a certain
logic value. Then, all possible direct implications from this signal
assignment are computed. For all implied signal values, it is
checked if the contrapositive cannot be deduced by direct implications
(learning criterion). In this case, the contrapositive is an indirect
implication. As indirect implications cannot be represented
within the data structure used to describe the circuit, structural algorithms
have to store them in an external data structure. This adds
additional complexity to structure based algorithms.
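The pre-processing loop just described can be sketched with a toy direct-implication relation; `impl_edges` and the simple reachability check below are our simplification of the structural algorithm.

```python
# Sketch of SOCRATES-style learning: temporarily assert a value,
# compute all direct implications, and record the contrapositive of
# each implied value as indirect whenever it is not already direct
# (the learning criterion).

def neg(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def direct_implications(impl_edges, start):
    """Transitive closure of direct implication edges from `start`."""
    reached, work = {start}, [start]
    while work:
        n = work.pop()
        for s, d in impl_edges:
            if s == n and d not in reached:
                reached.add(d)
                work.append(d)
    return reached - {start}

def learn(impl_edges, signals):
    learned = []
    for a in signals:
        for b in direct_implications(impl_edges, a):
            # learning criterion: contrapositive not already direct
            if neg(a) not in direct_implications(impl_edges, neg(b)):
                learned.append((neg(b), neg(a)))
    return learned

learned1 = learn([("a", "b")], ["a"])                  # ~b -> ~a is new
learned2 = learn([("a", "b"), ("~b", "~a")], ["a"])    # already direct
```

The learned pairs would be stored in an external data structure, which is exactly the added complexity the text mentions.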
4.2 Clause based
Clause based computation [17, 18] is similar to the structural algorithm
of Sec. 4.1. Each free literal a contained in the CNF is
temporarily set to 1. Then BCP is used to derive all possible direct
implications, i.e. unary clauses. For all generated unary clauses
b, it is checked if the contrapositive ¬b → ¬a is an indirect implication. In this case, the corresponding clause b ∨ ¬a is added to the clause database. Thereby, indirect implications enrich the
data structure used for representing the circuit functionality. Once
an indirect implication has been added to the clause database, it no longer requires any special attention. This is one important
advantage of clause based algorithms over structure based approaches
[18].
4.3 AND-OR enumeration
A different approach, known as recursive learning, has been
taken by Kunz et al. [3, 7]. Indirect implications are deduced by an AND-OR search [23] for all possible implications resulting from a signal assignment. This search is performed by recursively injecting
and reversing signal assignments, which correspond to the
different possibilities for justifying a gate, followed by deriving all
direct implications. Signal values that are common to all justifications
of a gate yield indirect implications. Only a simple structural
algorithm for executing implications is applied.
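One level of this enumeration reduces to intersecting the implication sets of all justifications. The implication table below is invented toy data; only the intersection mechanism is the point.

```python
# Sketch of one level of AND-OR enumeration (recursive learning): try
# every justification of an unjustified gate/clause, collect what each
# one implies, and keep only the values common to all justifications.

def common_implications(justifications, imply):
    sets = [{j} | imply(j) for j in justifications]
    return set.intersection(*sets)

def imply(assignment):
    # hypothetical direct-implication table (invented for illustration)
    table = {"d=0": {"b=0"}, "e=0": {"b=0", "c=0"}}
    return table.get(assignment, set())

# justifying f = 0 either via d = 0 or via e = 0 always forces b = 0,
# so b = 0 is an indirect implication of f = 0
common = common_implications(["d=0", "e=0"], imply)
```

Recursing on the still-unjustified gates inside each branch yields the deeper levels of the AND-OR tree.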
Let us illustrate the principles of the AND-OR enumeration with
the circuit of Ex. 2.1 and the AND-OR tree found in Fig. 4. The
[Figure 4: AND-OR enumeration]
root node of the AND-OR tree reflects the initial assignment; it is of the AND-type.² In our example, a logical 0 is assigned to signal f. As no further signal values can be implied, an OR-node is the only successor of the root node. The two justifications for this OR-node are examined in turn. In order to derive an indirect implication, we have to search for implied signal values that are common to both justifications. Here, b = 0 is implied for both justifications. This is represented by a new OR-node in level 0 of the AND-OR tree. In general, new OR-nodes in level 0 correspond to indirect implications. Further examination of gates in level 2, which have become unjustified because of setting b to 0, does not yield additional indirect implications.
4.4 Implication graph based
An implication graph based method for computing indirect implications
inherits all advantages of clause based techniques but
eliminates the costly search process required during BCP-based
implication. Moreover, our approach integrates computation of indirect
implications based on the law of contraposition and AND-OR
enumeration into the same framework.
² In general, an AND-node (marked by an arc) represents a signal assignment
due to justification of an unjustified gate, whereas an OR-node
denotes a signal value that can be implied from a chosen justification. Justified
gates correspond to OR-leaves and unjustified gates to internal OR-nodes
in the AND-OR graph [3].
4.4.1 Reconvergence analysis
The basic idea of determining indirect implications by a search
for reconvergencies is shown in Fig. 5.
[Figure 5: Learning by contraposition on the implication graph]
While implication c_a → c_b is deduced by direct implication, c̄_b → c̄_a forms an indirect implication. The ∧-node can only be passed if both of its predecessors are marked, i.e. it forms a reconvergent ∧-node during implication. If we start implication at node c̄_b, however, we cannot pass the ∧-node, as its other predecessor c_x is not marked. Applying the law of contraposition to c_a → c_b, we deduce c̄_b → c̄_a such that c̄_a is implied from c̄_b.
This observation is expressed in the following lemma:
Lemma 1: Let node c_x be marked due to a signal assignment. A reconvergent structure (c_x ⇝ c_y) in the implication graph yields an indirect implication c̄_y → c̄_x only if
• c_x is a fanout node in the implication graph.
• a node c_y is marked via a ∧-node and both predecessors of the ∧-node have been marked by implying along disjoint paths in the implication graph. (Proof: [21])
Using Lemma 1 it can be shown that the search for reconver-
gencies in the implication graph detects all indirect implications,
which are found by clause and structural based approaches.
Theorem 1: Indirect implications found by BCP on the (encoded) clause description can be identified by a search for the reconvergent structures defined in Lemma 1. (Proof: [21])
We explain the reconvergence analysis with the implication graph of Ex. 2.1. Let's assume that fanout node c_b is marked. Then, the implication procedure of Sec. 3.3 is invoked. As both c_d and c_e have been marked, the succeeding ∧-node and c_f are marked, too. The ∧-node has been reached via two disjoint paths in the graph (indicated by the dashed and solid line, respectively) such that the contrapositive c̄_f → c̄_b forms an indirect implication. This indirect implication is included into the graph in the form of the grey edge leading from node c̄_f to node c̄_b.
Applying our graph analysis offers the following advantages:
(1) The search for reconvergence regions in the implication graph
reduces the set of candidate signals that may yield an indirect im-
plication. Clause based methods have to temporarily assign a value
to all literals contained in the CNF.
(2) Reconvergence analysis is carried out very fast by an adapted
version of the algorithm presented in [24].
(3) Our method does not require a learning criterion such as the one used in the approach of [12].
4.4.2 Extended reconvergence analysis
Contrary to the reconvergence analysis of Sec. 4.4.1, the extended
reconvergence analysis detects conditional reconvergencies
at signal nodes. As it corresponds to an AND-OR search in the
implication graph, we need the following definitions:
Definition 2: A clause C = c_1 ∨ c_2 ∨ … ∨ c_n is called unjustified if its literals do not evaluate to 1 and at least one complement ¬c_i of a literal c_i is 1.
Unjustified ternary clauses are found in the implication graph without effort. They are represented by ∧-nodes that have exactly one of their two predecessors marked.
Definition 3: Let c_1, …, c_k be the unspecified literals in a clause C that is unjustified, and let V_1 denote the assigned values. Then, a set J of non-conflicting assignments is called a justification of clause C, if the value assignments in J make C evaluate to 1.
In a clause based framework a complete set of justifications J_c for an unjustified clause C is easily given by J_c = {{c_1 = 1}, …, {c_k = 1}}. For our approach, set J_c is even simpler, as only ternary clauses can be unjustified.³ Therefore, J_c always consists of exactly two justifications.
We will now explain how these two justifications can be derived
in the implication graph with Fig. 6.
[Figure 6: Unjustified ternary clause c_x ∨ c_y ∨ c_z due to assignment c̄_x]
The given ternary clause c_x ∨ c_y ∨ c_z is unjustified due to an assignment of c̄_x. This is indicated by the two ∧-nodes that have exactly one predecessor marked. Here, the ternary clause can be justified by setting c_z or c_y to 1. If we consider that the subgraph denoting the ternary clause c_x ∨ c_y ∨ c_z is a straightforward graphical representation of the following formulae
(c̄_x ∧ c̄_y) → c_z,  (c̄_x ∧ c̄_z) → c_y,  (c̄_y ∧ c̄_z) → c_x
it becomes apparent that both possible justifications in J_c are found in the consequents of those implications which have the literal making the clause unjustified, i.e. c̄_x, in their antecedent. These consequents correspond to the successors of the two ∧-nodes.
Let us now explain how the extended reconvergence analysis
corresponds to an efficient AND-OR search on the implication
graph with help of Fig. 7 showing the implication graph of Ex. 2.1.
An initial assignment of c̄_f makes clause C_a unjustified. Next, the possible justifications J_a1 and J_a2 are determined as the successors of the two ∧-nodes a_1 and a_2 belonging to clause C_a. These ∧-nodes correspond to AND-nodes J_a1 and J_a2 in the AND-OR tree, respectively.
³ If a binary clause is unjustified according to Definition 2, it reduces to a unary clause. Unary clauses represent necessary assignments (implied signal values) for the given signal assignment.
So as to distinguish between the consequences of the two
justifications, each one is assigned a different color. Thus, the successor node of the first justification is assigned a green marker (represented by dashed lines in Fig. 7) and all signals that can be implied from it are marked green. The same is done for the second justification using a red marker (dotted lines in Fig. 7). Nodes that are assigned both colors, i.e. nodes where the markers reconverge, can be implied independent of the chosen justification. These nodes can therefore be elevated to the previous level in the AND-OR tree. In our example, only node c̄_b is marked by both colors and we derive the indirect implication c̄_f → c̄_b. Further analysis of unjustified clauses C_b and C_g in level 2 of the AND-OR tree does not yield additional indirect implications.
This example indicates that the trace of the extended reconvergence
analysis is identical to the AND-OR tree generated by AND-OR
enumeration if marked -nodes are converted to AND-nodes
and marked signal nodes to OR-nodes. Obviously the extended
reconvergence analysis is capable of determining all indirect implications
given enough colors, i.e. it is complete.
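The color encoding can be sketched with small bitmasks, one bit per justification. Node names and the implied sets below are toy data; only the OR-accumulation and the all-colors test reflect the mechanism.

```python
# Sketch of bit-encoded colors: each justification owns one bit,
# marker words are OR-ed together per node, and a node carrying all
# color bits is implied under every justification, i.e. it can be
# elevated to the previous level as an indirect implication.

GREEN, RED = 0b01, 0b10
ALL_COLORS = GREEN | RED

def color_mark(implied_per_color):
    """implied_per_color: one set of marked nodes per color."""
    marks = {}
    for bit, nodes in zip((GREEN, RED), implied_per_color):
        for n in nodes:
            marks[n] = marks.get(n, 0) | bit
    return marks

# nodes implied under the green and the red justification (toy data)
marks = color_mark([{"~d", "~b"}, {"~e", "~b", "~c"}])
elevated = {n for n, m in marks.items() if m == ALL_COLORS}
```

With r colors packed into one machine word, the intersection test is a single comparison per node, which is the set operation the following list of advantages refers to.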
An efficient procedure implementing this extended reconvergence
analysis is given in [21]. It takes advantage of the implication
graph by encoding the colors locally at the nodes using only
bit slices of a full machine word. Thus, subtrees of the AND-OR
tree are stored in parallel in different bit-levels. Additionally, a bit-parallel
version of the implication algorithm introduced in Sec. 3.3
is used. Our algorithm supports a depth of r levels in the AND-OR
tree on a 2^r-bit architecture. On a DECAlphaStation, for example,
a maximal depth of 6 levels is available.
Let us briefly summarize the advantages of our approach:
(1) The implication graph model allows the full word size to be
exploited by means of bit-parallel techniques. The search for indirect implications requires efficient set operations as an OR-node
may only be elevated if it is a successor of both AND-nodes belonging
to an unjustified clause. These set operations are carried
out effectively on the implication graph by performing local bit-operations
at signal nodes such that no separate data structure is
needed. Please note that the advantage of efficient set operations remains if we extend our algorithm to handle arbitrary depths of AND-OR enumeration, which has already been done.
(2) The notion of unjustified gates necessary in [3, 7] reduces to
the simple concept of unjustified ternary clauses. Due to this concept
and the uniformity of our description, AND-OR enumeration
can easily be performed for arbitrary logics applying the same procedure. This has already been done for logic L_10. On the contrary,
higher valued logics are complicated to deal with in the structural
approach of [7, 3].
(3) Detected indirect implications can be included into the graph
immediately, which often facilitates the computation of other indirect
implications.
(4) Some indirect implications are easily computed by the law of contraposition while they would require a high depth of AND-OR search.
As our approach integrates both methods into one framework, indirect
implications can be identified by the best suited technique.
5 Experimental results
The implication engine presented in this paper has been implemented in a C language library of functions that has been applied successfully to several CAD problems. Please note that some of
[Figure 7: Extended reconvergence analysis on the implication graph]
the presented results have already been published in papers dealing
with application specific issues. The underlying implication
engine was not discussed. We have included these results in order
to show the efficiency of our flexible approach. While the experiments
for ATPG and netlist optimization were carried out on
a DECStation3000/600, the experiments for equivalence checking
were performed on a DECAlphaStation 250 4/266. ATPG and netlist
optimization rely on an earlier version of our implication engine,
that does not support the techniques of Sec. 4.4.2. So far, these advanced
techniques have only been used for equivalence checking.
Table 3 presents results for ATPG considering various fault models [25, 26, 27]. Due to the flexibility of the implication graph the various logics (L_3 and L_10) required for the different fault models could easily be handled. Table 3 gives the number of tested faults and CPU time required for performing ATPG for non-robust and robust path delay faults as well as stuck-at faults in combinational circuits (or sequential circuits with enhanced scan design).

Table 3: Result of test pattern generation
circuit   non-robust             robust                stuck-at
          #faults   time[s]      #faults   time[s]     #faults   time[s]
c5315     342117    643.4        81435     5251.8      5291      1.2
c7552     277244    1499.4       86252     5746.0      7419      5.2
The excellent quality of the achieved results can be seen from further tables in [25, 26, 27] where an extensive comparison to other state-of-the-art tools is made.

Table 4: Results of delay optimization
circuit   # gates (before / after)   # literals (before / after)   delay (before / after)   time [s]
c1908     488 / 402                  933 / 803                     41.2 / 33.9              1364
red.:     8.1%                       3.2%                          18.8%                    -
Results for optimization of mapped netlists with respect to delay
are provided in Table 4. The basic idea of the approach, which applies our implication engine to verify the permissibility of circuit transformations, is described in [6]. The number of gates, literals,
and the circuit delay before and after optimization, as well as the
required CPU time are given.
Results for equivalence checking of netlists are presented in Table
5. It lists the total time required for equivalence checking, i.e. ATPG plus computation of indirect implications, and the time consumed by the latter in columns 2 and 3, respectively. The maximal depth of AND-OR search necessary for successful verification is also given in column 4.

Table 5: Results for verifying against redundancy free circuits
circuit   time[s] total   time[s] indirect implications   max. level
c432      1.3             1.2                             1

We provide these early results in order
to show that our implication engine forms a suitable data structure
for building an efficient equivalence checker. Our straightforward
approach adopts the basic idea of the well-known equivalence
checker HANNIBAL [28] but does not include its advanced heuris-
tics, e.g. observability implications and heuristics for candidate se-
lection. Nevertheless, the results shown in Table 5 are comparable
to the ones reported in [28]. This indicates that our implication
engine is well suited for equivalence checking. Please note that it is easily incorporated into state-of-the-art implication based or hybrid (i.e. BDDs combined with implications) equivalence checkers such that these approaches can benefit, too.
6 Conclusion
In this paper we have proposed an efficient implication engine
working on a flexible data structure called implication graph. It
has been shown that indirect implications can be effectively computed
by analysis of the graph. Experimental results confirm the
efficiency and flexibility of our approach.
In the future, our preliminary equivalence checker will be extended
by deriving observability implications directly on the implication
graph. Furthermore, we will investigate how a hybrid
technique using BDDs and the implication graph can be advantageous
for equivalence checking.
Acknowledgements
The authors are very grateful to Prof. Kurt J. Antreich for many
valuable discussions and his advice. They would like to thank Bernhard
Rohfleisch and Hannes Wittmann for using the implication engine
in the netlist optimization tool and developing the path delay ATPG
tool, respectively.
--R
"Multi-level logic optimization by implication analysis,"
"LOT: Logic optimization with testability - new transformations using recursive learning,"
"And/or reasoning graphs for determining prime implicants in multi-level combinational networks,"
"Combinational and sequential logic optimization by redundancy addition and removal,"
"Perturb and simplify: Multi-level boolean network optimizer,"
"Logic clause analysis for delay optimization,"
"Recursive learning; a new implication technique for efficient solutions to cad problems - test, veri- fication, and optimization,"
"Advanced verification techniques based on learning,"
"A novel framework for logic verification in a synthesis environment,"
"Verilat: Verification using logic augmentation and transformations,"
"Aquila: An equivalence verifier for large sequential circuits,"
"Socrates: A highly efficient automatic test pattern generation system,"
"Improved deterministic test pattern generation with applications to redundancy identification,"
"Accelerated dynamic learning for test pattern generation in combinational circuits,"
"Functional learning: A new approach to learning in digital circuits,"
"Static logic implication with application to redundancy identification,"
"Test pattern generation using boolean satisfiability,"
"Com- binational test generation using satisfiability,"
"A transitive closure algorithm for test generation,"
"Grasp - a new search algorithm for satisfiability,"
"A sat-based implication engine,"
"A computing procedure for quantification theory,"
"A method of fault simulation based on stem regions,"
"A formal non-heuristic atpg approach,"
"Bit parallel test pattern generation for path delay faults,"
"Path delay atpg for standard scan designs,"
"Hannibal: An efficient tool for logic verification based on recursive learning,"
--TR
Artificial intelligence
Perturb and simplify
Multi-level logic optimization by implication analysis
Advanced verification techniques based on learning
Logic clause analysis for delay optimization
Path delay ATPG for standard scan design
A formal non-heuristic ATPG approach
VERILAT
GRASP - a new search algorithm for satisfiability
A Computing Procedure for Quantification Theory
Bit parallel test pattern generation for path delay faults
Static logic implication with application to redundancy identification
--CTR
João Marques-Silva , Luís Guerra e Silva, Solving Satisfiability in Combinational Circuits, IEEE Design & Test, v.20 n.04, p.16-21, January
F. Lu , M. K. Iyer , G. Parthasarathy , L.-C. Wang , K.-T. Cheng , K. C. Chen, An Efficient Sequential SAT Solver With Improved Search Strategies, Proceedings of the conference on Design, Automation and Test in Europe, p.1102-1107, March 07-11, 2005
Alexander Smith , Andreas Veneris , Anastasios Viglas, Design diagnosis using Boolean satisfiability, Proceedings of the 2004 conference on Asia South Pacific design automation: electronic design and solution fair, p.218-223, January 27-30, 2004, Yokohama, Japan
Paul Tafertshofer , Andreas Ganz, SAT based ATPG using fast justification and propagation in the implication graph, Proceedings of the 1999 IEEE/ACM international conference on Computer-aided design, p.139-146, November 07-11, 1999, San Jose, California, United States
Sean Safarpour , Andreas Veneris , Rolf Drechsler , Joanne Lee, Managing Don't Cares in Boolean Satisfiability, Proceedings of the conference on Design, automation and test in Europe, p.10260, February 16-20, 2004
Ilia Polian , Bernd Becker, Multiple Scan Chain Design for Two-Pattern Testing, Journal of Electronic Testing: Theory and Applications, v.19 n.1, p.37-48, February
Christoph Scholl , Bernd Becker, Checking equivalence for partial implementations, Proceedings of the 38th conference on Design automation, p.238-243, June 2001, Las Vegas, Nevada, United States
Ilia Polian , Hideo Fujiwara, Functional constraints vs. test compression in scan-based delay testing, Proceedings of the conference on Design, automation and test in Europe: Proceedings, March 06-10, 2006, Munich, Germany
Luís Guerra e Silva , L. Miguel Silveira , João Marques-Silva, Algorithms for solving Boolean satisfiability in combinational circuits, Proceedings of the conference on Design, automation and test in Europe, p.107-es, January 1999, Munich, Germany
E. Goldberg , M. Prasad , R. Brayton, Using SAT for combinational equivalence checking, Proceedings of the conference on Design, automation and test in Europe, p.114-121, March 2001, Munich, Germany
João Marques-Silva , Thomas Glass, Combinational equivalence checking using satisfiability and recursive learning, Proceedings of the conference on Design, automation and test in Europe, p.33-es, January 1999, Munich, Germany
Ilia Polian , Hideo Fujiwara, Functional Constraints vs. Test Compression in Scan-Based Delay Testing, Journal of Electronic Testing: Theory and Applications, v.23 n.5, p.445-455, October 2007
Ilia Polian , Alejandro Czutro , Bernd Becker, Evolutionary Optimization in Code-Based Test Compression, Proceedings of the conference on Design, Automation and Test in Europe, p.1124-1129, March 07-11, 2005
João P. Marques-Silva , Karem A. Sakallah, Boolean satisfiability in electronic design automation, Proceedings of the 37th conference on Design automation, p.675-680, June 05-09, 2000, Los Angeles, California, United States
Ilia Polian , Bernd Becker, Scalable Delay Fault BIST for Use with Low-Cost ATE, Journal of Electronic Testing: Theory and Applications, v.20 n.2, p.181-197, April 2004 | efficient ATPG;structure based methods;SAT-based implication engine;logic circuits;implication evaluation;indirect implications;implication graph;equivalence checking;graph algorithms;netlist optimization;automatic testing;graph model;circuit clause description |
266611 | Java as a specification language for hardware-software systems. | The specification language is a critical component of the hardware-software co-design process since it is used for functional validation and as a starting point for hardware-software partitioning and co-synthesis. This paper proposes the Java programming language as a specification language for hardware-software systems. Java has several characteristics that make it suitable for system specification. However, static control and dataflow analysis of Java programs is problematic because Java classes are dynamically linked. This paper provides a general solution to the problem of statically analyzing Java programs using a technique that pre-allocates most class instances and aggressively resolves memory aliasing using global analysis. The output of our analysis is a control dataflow graph for the input specification. Our results for sample designs show that the analysis can extract fine to coarse-grained concurrency for subsequent hardware-software partitioning and co-synthesis steps of the hardware-software co-design process to exploit. | Introduction
Hardware-software system solutions have increased in
popularity in a variety of design domains [1] because these
systems provide both high performance and flexibility.
Mixed hardware-software implementations have a number
of benefits. Hardware components provide higher performance
than can be achieved by software for certain time-critical
subsystems. Hardware also provides interfaces to
sensors and actuators that interact with the physical envi-
ronment. On the other hand, software allows the designer
to specify the system at high levels of abstraction in a flexible
environment where errors - even at late stages in the
design - can be rapidly corrected [2]. Software therefore
contributes to decreased time-to-market and decreased system
cost.
Hardware-software system design can be broken down
into the following main steps: system specification, parti-
tioning, and co-synthesis. The first step in an automatic
hardware-software co-design process is to establish a complete
system specification. This specification is used to validate
the desired behavior without considering
implementation details. Functional validation of the system
specification is critical to keep the system development
time short because functional errors are easier to fix and
less costly to handle earlier in the development process.
Given a validated system specification, the hardware-software
partitioner step divides the system into hardware,
software subsystems, and necessary interfaces by analyzing
the concurrency available in the specification. The partitioner
maps concurrent blocks into communicating
hardware and software components in order to satisfy performance
and cost constraints of the design. The final co-synthesis
step generates implementations of the different
subsystems by generating machine code for the software
subsystems and hardware configuration data for the hardware
subsystems.
The system specification is a critical step in the co-design
methodology because it drives the functional validation
step and the hardware-software partitioning process.
Thus, the choice of a specification language is important.
Functional validation entails exploration of the design
space using simulation; hence, the specification must allow
efficient execution. This requires a compile and run-time
environment that efficiently maps the specification onto
general-purpose processor platforms. On the other hand,
the partitioning process requires a precise input specification
whose concurrency can be clearly identified. Generating
a precise specification requires language constructs and
abstractions that directly correspond to characteristics of
hardware or software. Traditionally, designers have not
been able to reconcile these two objectives in one specification
language, but have instead been forced to maintain
multiple specifications. Obviously, maintaining multiple
specifications of the design is at best tedious due to the
need to keep all specifications synchronized. It is also error-prone
because different specification languages tend to
have different programming models and semantics. This
need for multiple specifications is due to shortcomings of
current specification languages used in hardware-software
co-design.
Hardware-software specification languages currently
used by system designers can be divided into software programming
languages and hardware description languages.
Software languages such as C or C++ generate high-performance
executable specifications of system behavior for
functional validation. Software languages are traditionally
based on a sequential execution model derived from the execution
semantics of general purpose processors. However,
software languages generally do not have support for modeling
concurrency or dealing with hardware issues such as
timing or events. These deficiencies can be overcome by
providing the designer with library packages that emulate
the missing features [15]. A more serious problem is that
software languages allow the use of indirect memory referencing,
which is very difficult to analyze statically; this makes it
hard for static analysis to extract the implicit concurrency
within the specification. Hardware description
languages such as Verilog [5] and VHDL [6] are optimized
for specifying hardware with support for a variety of hardware
characteristics such as hierarchy, fine-grained con-
currency, and elaborate timing constructs. Esterel is
another specification language similar to Verilog with
more constructs for handling exceptions [7]. SpecCharts
builds on a graphical structural hierarchy while using
VHDL to specify the implementations of the various structures
in the hierarchy [4]. These languages do not have
high-level programming constructs, and this limits their expressiveness
and makes it difficult to specify software. Fur-
thermore, these languages are based on execution models
that require a great deal of run-time interpretation such as
event-driven semantics. This results in low-performance
execution compared to software languages.
This paper advocates the use of Java as a single specification
language for hardware-software systems by identifying
language characteristics that enable both
efficient functional validation and concurrency exploration
by the hardware-software partitioner. Java is a general-pur-
pose, concurrent, object-oriented, platform-independent
programming language [10]. Java is implementation-independent
because its run-time environment is an abstract
machine called the Java virtual machine (JVM) with its
own instruction set called bytecodes [11]. The virtual machine
uses a stack-based architecture; therefore, Java bytecodes
use an operand stack to store temporary results to be
used by later bytecodes. Java programs are set in an object-oriented
framework and consist of multiple classes, each of
which is compiled into a binary representation called the
classfile format. This representation lays out all the class
information including the class's data fields and methods
whose code segments are compiled into bytecodes. These
fields and methods can be optionally declared as static.
Static fields or methods of a class are shared by all instances
of that class while non-static fields or methods are duplicated
for each new instance. Data types in Java are either
primitive types such as integers, floats, and characters or
references (pointers) to class instances and arrays [10].
Since Java classes are predominantly linked at run-time,
references to class instances cannot be resolved at compile-
time. This presents a challenge to static analyzers in determining
data flow through data field accesses and control
flow through method calls.
This paper also outlines a control/dataflow analysis
technique that can be used as a framework for detecting
concurrency in the design. Our analysis technique provides
a general solution for the problem of dynamic class allocation
by aggressively pre-allocating most class instances at
compile-time and performing global reference analysis.
The rest of the paper is organized as follows. In Section
2 we explain why Java is well-suited for hardware-software
system specification. In Section 3 we identify the
problems that arise when analyzing Java programs and
present a general solution for building control flow and
dataflow dependence information. We apply our technique
to three sample designs and analyze both explicit and implicit
concurrency in these designs in Section 4. We conclude
and briefly discuss future directions in Section 5.
2 Hardware-Software Specification with Java
It is desirable for the hardware-software co-design
process to use a single specification language for design
entry because specifications using different languages for
software and hardware combine different execution mod-
els. This makes these specifications difficult to simulate
and to analyze. Some researchers begin with a software
programming language usually C++ and extend this language
with constructs to support concurrency, timing, and
events by providing library packages or by adding new language
constructs. Examples of this approach are Scenic
[15] and V++ [17]. We take a slightly different approach.
Instead of requiring the designer to specify the hardware
implementation details in the specification, in our approach
the designer models the complete system in an algorithmic
or behavioral fashion. Software languages are well-suited
for this type of modeling. Once the specification is com-
plete, an automatic compilation process is used to analyze
the specification to identify the coarse-grained concurrency
described by the designer and uncover the finer-grained
concurrency implicit in the specification. The partitioning
and synthesis steps of the hardware-software co-design
process use the concurrency uncovered by this analysis to
create an optimized hardware-software system. The specification
language used with this approach must have the
ability to specify explicit concurrency and make it easy to
uncover the implicit concurrency.
Coarse-grained concurrency is intuitive for the designer
to specify because hardware-software systems are often
conceptualized as sets of concurrent behaviors [2]. Java is
a multi-threaded language and can readily express this sort
of concurrency. Such concurrent behaviors can be modeled
by sub-classing the Thread class and overriding its run
method to encode the thread behavior as shown in Figure 1.
The Thread class provides methods such as suspend and
resume, yield, and sleep that manipulate the thread. Synchronization,
however, is supported at a lower level using
monitors implemented in two bytecode operations that provide
an entry and an exit to the monitor. The sample design
shown in Figure 1 maintains synchronization when reading
and writing the x-array in the methods getXArray and
setXArray, which are tagged as synchronized.

Figure 1. Concurrency in Java. The design comprises three classes:
system, whose static main method instantiates procAClass and
procBClass and launches both threads; procAClass, a Thread subclass
that generates the x-array in its run method and guards access to it
with synchronized setXArray and getXArray methods and an xready flag;
and procBClass, a Thread subclass holding a reference to procAClass
that waits on xready and then processes the x-array data.
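The pattern of Figure 1 can be sketched as a compilable fragment. The field and method names (xready, setXArray, getXArray) follow the figure, but the squared-value loop body and the produceAndRead helper are hypothetical stand-ins for computation the figure elides.

```java
// Sketch of the Figure 1 pattern: a Thread subclass fills an array and
// guards access with synchronized methods (monitor enter/exit bytecodes).
class XArrayProducer extends Thread {
    private int[] xarray;           // shared x-array data
    private boolean xready = false; // handshake flag, as in the figure

    public synchronized void setXArray(int[] data) {
        xarray = data;              // write x-array data
        xready = true;
    }

    public synchronized int[] getXArray() {
        return xarray;              // return x-array data
    }

    @Override
    public void run() {
        int[] x = new int[100];
        for (int i = 0; i < x.length; i++)
            x[i] = i * i;           // hypothetical per-element computation
        setXArray(x);
    }

    // Helper: fork the producer thread, wait for it, read one element.
    static int produceAndRead(int index) {
        XArrayProducer p = new XArrayProducer();
        p.start();                  // forks a new thread (a CDFG fork edge)
        try {
            p.join();               // wait for the producer to finish
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return p.getXArray()[index];
    }

    public static void main(String[] args) {
        System.out.println(produceAndRead(7)); // prints 49
    }
}
```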
Fine-grained concurrency is usually either non-intuitive
or cumbersome for the designer to express in the spec-
ification. This implies that an automated co-design tool
must be able to uncover fine-grained concurrency by analyzing
the specification. The primary form of concurrency
to look for is loop-level concurrency where multiple iterations
of the same loop can be executed simultaneously.
This form of concurrency is important to detect because algorithms
generally spend most of their time within core
loops. Identifying and exploiting parallel core loops can
thus provide significant performance enhancements. Determining
whether loop iterations are parallel requires analysis
to statically determine if data dependencies exist
across these loop iterations. In the run method of procA-
Class shown in Figure 1, if the compute_xarray call does
not depend on values generated in previous iterations of the
for-loop, then all the iterations of the loop may be executed
simultaneously. The major hurdle that the data dependence
analysis must overcome is dealing with memory references
because these references introduce a level of indirection in
reading and writing physical memory locations. Compile-time
analysis has to be conservative in handling such refer-
ences. This conservatism is necessary to guarantee correct
system behavior across transformations introduced by the
partitioning step based on the results of the analysis. How-
ever, this conservatism causes the analysis to generate false
data dependences which are nonexistent at the system lev-
el. These dependences reduce the data parallelism that the
analysis detects. In the simple design shown in Figure 1,
without the ability to analyze dependences within the for-loop
and across the associated method call, conservative
analysis would determine that the loop iterations are inter-dependent
and hence can only be performed sequentially
reducing the degree of data parallelism in that section of the
specification by 100-fold. The advantage that Java has over
a language like C++ is that Java restricts the programmer's
use of memory references. In Java, memory references are
strongly typed. Also, references are strictly treated as object
handles and not as memory addresses. Consequently,
pointer arithmetic is disallowed. This restrictive use of references
enables more aggressive analysis to reduce false
data dependences.
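As an illustration of the loop-level concurrency the analysis hunts for, the hypothetical loop below writes a distinct array element per iteration, so its iterations carry no cross-iteration dependence and can be distributed over mini-threads without changing the result. All names here are invented for the example.

```java
// Illustrative only: a loop whose iterations are independent (each writes
// its own a[i]), executed sequentially and then in parallel mini-threads.
class ParallelLoop {
    static int[] sequential(int n) {
        int[] a = new int[n];
        for (int i = 0; i < n; i++) a[i] = 2 * i + 1; // independent iterations
        return a;
    }

    static int[] parallel(int n, int threads) {
        int[] a = new int[n];
        Thread[] ts = new Thread[threads];
        for (int t = 0; t < threads; t++) {
            final int tid = t;
            ts[t] = new Thread(() -> {
                // Each mini-thread handles a disjoint stride of iterations.
                for (int i = tid; i < n; i += threads) a[i] = 2 * i + 1;
            });
            ts[t].start();
        }
        for (Thread th : ts) {
            try {
                th.join();
            } catch (InterruptedException e) {
                throw new RuntimeException(e);
            }
        }
        return a;
    }

    public static void main(String[] args) {
        System.out.println(
            java.util.Arrays.equals(sequential(1000), parallel(1000, 4)));
    }
}
```

Because no iteration reads a value written by another, the parallel version is guaranteed to produce the same array as the sequential one.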
A co-design specification language should provide
high-performance execution to enable rapid functional val-
idation. Java's execution environment uses a virtual ma-
chine. The JVM provides platform-independence;
however, this independence requires Java code to be executed
by an interpreter which reduces execution performance
compared to an identical specification modeled in
C++. Although this performance degradation is at least an
order of magnitude for Sun's JDK 1.0 run-time environ-
ment, techniques such as just-in-time compilation are closing
the performance gap to less than two times that of
C++ [12][16]. This evolution in Java tools and technology
has been and will be driven by Java's success in other do-
mains, especially network-based applications. Moreover,
the Java run-time environment makes it easy to instrument
and gather profiling information which can be used to
guide hardware-software partitioning.
3 Analyzing Java Programs
Control and dataflow analysis of the Java specification
is required for partitioning and co-synthesis steps of the co-design
process. This analysis examines the bytecodes of invoked
methods to determine their relative ordering and
data dependencies. These bytecodes have operand and result
types that are either primitive types, or classes and ar-
rays. While primitive types are always handled by value,
class and array variables are handled by reference. These
object (class instance) references are pointers; however,
they are well-behaved compared to their C/C++ counter-parts
because these references are strongly typed and cannot
be manipulated.
Object references point to class instances that are
linked dynamically during run-time. So, prior to executing
the Java program, we can only allocate the static fields and
methods of the program's classes. This makes it difficult to
statically analyze Java programs because if object references
cannot be resolved, calls to methods of these dynamically
linked objects cannot be resolved either. This makes it
impossible to determine control flow. The only way to deal
with this problem is to conservatively assign the method invocation
to software so that the software run-time system
can handle the dynamic resolution. However, this reduces
the opportunities for extracting parallelism in hardware and
thus leads to inferior hardware-software design.
In order to avoid the problem with dynamically linked
objects, the specification could be restricted to use only
static fields and methods or be forced to allocate all necessary
objects linearly at the beginning of the program. How-
ever, this would significantly restrict the use of the
language. Our solution is to attempt to pre-allocate objects
during static analysis. It should be noted that this approach
does not handle class instantiations within loops or recursive
method invocations.
Pre-allocation only partially solves the problem with
dynamically allocated class instances. A class reference
can point to any instance of compatible class type; there-
fore, two references of compatible class types can alias.
Conservative handling of reference aliasing reduces the apparent
concurrency in the specification. More aggressive
reference aliasing analysis requires global dataflow analysis
to determine a class instance or set of instances that a
reference may point to.
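The aliasing problem this analysis targets is easy to exhibit in a few lines: after b = a, both references denote the same instance, so a write through one is visible through the other. The Box class below is invented for the example.

```java
// Two references of compatible class type aliasing one instance: without
// points-to analysis, a write through b must conservatively be assumed
// to affect anything a might point to.
class AliasDemo {
    static class Box { int value; }

    static int writeThroughAlias() {
        Box a = new Box();
        Box b = a;        // b aliases a: both point to the same instance
        b.value = 42;     // a write through b ...
        return a.value;   // ... is observable through a
    }

    public static void main(String[] args) {
        System.out.println(writeThroughAlias()); // prints 42
    }
}
```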
An outline of our analysis technique is shown in
Figure
2. The analysis starts with the static main method.
For each method processed, local analysis is performed to
determine local control and dataflow. Next, all methods invoked
by the current method are recursively analyzed. Fi-
nally, reference point-to values are resolved in order to
determine global data dependence information. Before
elaborating on the techniques used to perform the local and
global analyses, we describe the target representation of the
CDFG.
The CDFG representation shown in Figure 3 involves
two main structures. The first structure is a table of static
and pre-allocated class instances. Aside from object accounting
information, this table maintains a list of entries
per object; each entry represents either a method or a
non-primitive type data field. The data field entry is necessary
for global analysis because data fields have a global scope
during the life of their instances. Arrays are treated exactly
as class instances; in fact, arrays are modeled as classes
with no methods. The method entries point to portions of
the second main structure in the representation. The second
structure is the control dataflow information. Its nodes are
bytecode basic blocks. The edges represent local control
flow between basic blocks within a method as well as global
control flow across method invocations and returns.

Figure 2. Analysis technique outline

ProcessMethod (current_method) {
    Perform local analysis on current_method to build local control
        flow information and resolve local dependencies.
    Pre-allocate new instantiations if not inside loops or recursion.
    For each method invoked {
        ProcessMethod (invoked_method)
        Resolve reference global analysis impacted by invoked_method
    }
    Resolve global dependencies given complete reference analysis
}
ProcessMethod (main)

Figure 3. Target representation. The representation comprises a table
of static entities and pre-allocated entities whose method entries
point into a control flow graph of basic blocks, linked by a method
call graph.
The CDFG representation models multi-threading and
exceptions using special control flow edges that annotate
information about the thread operation performed or the
exception trapped. Thread operations in Java are implemented
in methods of the Thread class. The CDFG abstracts
invocations of these methods by encoding the
associated operation in the control flow edge corresponding
to the method call. For example, Java threads are initiated
by invoking the Thread class's start method. When the
CDFG encounters an invocation of the start method, a new
control flow edge is inserted between the invocation and
the start of the thread's run method. This edge also indicates
that a new thread is being forked. Exceptions in Java, on
the other hand, use try-catch blocks where the code
which may cause an exception is placed inside the try
clause followed by one or more subsequent catch clauses.
Catch blocks trap on a specified thrown exception and execute
the corresponding handler code. The CDFG inserts
special control flow edges between the block that may
cause the exception and the handler block. These edges are
annotated with the type of exception the handler is trapping.
An example of how exceptions are handled is shown in Figure 4.
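In source form, the try-catch structure that the CDFG turns into annotated exception edges looks like the following sketch; the division is a hypothetical stand-in for "code which may cause an exception".

```java
// A try block that may throw ArithmeticException; the catch block is the
// handler node the CDFG links to via an annotated exception edge.
class ExceptionDemo {
    static int divideOrDefault(int num, int den) {
        try {
            return num / den;           // may throw ArithmeticException
        } catch (ArithmeticException e) {
            return -1;                  // handler code
        }
    }

    public static void main(String[] args) {
        System.out.println(divideOrDefault(10, 2)); // prints 5
        System.out.println(divideOrDefault(10, 0)); // prints -1
    }
}
```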
3.1 Local Analysis
This step targets a particular method, identifying and
sequencing its basic blocks to capture the local control
flow. It also resolves local dependencies at two distinct levels.
First, since Java bytecodes rely on an operand stack for
the intermediate results, the extra level of dependency indirection
through the stack needs to be factored out. This is
achieved using bytecode numbering. Second, dependencies
through local method variables are identified using reaching
definition dataflow analysis.

Figure 4. Exception edges in CDFG. A try block whose code may throw an
ArithmeticException is linked by a special control flow edge, annotated
with the exception type, to the basic block containing the handler code.
Control flow analysis. Control flow is represented by
the method's basic blocks and the corresponding sequenc-
ing. Basic blocks are sequences of bytecodes such that only
the first bytecode can be directly reached from outside the
block and if the first bytecode is executed, then all the byte-codes
are sequentially executed. The control flow edges
simply represent the predecessor-successor ordering of all
the basic blocks.
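Although the paper does not spell out the block-finding procedure, basic blocks of this kind are conventionally identified with the standard "leaders" scan; the sketch below assumes each branch instruction lists its target PCs, an encoding invented for the example.

```java
import java.util.*;

// Standard leader-based basic block partitioning: an instruction starts a
// block if it is the method entry, a branch target, or follows a branch.
class BasicBlocks {
    // branchTargets maps a branch instruction's PC to its target PCs.
    static List<Integer> leaders(int n, Map<Integer, int[]> branchTargets) {
        SortedSet<Integer> leaders = new TreeSet<>();
        leaders.add(0);                                      // method entry
        for (Map.Entry<Integer, int[]> e : branchTargets.entrySet()) {
            for (int t : e.getValue()) leaders.add(t);       // branch targets
            if (e.getKey() + 1 < n) leaders.add(e.getKey() + 1); // fall-through
        }
        return new ArrayList<>(leaders);
    }

    public static void main(String[] args) {
        // Instructions 0..7 with one conditional branch at PC 2 targeting PC 5:
        // blocks start at 0 (entry), 3 (fall-through), and 5 (target).
        Map<Integer, int[]> br = new HashMap<>();
        br.put(2, new int[]{5});
        System.out.println(leaders(8, br)); // prints [0, 3, 5]
    }
}
```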
Stack dependency analysis. Dependencies exist between
bytecodes through an extra level of indirection - the operand
stack. We resolve this indirection by using "bytecode
numbering." Bytecode numbering simply denotes the replacement
of the stack semantics of each bytecode analyzed
with physical operands that point to the bytecode that
generated the required result. This is simply achieved by
traversing the method's bytecodes in program order. Instead
of executing the bytecode, its stack behavior is simulated
using a compile-time operand stack, OpStack. If the
bytecode reads data off the stack, entries are popped off
OpStack, and new operands are created with the values retrieved
from the stack. If the bytecode writes a result to the
stack, a pointer to it is pushed onto OpStack. This process
has to account for data that requires more than one stack entry
such as double precision floating point and long integer
results. Also, stack-manipulating bytecodes such as dup
(duplicate top entry) or swap (swap top two entries) are interpreted
by manipulating OpStack accordingly. Then,
these bytecodes are discarded since they are no longer
needed for the purposes of code functionality. An outline
and an example of bytecode numbering are shown in
Figure
5.
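The numbering procedure can be sketched with a toy instruction set: "push" stands for any value-producing bytecode, and any other opcode is treated as a two-operand bytecode such as add or mul. This encoding is invented for the illustration; only the OpStack simulation mirrors the technique described above.

```java
import java.util.*;

// Toy sketch of bytecode numbering: simulate the operand stack at compile
// time, recording for each bytecode which earlier bytecodes produced its
// operands (its "physical operands").
class BytecodeNumbering {
    static List<int[]> number(String[] code) {
        Deque<Integer> opStack = new ArrayDeque<>(); // holds producer PCs
        List<int[]> operands = new ArrayList<>();
        for (int pc = 0; pc < code.length; pc++) {
            if (code[pc].equals("push")) {
                operands.add(new int[0]);             // reads nothing
            } else {                                  // e.g. add, mul: pop two
                int right = opStack.pop(), left = opStack.pop();
                operands.add(new int[]{left, right}); // producing bytecodes
            }
            opStack.push(pc);                         // result produced at pc
        }
        return operands;
    }

    public static void main(String[] args) {
        // push push add push mul: add reads PCs 0,1; mul reads PCs 2,3.
        List<int[]> ops =
            number(new String[]{"push", "push", "add", "push", "mul"});
        System.out.println(Arrays.toString(ops.get(2)) + " "
                           + Arrays.toString(ops.get(4)));
        // prints [0, 1] [2, 3]
    }
}
```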
Data dependencies across local variables are resolved
by computing the reaching definitions for the particular
method. A definition of a variable is a bytecode that may
assign a value to that variable. A definition d reaches some
point p if there exists a path from the position of d to p such
that no other definition that overwrites d is encountered.
Once all the reaching definitions are computed, it would be
clear that there exists a data dependency between bytecode
m and bytecode n if m defines a local variable used by n and
m's definition reaches the point immediately following n.
Computing the reaching definitions uses the iterative
dataflow Worklist algorithm [8]. This algorithm iterates
over all the basic blocks. A particular basic block propagates
definitions it does not overwrite. At a join point of
multiple control branches, the set of reaching definitions is
the union of the individual sets. The algorithm iterates over
the set of successors of all basic blocks whose output set of
reaching definitions changes and converges when no more
changes in these sets of reaching definitions materialize.
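The Worklist iteration reads, in sketch form, as follows; the gen/kill encoding and the four-block example CFG are illustrative, not taken from the paper.

```java
import java.util.*;

// Iterative worklist computation of reaching definitions:
// IN[b] = union of OUT over predecessors (the meet at join points),
// OUT[b] = gen[b] + (IN[b] - kill[b]); iterate until no OUT set changes.
class ReachingDefs {
    static List<Set<Integer>> solve(int[][] preds, int[][] gen, int[][] kill) {
        int n = preds.length;
        List<Set<Integer>> out = new ArrayList<>();
        for (int i = 0; i < n; i++) out.add(new HashSet<>());
        Deque<Integer> worklist = new ArrayDeque<>();
        for (int i = 0; i < n; i++) worklist.add(i);
        while (!worklist.isEmpty()) {
            int b = worklist.poll();
            Set<Integer> in = new HashSet<>();
            for (int p : preds[b]) in.addAll(out.get(p)); // union at joins
            Set<Integer> newOut = new HashSet<>(in);
            for (int k : kill[b]) newOut.remove(k);       // block overwrites
            for (int g : gen[b]) newOut.add(g);           // block defines
            if (!newOut.equals(out.get(b))) {             // changed: revisit
                out.set(b, newOut);                       // all successors
                for (int s = 0; s < n; s++)
                    for (int p : preds[s]) if (p == b) worklist.add(s);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // Block 0 defines d0; branch block 1 redefines it (gen d1, kill d0);
        // branch block 2 leaves it alone; block 3 joins the two branches.
        int[][] preds = {{}, {0}, {0}, {1, 2}};
        int[][] gen   = {{0}, {1}, {}, {}};
        int[][] kill  = {{}, {0}, {}, {}};
        System.out.println(solve(preds, gen, kill).get(3)); // d0 and d1 reach
    }
}
```

At the join (block 3), both d0 and d1 reach, reflecting the union of the two branches' reaching sets.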
3.2 Global Analysis
To handle data dependencies between references, global
analysis generates for each reference the set of object
instances to which it may point out of the set of pre-allocated
instances. Once this points-to relation is determined,
simple dataflow analysis techniques such as global reaching
definition can compute dataflow dependencies between
these references.
A straightforward solution is to examine the entire
control flow graph while treating method invocations as
regular control flow edges. Then, iterative dataflow analysis
can generate the points-to information for every refer-
ence. However, this approach suffers from the problem of
unrealizable paths which cause global aliasing information
to propagate from one invocation site to a non-corresponding
return site [9].
Figure 5. Bytecode numbering example

Initialize symbolic operand stack, OpStack, to empty.
Traverse basic blocks in reverse postorder.
If current bytecode reads data from the stack,
    pop OpStack into the appropriate bytecode operand slot.
If current bytecode writes data to the stack,
    push the bytecode's PC onto OpStack.

(The figure traces this procedure on a short bytecode sequence - iload_1,
ldc_w #4, a two-entry compare-and-branch, iconst_m1, and two ireturns -
showing the OpStack status as each bytecode is processed.)
A more context-sensitive solution motivated by [9] is
to generate a transfer function for each method to summarize
the impact of invoking that method on globally accessible
data and references. The variables that this transfer
function maps are the formal method parameters that are
references. In addition, this set of variables is extended to
include global references used inside the method through
(1) creating new instances, (2) invoking other methods that
return object references, or (3) accessing class instance
fields that are references. Input to this transfer function is
the initial points-to values of the extended parameters set.
Output generated by this transfer function is the final
points-to values of the extended parameters due to the
method invocation.
This transfer function is a summary of the accesses
(reads and writes) of the method's extended parameters
generated using interval analysis [8]. These accesses are
ordered according to the method's local control flow infor-
mation. Accesses can be one of the following five primitive
operations: read, assign, new, meet and invoke. The read
primitive requires one operand which is a reference; the result
is the set of potential class instances to which the reference
points. The assign primitive is used to summarize an
assignment whose left-hand side is an extended parameter.
It requires two operands, the first of which is the target reference.
The second is a set of potential points-to instances.
The new primitive indicates the creation of a new class in-
stance. This primitive returns a set composed of a single in-
stance, if pre-allocation is possible (not within loop or
recursion). Otherwise, it conservatively points-to the set of
compatible class instances. The meet primitive is necessary
to handle joining branches in the control flow. At a meet
point, the alias set of some reference assigned in one or
more of the meeting branches is the union of the alias sets
for that reference from each of the meeting control flow
edges. Finally, the invoke primitive is used to resolve
change in reference alias sets due to invoking some meth-
od. Effectively, this primitive causes the transfer function
of the invoked method to be executed.
Figure 6. Transfer functions for global reference analysis. For
some_method(obj a, obj b), whose body allocates a new obj instance and
contains a conditional assignment, the summary transfer function
TF{some_method} begins assign(a, new(obj)) and records the remaining
reference accesses in control flow order.
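One way to sketch the five primitives is as operations on a points-to environment mapping reference names to sets of instance labels. The someMethodTF transfer function below mirrors the shape of the Figure 6 example, but all names and labels are illustrative, not the paper's implementation.

```java
import java.util.*;

// Points-to environment manipulated by the primitives: read, assign,
// new (a fresh pre-allocated instance), and meet (union at control flow
// joins); invoke would simply apply a callee's transfer function.
class PointsTo {
    final Map<String, Set<String>> env = new HashMap<>();

    Set<String> read(String ref) {                       // instances ref may denote
        return env.getOrDefault(ref, Set.of());
    }
    void assign(String ref, Set<String> instances) {     // ref = <expr>
        env.put(ref, new HashSet<>(instances));
    }
    Set<String> newInstance(String label) {              // pre-allocated instance
        return Set.of(label);
    }
    void meet(String ref, Set<String> otherBranch) {     // join point: union
        env.merge(ref, new HashSet<>(otherBranch),
                  (a, b) -> { a.addAll(b); return a; });
    }

    // TF for a hypothetical some_method(a, b):
    //   a = new obj(); if (test1) b = a;
    void someMethodTF() {
        assign("a", newInstance("obj#1"));
        meet("b", read("a"));  // b may keep its old target or alias a
    }

    public static void main(String[] args) {
        PointsTo pt = new PointsTo();
        pt.assign("b", Set.of("obj#0"));   // initial points-to of parameter b
        pt.someMethodTF();
        System.out.println(pt.read("a") + " " + pt.read("b"));
    }
}
```

After the transfer function runs, a points only to the new instance, while b conservatively points to both its old target and the new one.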
4 Experimental Results
Hardware-software systems are multi-process sys-
tems, so partitioning and co-synthesis tools which map behavioral
specifications to these systems need to make
hardware-software trade-offs [13]. To make these trade-offs
with the objective of maximizing the cost-performance
of the mixed implementation, it is necessary to be able to
identify the concurrency in the input specification. We
have implemented our Java front-end analysis step as a
stand-alone compilation pass that reads the design's class
files and generates a corresponding CDFG representation.
We tested our technique using the designs listed in Table 1.
The first design, raytracer, is a simple graphical appli-
cation. It renders two spheres on top of a plane with shadows
and reflections due to a single, specular light source.
The second application, robotarm, is a robot arm control-
ler. The third design, decoder, is a digital signal processing
application featuring a video decoder for H.263-encoded
bitstreams [14].
The resulting control-dataflow graphs were analyzed
to identify concurrency in the specification. The analysis
examined concurrency at three levels: thread-level, loop-
level, and bytecode-level. Thread-level concurrency is exhibited
as communicating, concurrent processes which can
span the control flow of several methods. Loop-level concurrency
is exhibited by core loops usually confined to a
single method. Bytecode-level concurrency is exhibited by
bytecode operations that can proceed provided their data
dependencies are satisfied irrespective of a control flow or-
dering. This form of concurrency exists within basic
blocks.
Thread-level concurrency is explicitly expressed by
the designer through Java threads. Since threads are
uniquely identified in the CDFG, no work is required to uncover
this form of parallelism. Loop-level concurrency requires
analysis of control and dataflow information
associated with inner loops to identify data dependencies
spanning different loop iterations and determine if these are
true dependencies, that is, dependencies between a write in
some iteration of the loop and a read in a subsequent itera-
tion. So, loops with independent iterations can execute
these iterations concurrently as mini-threads. The coarse-grained
concurrency expressed at the thread or loop level
can be exploited by allocating these threads to different
subsystems in our target architecture.

Table 1: Design characteristics

Design      Lines of Java   Classes   Instances   Basic Blocks
raytracer   698             6         37          358
On the other hand, bytecode-level concurrency in the
CDFG does not span multiple basic blocks; it exists at the
bytecode level within each basic block. Its degree depends
on the basic block's "inter-bytecode" data dependencies.
This fine-grained concurrency impacts the performance
improvement of a hardware implementation of the basic
block. Hardware is inherently parallel; therefore, parallelism
in the design is implemented without any cost overhead
given enough structural resources to support the
parallelism. The only limitation on the degree of parallelism
is synchronization due to data dependencies. Hence the
execution time of some block in hardware decreases with
increased data parallelism.
Table 2 presents the results of analyzing the three different
forms of parallelism in our sample designs. The first
column indicates the number of designer-specified threads.
The second column shows the number of parallelizable
loops while the third column indicates the average number
of bytecodes per loop. The fourth column shows the average
number of bytecodes per basic block while the fifth
column assesses the average data parallelism in these basic
blocks. This bytecode-level concurrency is measured as the
average number of bytecodes that can execute simultaneously
during a cycle of the JVM. These results show that
it is possible to extract parallelism at various levels of granularity
for Java programs.
5 Conclusions and Future Work
The specification language is the starting point of the
hardware-software co-design process. We have described
key requirements of such a language. A specification language
should be expressive so that design concepts can be
easily modeled but should provide a representation that is
relatively easy to analyze and optimize for performance.
The language should also provide high-performance exe-
cution. We have shown that the Java programming language
satisfies these requirements.
Table 2: Parallelism assessment results

Design    Number of   Number of   Avg. bytecodes   Avg. bytecodes      Avg. bytecode
          threads     loops       per loop         per basic block     parallelism
decoder   3           28          27               7.1                 2.5
To be able to partition and eventually co-synthesize input
Java specifications, we must be able to analyze the
specification. However, a major problem facing this analysis
step in Java is the dynamic linking of class instances. To
make static analysis possible, we proposed a technique that
relies on aggressive reference analysis to resolve ambiguity
in global control and dataflow. This technique generates a
control dataflow graph representation for the specification.
Our results show that, using this technique, it is possible to
extract exploitable concurrency from the Java specification.
In the future, our analysis technique will serve as a
front-end to a co-design tool which maps the Java system
specification to a target architecture composed of one or
more microprocessors tightly coupled to programmable
hardware resources.
Acknowledgments
This work was sponsored by ARPA under grant no.
MIP DABT 63-95-C-0049.
References
"Hardware-Software Co- Design,"
"Specification and Design of Embedded Hardware-Software Systems,"
Specification and Design of Embedded Systems.
The Verilog Hardware Description Language.
IEEE Inc.
"The Esterel Synchronous Programming Language: Design, Semantics, Implementation,"
Compilers Principles
"Efficient Context-Sensitive Pointer Analysis for C Programs,"
The Java Language Specification.
The Java Virtual Machine Specification.
"Java Performance Advancing Rapidly,"
"Multiple-Process Behavioral Synthesis for Mixed Hardware-Software Systems,"
Enhanced H.
"An Efficient Implementation of Reactivity for Modeling Hardware in Scenic Design Environment,"
"Compiling Java Just in Time,"
"The V++ Systems Design Language,"
Jianwen Zhu , Daniel D. Gajski, OpenJ: an extensible system level design language, Proceedings of the conference on Design, automation and test in Europe, p.99-es, January 1999, Munich, Germany
James Shin Young , Josh MacDonald , Michael Shilman , Abdallah Tabbara , Paul Hilfinger , A. Richard Newton, Design and specification of embedded systems in Java using successive, formal refinement, Proceedings of the 35th annual conference on Design automation, p.70-75, June 15-19, 1998, San Francisco, California, United States
C. Schulz-Key , M. Winterholer , T. Schweizer , T. Kuhn , W. Rosenstiel, Object-oriented modeling and synthesis of SystemC specifications, Proceedings of the 2004 conference on Asia South Pacific design automation: electronic design and solution fair, p.238-243, January 27-30, 2004, Yokohama, Japan
Srgio Akira Ito , Luigi Carro , Ricardo Pezzuol Jacobi, System design based on single language and single-chip Java ASIP microcontroller, Proceedings of the conference on Design, automation and test in Europe, p.703-709, March 27-30, 2000, Paris, France
Marcello Dalpasso , Alessandro Bogliolo , Luca Benini, Virtual Simulation of Distributed IP-Based Designs, IEEE Design & Test, v.19 n.5, p.92-104, September 2002
Verkest , Joachim Kunkel , Frank Schirrmeister, System level design using C++, Proceedings of the conference on Design, automation and test in Europe, p.74-83, March 27-30, 2000, Paris, France
Marcello Dalpasso , Alessandro Bogliolo , Luca Benini, Specification and validation of disstributed IP-based designs with JavaCAD, Proceedings of the conference on Design, automation and test in Europe, p.132-es, January 1999, Munich, Germany
Axel Jantsch , Per Bjurus, Composite signal flow: a computational model combining events, sampled streams, and vectors, Proceedings of the conference on Design, automation and test in Europe, p.154-160, March 27-30, 2000, Paris, France
Brian Grattan , Greg Stitt , Frank Vahid, Codesign-extended applications, Proceedings of the tenth international symposium on Hardware/software codesign, May 06-08, 2002, Estes Park, Colorado
Marcello Dalpasso , Alessandro Bogliolo , Luca Benini, Virtual simulation of distributed IP-based designs, Proceedings of the 36th ACM/IEEE conference on Design automation, p.50-55, June 21-25, 1999, New Orleans, Louisiana, United States
Malay Haldar , Anshuman Nayak , Alok Choudhary , Prith Banerjee, A system for synthesizing optimized FPGA hardware from MATLAB, Proceedings of the 2001 IEEE/ACM international conference on Computer-aided design, November 04-08, 2001, San Jose, California
A. Nayak , M. Haldar , A. Choudhary , P. Banerjee, Precision and error analysis of MATLAB applications during automated hardware synthesis for FPGAs, Proceedings of the conference on Design, automation and test in Europe, p.722-728, March 2001, Munich, Germany
Annette Bunker , Ganesh Gopalakrishnan , Sally A. Mckee, Formal hardware specification languages for protocol compliance verification, ACM Transactions on Design Automation of Electronic Systems (TODAES), v.9 n.1, p.1-32, January 2004 | specification languages;hardware-software co-design |
Delay Bounded Buffered Tree Construction for Timing Driven Floorplanning

Abstract: As devices and lines shrink into the deep submicron range, the propagation delay of signals can be effectively improved by repowering the signals using intermediate buffers placed within the routing trees. Almost no existing timing driven floorplanning and placement approaches consider the option of buffer insertion. As such, they may exclude solutions, particularly early in the design process, with smaller overall area and better routability. In this paper, we propose a new methodology in which buffered trees are used to estimate wire delay during floorplanning. Instead of treating delay as one of the objectives, as done by the majority of previous work, we formulate the problem in terms of Delay Bounded Buffered Trees (DBB-tree) and propose an efficient algorithm to construct a DBB spanning tree for use during floorplanning. Experimental results show that the algorithm is very effective. Using buffer insertion at the floorplanning stage yields significantly better solutions in terms of both chip area and total wire length.

1 Introduction
In high speed design, long on-chip interconnects can be modeled as distributed delay lines,
where the delay of the lines can often be reduced by wire sizing or intermediate buffer
insertion. Simple wire sizing is one degree of freedom available to the designer, but often it
is ineffective due to area, routability, and capacitance considerations. On the other hand,
driver sizing and buffer insertion are powerful tools for reducing delay, given reasonable
power constraints. Intermediate buffers can effectively decouple a large load off of a critical
path or divide a long wire into smaller segments, each of which has less line resistance
and makes the path delay more linear with overall length. As the devices and lines shrink
into deep submicron, it is more effective, in terms of power, area, and routability, to insert
intermediate buffers than to rely solely on wire sizing.
Because floorplanning and placement have a significant impact on critical path delay,
research in the area has focused on timing driven approaches. Almost no existing floorplanning
and placement techniques consider the option of buffer insertion, particularly early
in the design cycle. Typically, only wire length or Elmore delay is used for delay calculation. This practice is too restrictive, as evidenced by the reliance that industry has placed on intermediate buffering as a means of achieving aggressive cycle times. It is commonplace
for production chips to contain tens of thousands of buffers. This paper attempts to leverage
the additional freedom gained by inserting buffers during floorplanning and placement.
The resulting formulation provides an additional degree of freedom not present in past
approaches and typically leads to solutions with smaller area and increased routability.
To incorporate buffer insertion into the early planning stages, we propose a new methodology of floorplanning and placement that uses buffered trees to estimate the wiring delay. We
formulate the Delay Bounded Buffered Tree (DBB-tree) problem as follows: Given a net
with delay bounds on the critical sinks that are associated with critical paths, construct
a tree with intermediate buffers inserted to minimize both the total wiring length and the
number of buffers, while satisfying the delay bounds. We propose an efficient algorithm
based on the Elmore delay model to construct DBB spanning trees for use during floorplanning
and placement. The experimental results of the DBB spanning tree show that using
buffer insertion at the floorplanning stage yields significantly better solutions in terms of
both chip area and total wiring length.
The remainder of the paper is organized as follows. Section 2 reviews related work on interconnect optimization and intermediate buffer insertion, and introduces the idea of our DBB spanning tree algorithm. Section 3 describes the DBB algorithm in detail. The experimental results of the DBB spanning tree algorithm, applied to signal nets and to general floorplanning, are given in Section 4, followed by conclusions in Section 5.
2 Related Works and Overview of DBB-tree Algorithm
2.1 Elmore Delay Model
As VLSI design reaches deep submicron, interconnect delay models have evolved from the
simplistic lumped RC model to the sophisticated high-order moment-matching delay model
[1]. The Elmore delay model [2] provides a simple closed-form expression with greatly
improved accuracy for delay compared to the lumped RC model. Elmore delay is the most commonly used delay model in recent work on interconnect design.
For each wire segment modeled as a π-type circuit, given the interconnect tree T, the Elmore delay from the source s_0 to sink s_i can be expressed as follows:

τ(0, i) = R_0 C_0 + Σ_{e(u,v) ∈ Path(0,i)} r l_{u,v} ( c l_{u,v} / 2 + C_v ),   (1)

where R_0 is the driver resistance at the source and C_0 is the total capacitance charged by the driver. Path(0, i) denotes the path from s_0 to s_i, and wire e(u, v) connects s_v to its parent s_u. Given a uniform wire width, r and c denote the unit resistance and unit capacitance, respectively; the wire resistance r l_{u,v} and wire capacitance c l_{u,v} are proportional to the wire length l_{u,v}. Let C_v denote the total capacitance of the subtree rooted at s_v, which is charged through wire e(u, v). The first term of τ(0, i) is linear in the total wire length of T, while the second term has quadratic dependence on the length of the path from the source to s_i.
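As a concrete illustration, Eq. 1 can be evaluated with one bottom-up pass (subtree capacitances) and one top-down pass (delays). The sketch below assumes a simple parent-map tree representation; all names are illustrative, not from the paper.

```python
def elmore_delays(parent, length, sink_cap, R0, r, c):
    """Elmore delay (Eq. 1) from the source (node 0) to every node.

    parent[v]   -> parent of node v (parent[0] is None)
    length[v]   -> length of the wire e(parent[v], v)
    sink_cap[v] -> loading capacitance at node v
    R0          -> driver output resistance at the source
    r, c        -> unit wire resistance / capacitance
    """
    children = {v: [] for v in parent}
    for v, p in parent.items():
        if p is not None:
            children[p].append(v)

    # Bottom-up pass: C[v] = total capacitance of the subtree rooted
    # at v, including the wire feeding each child subtree.
    C = {}
    def subtree_cap(v):
        C[v] = sink_cap[v] + sum(c * length[w] + subtree_cap(w)
                                 for w in children[v])
        return C[v]
    subtree_cap(0)

    # Top-down pass: add each wire term r*l*(c*l/2 + C[v]) of Eq. 1.
    delay = {0: R0 * C[0]}  # first term of Eq. 1: R0 * C0
    stack = [0]
    while stack:
        u = stack.pop()
        for v in children[u]:
            l = length[v]
            delay[v] = delay[u] + r * l * (c * l / 2 + C[v])
            stack.append(v)
    return delay
```

For a two-node net (source 0 driving one sink over a wire of length 10), the sink's delay is R_0 C_0 plus the single wire term of Eq. 1.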
2.2 Topology Optimization for Interconnect
From the previous discussion of Elmore delay, we can conclude that for interconnect topology
optimization, two major concerns are the total wire length and the path length from the
driver to the critical sinks. The early work of Cohoon and Randall [3] and Cong et al. [4] observed the existence of conflicting min-cost and min-radius (the longest source-to-sink path length of the tree) objectives in performance-driven routing [5].
A number of algorithms have been proposed to make the trade-offs between the total
wiring length and the radius of the Steiner or spanning tree [6, 7, 8, 9]. Cong et al. proposed the "Bounded Radius, Bounded Cost" (BRBC) spanning tree algorithm, which uses the shallow-light approach. BRBC constructs a routing tree with total wire length no greater than (1 + 2/ε) times that of a minimum spanning tree and radius no greater than (1 + ε) times that of a shortest path tree, where ε ≥ 0. Alpert et al. [10] proposed AHHK trees as a direct trade-off between Prim's MST algorithm and Dijkstra's shortest path tree algorithm. They used a parameter 0 ≤ c ≤ 1 to adjust the preference between tree length and path length.
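The AHHK trade-off can be sketched as a single greedy loop: each step attaches the outside point w minimizing c·d(v) + dist(v, w) over tree points v, so c = 0 degenerates to Prim's MST and c = 1 to a Dijkstra-like shortest path tree. This simplified spanning-tree sketch (Euclidean distance, illustrative names) is our own reading of the trade-off, not the authors' code.

```python
import math

def ahhk_tree(points, c):
    """Grow a spanning tree over points (index 0 = source), repeatedly
    adding the edge (v, w) that minimizes c * d[v] + dist(v, w), where
    d[v] is the tree path length from the source to v.  c = 0 mimics
    Prim's MST; c = 1 mimics Dijkstra's shortest path tree."""
    def dist(a, b):
        return math.hypot(points[a][0] - points[b][0],
                          points[a][1] - points[b][1])

    n = len(points)
    in_tree, d, parent = {0}, {0: 0.0}, {0: None}
    while len(in_tree) < n:
        v, w = min(((v, w) for v in in_tree
                    for w in range(n) if w not in in_tree),
                   key=lambda e: c * d[e[0]] + dist(*e))
        parent[w], d[w] = v, d[v] + dist(v, w)
        in_tree.add(w)
    return parent, d
```

With three points forming a 3-4-5 right triangle, c = 0 attaches the far corner through its nearest neighbor (minimum length), while c = 1 attaches it directly to the source (minimum path length).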
For deep submicron design, path length is no longer an accurate estimate of path delay.
Several attempts have been made to directly optimize Elmore delay taking into account
different loading capacitances of the sinks. With exponential time complexity, the branch-and-bound algorithms proposed by Boese et al. [11, 12] provide optimal and near-optimal solutions that minimize the delay from the source to an identified critical sink or to a set of critical sinks of a Steiner tree. For a set of critical sinks, they minimize a
linear combination of the sink delays. However it is very difficult to choose the proper
weights, or the criticality, for this linear combination. Hong et al. [13] proposed a modified
Dreyfus-Wagner Steiner tree algorithm for minimizing the maximal source-to-sink delay. The maximal source-to-sink delay, however, is not necessarily of interest when the corresponding sink is off the critical path. Also, there may be more than one critical sink in the same net, associated with multiple critical paths. Prasitjutrakul and Kubitz [14] proposed an
net associated with multiple critical paths. Prasitjutrakul and Kubitz [14] proposed an
algorithm for maximizing the minimal delay slack, where the delay slack is defined as the
difference between the real delay and the given delay bound at a sink.
2.3 Buffered Tree Construction
Intermediate buffer insertion creates another degree of freedom for interconnect optimiza-
tion. Early works on fanout optimization problem focused on the construction of buffered
trees during logic synthesis [15, 16, 17] without taking the wiring effect into account. Recently, layout-driven fanout optimization approaches have been proposed [18, 19]. For a given Steiner
tree, a polynomial time dynamic programming algorithm was proposed in [20] for the delay-optimal
buffer insertion problem. Using dynamic programming, Lillis et al. [21] integrated
wire sizing and power minimization with the tree construction under a more accurate delay
model taking signal slew into account. Inspired by the same dynamic programming
algorithm, Okamoto and Cong [22] proposed a simultaneous Steiner tree construction and
buffer insertion algorithm. Later the work was extended to include wire sizing [23]. In the
formulation of the problem [22, 23], the main objective is to maximize the required arrival
time at the root of the tree, which is defined as the minimum among the differences between
the arrival time of the sinks and the delay from the root to the sinks.
To achieve optimal delay, multiple buffers may be necessary for a single edge. An early
work of S. Dhar and M. Franklin [24] developed the optimal solution for the size, number
and position of buffers driving a uniform line that minimizes the delay of the line. The work
further considered the area occupied by the buffers as a constraint. Recently C. Alpert and
A. Devgan [25] calculated the optimal number of equally spaced buffers on a uniform wire
to minimize the Elmore delay of the wire.
2.4 Delay Minimized vs. Delay Bounded
Since timing driven floorplanning and placement are usually iterated with static timing
analysis tools, the critical path information is often available and the timing requirement for
critical sinks converges as the design and layout progresses. It is sufficient to have bounded
delay rather than minimized delay. On the other hand, the minimization of total wire length
is of interest since total wire length contributes to circuit area and routing congestion. In
addition, total wire capacitance contributes a significant factor to the switching power. The
reduction of wire length reduces circuit area, improves routability, and also reduces power consumption, all of which are important factors for manufacturing cost and fabrication yield [1].
In this paper, instead of minimizing the source to sink delays, we will present an algorithm
that constructs buffered spanning trees to minimize the total wire length subject to timing
constraints.
Zhu [26] proposed the "Delay Bounded Minimum Steiner Tree" (DBMST) algorithm to
construct a low cost Steiner tree with bounded delay at critical sinks. The DBMST algorithm
consists of two phases: (1) initialization of Steiner tree subject to timing constraints
and (2) iterative refinement of the topology to reduce the wiring length while satisfying
the delay bounds associated with critical sinks. Since the Elmore delays at sinks are very
sensitive to topology and they have to be recomputed every time the topology is changed,
DBMST algorithm searches all possible topological updates exhaustively at each iteration
and so it is very time consuming.
2.5 Overview of DBB-tree Algorithm
In this paper, we formulate the new Delay Bounded Buffered tree (DBB-tree) problem as
follows: Given a signal net and delay bounds associated with critical sinks, construct a
routing tree with intermediate buffers inserted to minimize the total wiring length and the
number of buffers while satisfying the delay bounds. Based on Elmore delay, we develop an
efficient algorithm for DBB spanning tree construction.
The DBB-tree algorithm consists of three phases: (1) Calculate the minimum Elmore
delay for each critical sink to allow immediate exclusion of floorplanning/placement solutions
that are clearly infeasible from a timing perspective; (2) Construct a buffered spanning tree
to minimize the total wire length subject to the bounded delay; (3) Based on the topology
obtained in (2), delete unnecessary buffers without violating timing constraints to minimize
the total number of buffers. The overall time complexity of DBB-tree algorithm is O(kn 2 ),
where k is the maximum number of buffers inserted on a single edge and n is the number of sinks in the net. Our DBB-tree algorithm makes the following three major contributions:
• Treating the delay bounds provided by static timing analysis tools as constraints rather than formulating the delay into the optimization objectives.

• Constructing a spanning tree and placing intermediate buffers simultaneously. The algorithm is very effective at minimizing both wire length and the number of buffers.

• Allowing more than one buffer to be inserted on each single edge and calculating the precise buffer positions for the optimal solution. In contrast, most previous work assumes at most one buffer is inserted per edge, at a fixed location.
3 Description of DBB-tree Algorithm
For floorplanning purpose, we assume uniform wire width. In the DBB-tree algorithm
presented here, we consider only non-inverting buffers. However, the algorithm can be
easily extended to handle inverting buffers. Given a signal net S = {s_0, s_1, ..., s_n}, with s_0 the source and s_1, ..., s_n the sinks, the geometric location of each terminal of S is determined by floorplanning. Let ~B = (t_b, r_b, c_b) denote the vector describing the parameters of the non-inverting buffers, in which t_b, r_b and c_b are the internal delay, resistance and capacitance of each buffer, respectively. Before presenting the detailed DBB-tree algorithm, we first state some theoretical results developed by Alpert and Devgan [25], which are used in the DBB-tree algorithm to calculate the number and positions of the identical buffers placed on a single edge to minimize the edge delay:
Figure 1: Given a uniform line e(0, i) connecting sink s_i to source s_0, ν(0, i) buffers are placed on the wire in such a way that the wire delay is minimized: the first buffer is α_ν away from source s_0, the distance between two adjacent buffers equals δ_ν, and the last buffer is β_ν away from sink s_i.

Theorem 1 Given a uniform line e(0, i) connecting sink s_i to source s_0, and the parameter vector ~B, the number of buffers placed on the wire to obtain the minimum Elmore delay of e(0, i) is given by:

ν(0, i) = max( 0, round( √( r c D² / (2 (t_b + r_b c_b)) ) − 1 ) ),
where D = l_{0,i} + (R_0 − r_b)/r + (c_i − c_b)/c,   (2)

where R_0 is the driver output resistance at source s_0 and c_i the loading capacitance at sink s_i. Given ν buffers inserted on e(0, i), the optimal placement, which obtains the minimum wire delay, places the buffers at equal spacing from each other. Let α_ν be the distance from the source to the first buffer, δ_ν the distance between two adjacent buffers, and β_ν the distance from the last buffer to sink s_i. They can be derived as follows:

δ_ν = D / (ν + 1),  α_ν = δ_ν − (R_0 − r_b)/r,  β_ν = δ_ν − (c_i − c_b)/c.   (3)

The minimized wire delay with ν buffers is given by:

τ_ν(0, i) = r c D² / (2 (ν + 1)) + D (r_b c + r c_b) + ν (t_b + r_b c_b) + K_{0,i},   (4)

where K_{0,i} collects the terms that do not depend on ν. If ν − 1 buffers instead of ν buffers are placed on wire e(0, i), the wire delay will be increased by:

Δ_ν(0, i) = τ_{ν−1}(0, i) − τ_ν(0, i) = r c D² / (2 ν (ν + 1)) − (t_b + r_b c_b).   (5)
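The closed form of Theorem 1 is unreadable in this copy; the sketch below implements our reconstruction of it under the Elmore model (extend the line by (R_0 − r_b)/r at the driver and (c_i − c_b)/c at the sink, then split the effective length D into ν + 1 equal segments). Treat the exact expressions as assumptions, not the printed ones.

```python
import math

def optimal_buffering(l, R0, ci, r, c, tb, rb, cb):
    """Reconstructed closed form of Theorem 1 (an assumption, derived
    from the Elmore model): extend the line by (R0 - rb)/r at the
    driver and (ci - cb)/c at the sink, then split the effective
    length D into nu + 1 equal segments."""
    a = (R0 - rb) / r           # driver-side extension
    b = (ci - cb) / c           # sink-side extension
    D = l + a + b
    nu = max(0, round(math.sqrt(r * c * D * D / (2 * (tb + rb * cb))) - 1))
    if nu == 0:
        return 0, None, None, None
    delta = D / (nu + 1)        # spacing between adjacent buffers
    alpha = delta - a           # source to first buffer
    beta = delta - b            # last buffer to sink
    return nu, alpha, delta, beta
```

With the Table 1 parameters in consistent units (Ω, pF, μm, ps), the returned segments always sum back to the physical line length, and α_ν < δ_ν whenever R_0 > r_b.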
Figure 2: If we place a buffer right after s_0, as in (a), the total capacitance driven by the driver at the source is reduced to c_b and the first term of τ(0, i) equals R_0 c_b. The second term, the propagation delay of the path from the source to s_i, can be minimized by directly connecting s_i to the source and placing ν(0, i) buffers on the wire, as in (b). Combining (a) and (b), we calculate the lower bound of the Elmore delay for s_i.
By replacing R_0 with 0, Equations 2–5 can be applied to the wire connecting any two sinks in the routing tree T. Based on the theoretical results discussed above, we present the detailed DBB-tree algorithm in the following sections.
3.1 Lower Bound of Elmore Delay for Critical Sinks
The first phase of the DBB-tree algorithm calculates the lower bound of the Elmore delay for each sink s_i. It may not be possible to achieve this delay simultaneously for all sinks, but no achievable delay can be below it. The floorplanning is timing infeasible if there exists an s_i in S such that the lower bound τ*(0, i) is greater than the given delay bound D_i:

τ*(0, i) > D_i.   (6)

The first term in Eq. 1, R_0 C_0, can be reduced to R_0 c_b by placing a buffer right after s_0, as shown in Fig. 2(a). And the second term, the propagation delay of the path from the source to s_i, can be minimized by directly connecting the source to s_i and placing buffers as shown in Fig. 2(b). Formally, the lower bound of the Elmore delay for s_i can be given by:

τ*(0, i) = R_0 c_b + t_b + τ_ν(0, i),   (7)

where τ_ν(0, i) is evaluated for the direct source-to-sink line with the inserted buffer (output resistance r_b) as its driver.
If, for all sinks in S, the lower bound of the Elmore delay is less than the given delay bound, the algorithm continues to phases 2 and 3; otherwise the timing constraints are too tight for the given floorplanning and the solution is excluded.

Figure 3: For a particular sink s_v, e(u−1, u) is the last buffered edge on the path from the source to s_v, and the last buffer on edge e(u−1, u) drives T_v through the resistance between the buffer and s_v, defined as the driving resistance of T_v and denoted R(T_v). Since there is no buffer between s_u and s_v, the driver of T_v also drives T_i for i = u, ..., v, where s_{u+1}, ..., s_{v−1} are the intermediate sinks from s_u to s_v. After adding the new edge e(v, w), the loading capacitance of T_v is increased by ΔC_v, and the Elmore delay of the sinks in T̄_i = T_i − T_{i+1} will be increased by R(T_i)ΔC_v. On the other hand, due to the buffers on edge e(u−1, u), the added load does not affect the delay of sinks that are not in T_u. Therefore the timing constraints of T are satisfied if and only if the timing constraints of T_u are satisfied.
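The per-sink lower bound of Section 3.1 can be approximated numerically by decoupling the driver with one buffer and then taking the best buffer count for a direct source-to-sink line. The sketch below uses equal spacing rather than the optimal α/δ/β placement of Theorem 1, so it slightly overestimates the true minimum; the function names and the brute-force search over k are ours.

```python
def line_delay(l, Rd, ci, k, r, c, tb, rb, cb):
    """Elmore delay of a length-l line driven by resistance Rd with k
    equally spaced buffers (uniform spacing, for simplicity)."""
    seg = l / (k + 1)
    t = 0.0
    for i in range(k + 1):
        drv = Rd if i == 0 else rb          # stage driver resistance
        load = cb if i < k else ci          # next buffer or the sink
        t += ((tb if i > 0 else 0.0)
              + drv * (c * seg + load)
              + r * seg * (c * seg / 2 + load))
    return t

def delay_lower_bound(l, R0, ci, r, c, tb, rb, cb, kmax=20):
    """Sketch of the Section 3.1 bound: decouple the driver with one
    buffer (R0*cb + tb), then take the best direct buffered line from
    that buffer to the sink over k = 0..kmax buffers."""
    best = min(line_delay(l, rb, ci, k, r, c, tb, rb, cb)
               for k in range(kmax + 1))
    return R0 * cb + tb + best
```

On a long line with the Table 1 parameters, allowing intermediate buffers gives a visibly smaller bound than the unbuffered case (kmax = 0).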
3.2 DBB Spanning Tree Construction
The second phase of the DBB-tree algorithm constructs a buffered spanning tree to minimize the total wire length subject to the timing constraints. Similar to Prim's MST algorithm, it starts with the trivial tree T = {s_0}. Iteratively, an edge e(v, w) with ν(v, w) buffers is added to T, where s_v ∈ T and s_w ∈ S − T are chosen such that l_{v,w} is minimized and the timing constraints are satisfied. T grows incrementally until it spans all terminals of S, or until no edge e(v, w) can be added without violating the timing constraints. In the latter case, the floorplanning is considered timing infeasible and the solution is excluded.
delay bound at each critical sink is satisfied. For particular edge e(v; w) where s
the number and the precise positions of buffers inserted on the edge which
minimize the edge delay can be calculated according to Equations 2 and 3. Let T v denote
the subtree rooted at s v , after adding edge e(v; w) into T , the loading capacitance of T v , is
increased by \DeltaC
cl
denote the last buffered edge on the path from the source to s v as shown in
Fig. 3, the last buffer on edge . If there is no buffer from the source
to s v , the source drives T v . According to Elmore delay, T v is driven through the resistance
between the driver and s v , defined as driving resistance of T v , denoted by R(T v ). Given
s v\Gamma1 is the parent of s v , R(T v ) can be calculated as follows:
Since there is no buffer on the path from s_u to s_v, the driver of T_v also drives T_i for i = u, ..., v, where s_{u+1}, ..., s_{v−1} are the intermediate sinks from s_u to s_v, as shown in Fig. 3. Let T̄_i denote the set of sinks in subtree T_i but not in T_{i+1}. Due to the increased loading capacitance ΔC_v of T_v, the Elmore delay of the sinks in T̄_i is increased by R(T_i)ΔC_v. On the other hand, due to the buffers on edge e(u−1, u), the increased loading capacitance of T_v does not affect the delay of sinks that are not in T_u. We define the delay slack of a sink s ∈ T̄_i as:

slack(s) = D_s − τ(0, s),   (10)

and the delay slack of T̄_i to be:

slack(T̄_i) = min_{s ∈ T̄_i} slack(s).   (11)

The timing constraints will be satisfied for the sinks in T̄_i if and only if the following condition holds:

R(T_i) ΔC_v ≤ slack(T̄_i).   (12)

By introducing the loading capacitance slack of each subtree:

σ(T_i) = slack(T̄_i) / R(T_i),   (13)

Eq. 12 can be rewritten as:

ΔC_v ≤ σ(T_i).   (14)

Let σ*(v) denote the minimum slack of loading capacitance among the subtrees T_i for i = u, ..., v:

σ*(v) = min_{u ≤ i ≤ v} σ(T_i);   (15)

the condition in Eq. 14 can then be simply rewritten as:

σ*(v) ≥ ΔC_v.   (16)

By keeping track of σ*(v), this condition can be checked in constant time. The Elmore delay of s_w can be calculated from the Elmore delay of s_v:

τ(0, w) = τ(0, v) + τ_ν(v, w),   (17)

where τ_ν(v, w) is calculated from Eq. 4, so the timing bound at s_w can also be checked in constant time. From the above analysis, we can conclude that the necessary and sufficient condition for satisfying the timing constraints of T after adding the new edge e(v, w) is:

σ*(v) ≥ ΔC_v and D_w ≥ τ(0, w),   (18)

and this condition can be checked in constant time.
At each iterative step of the DBB-tree construction, s_v ∈ T and s_w ∈ S − T can be selected in linear time such that l_{v,w} is minimum and the timing constraints are satisfied. After adding the new edge e(v, w), a two-pass traversal of T is sufficient to update the delay slack and loading capacitance slack of each subtree in T: (1) traverse T bottom-up and calculate the delay slack and loading capacitance slack of each subtree T_i according to Equations 11 and 13; (2) traverse T top-down and calculate σ*(i) from σ*(i−1), given that s_{i−1} is the parent of s_i:

σ*(i) = min( σ*(i−1), σ(T_i) ).   (19)

Since each new edge can be added into T in linear time, the overall DBB spanning tree can be constructed in O(n²) time for a net S with n sinks.
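A simplified reference version of phase 2 can be written by recomputing the Elmore delays from scratch for each candidate edge instead of maintaining the σ* bookkeeping, and by omitting buffer insertion. This is slower than the O(n²) algorithm in the text but makes the accept/reject logic explicit; all names are illustrative.

```python
def dbb_spanning_tree(points, bounds, R0, sink_cap, r, c):
    """Delay-bounded spanning tree, simplified sketch of phase 2:
    grow the tree Prim-style, always trying the shortest connecting
    edge first, and accept it only if every sink still meets its
    delay bound.  Buffer insertion and the incremental slack updates
    of the paper are omitted; delays are recomputed from scratch."""
    n = len(points)

    def dist(a, b):  # Manhattan wire length between terminals a and b
        return abs(points[a][0] - points[b][0]) + abs(points[a][1] - points[b][1])

    def delays(parent):
        """Elmore delay (Eq. 1) of every node for a given parent map."""
        children = {v: [] for v in parent}
        for v, p in parent.items():
            if p is not None:
                children[p].append(v)
        C = {}
        def cap(v):  # total subtree capacitance, wires included
            C[v] = sink_cap[v] + sum(c * dist(v, ch) + cap(ch)
                                     for ch in children[v])
            return C[v]
        cap(0)
        tau, stack = {0: R0 * C[0]}, [0]
        while stack:
            u = stack.pop()
            for v in children[u]:
                l = dist(u, v)
                tau[v] = tau[u] + r * l * (c * l / 2 + C[v])
                stack.append(v)
        return tau

    parent = {0: None}
    while len(parent) < n:
        # candidate edges (v in tree, w outside), shortest first
        cands = sorted(((v, w) for v in parent for w in range(n)
                        if w not in parent), key=lambda e: dist(*e))
        for v, w in cands:
            trial = dict(parent)
            trial[w] = v
            tau = delays(trial)
            if all(tau[s] <= bounds[s] for s in trial if s != 0):
                parent = trial
                break
        else:
            return None  # timing infeasible: no edge can be added
    return parent
```

With loose bounds the result is the MST-like chain; tightening the bound on the far sink forces the algorithm to spend extra wire on a direct, shorter-delay connection, which is exactly the trade-off phase 2 exploits.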
3.3 Buffer Deletion
In phase 2, one or more buffers are inserted on each edge to minimize wire delay. Some
of the buffers may not be necessary for meeting the delay bound. The third phase of the
Figure 4: In the case ν(v, w) = 1, shown in (a), edge e(v, w) becomes an unbuffered edge after deleting the buffer, and the load capacitance of subtree T_v is increased by ΔC_v = c l_{v,w} + C_w − (c α_1 + c_b). In the case ν(v, w) > 1, ν − 1 buffers are re-inserted on e(v, w), as shown in (b).
DBB-tree algorithm deletes buffers from the spanning tree obtained in the second phase to
reduce the total number of buffers. In general the buffers closest to the source can unload
the critical path the most. The algorithm traverses T bottom up and deletes one buffer at
a time without violating the timing constraints. The deletion continues until all the buffers left in T are necessary; that is, the timing constraints would no longer be satisfied if one more buffer were deleted.
For a particular edge e(v, w) with ν > 0 buffers, if one buffer is deleted from e(v, w), the wire delay will be increased by Δ_ν(v, w) according to Eq. 5, and the remaining ν − 1 buffers will be re-inserted at their optimal positions. In the case ν = 1, shown in Fig. 4(a), wire e(v, w) becomes an unbuffered edge after deleting the buffer, and the load capacitance of subtree T_v is increased by ΔC_v = c l_{v,w} + C_w − (c α_1 + c_b). In the case ν > 1, ν − 1 buffers are re-inserted on edge e(v, w), as shown in Fig. 4(b), and ΔC_v = c (α_{ν−1} − α_ν).
Similar to phase 2, let e(u−1, u) denote the last buffered edge from the source to s_v. The delay of the sinks in subtree T_u will be increased due to the increased loading capacitance of T_v. In addition, the delay of the sinks in subtree T_w will be further increased due to the increased edge delay of e(v, w). Based on the analysis in phase 2, a buffer can be deleted without causing a timing violation if and only if the following condition holds:

σ*(v) ≥ ΔC_v and Δ_ν(v, w) ≤ slack(T_w).   (20)
Table 1: Experimental Parameters of DBB-tree Algorithm on Signal Nets

Output Resistance of Driver    R_0   500 Ω – 1000 Ω
Unit Wire Resistance           r     0.12 Ω/μm
Unit Wire Capacitance          c     0.15 fF/μm
Output Resistance of Buffer    r_b   500 Ω
Loading Capacitance of Buffer  c_b   0.05 pF
Intrinsic Delay of Buffer      t_b   0.1 ns
Loading Capacitance of Sink    c_i   0.05 pF – 0.15 pF
Therefore the timing constraints of T can be evaluated in constant time when deleting a buffer from edge e(v, w). The buffer to delete can be found by searching at most n − 1 edges. After deleting a buffer, the delay slack and loading capacitance slack of the subtrees in T are incrementally updated in O(n) time, as in phase 2, so one buffer can be deleted in linear time. There are at most kn buffers in T, where k is the maximum number of buffers on a single edge, so the time complexity of buffer deletion is O(kn²), which dominates the overall DBB-tree algorithm. The following experimental results show that the buffer deletion effectively minimizes the total number of buffers; it can delete more than 90% of the buffers inserted in the previous phase.
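Phase 3 can be sketched generically: given any predicate that evaluates whether a buffer set still meets every delay bound (e.g., via Elmore delay), greedily drop one buffer at a time while the bounds hold. The paper does this in O(kn²) with incremental slack updates; this sketch simply recomputes feasibility, and `meets_bounds` is an assumed callback, not part of the paper's interface.

```python
def delete_buffers(buffers, meets_bounds):
    """Greedy phase-3 sketch: repeatedly remove any buffer whose
    removal keeps every sink within its delay bound; stop when every
    remaining buffer is necessary.  meets_bounds(buffer_set) is an
    assumed callback that evaluates the tree timing for that set."""
    kept = set(buffers)
    changed = True
    while changed:
        changed = False
        for b in sorted(kept):              # deterministic scan order
            if meets_bounds(kept - {b}):
                kept = kept - {b}
                changed = True
    return kept
```

For example, with a toy predicate that declares timing met whenever at least two buffers remain, the scan removes the lowest-numbered buffers first and stops as soon as no single removal is feasible.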
4 Experimental Results
In the first part of the experiments, we implemented the DBB spanning tree algorithm on
a Sun SPARC 20 workstation under the C/UNIX environment. The algorithm was tested
on signal nets with 2, 5, 10, 25, 50 and 100 pins. For each net size, 100 nets were randomly generated on a 10mm × 10mm routing region, and we report the average results. The driver output resistance at the source and the loading capacitances of the sinks are randomly chosen from the ranges given in Table 1 (500 Ω – 1000 Ω and 0.05 pF – 0.15 pF), respectively. The parameters used in the experiments are based on [22] and are summarized in Table 1.
The average results of the DBB spanning tree construction are shown in Table 2. The
delay bounds of critical sinks for each net size are randomly chosen from the interval titled
"Delay Bounds". The average wire length and number of buffers for DBB spanning tree are
reported in this table. The average CPU time consumed per net shows that the DBB spanning tree algorithm is fast enough to be applied during stochastic optimization.
Table 2: Experimental Results of DBB Spanning Trees on Signal Nets

Pins (#) | Delay Bounds (ns) | Wire Length (mm) | Buffers (#) | CPU (sec)
To evaluate the DBB spanning trees generated by the experiments, we constructed
both minimum spanning tree (MST) and shortest path tree (SPT) for the same signal
nets using the same parameters. The comparison of the average results is shown in Table
3. "DBB/MST" and "DBB/SPT" is the average length ratio of DBB-tree to MST and
DBB-tree to SPT respectively. The column "% sinks meeting bound" gives the average
percentage of critical sinks which satisfy the delay bounds. For nets with a small number of terminals, the length of the DBB-tree is very close to that of the MST. As the number of terminals in the nets increases, the length ratio of DBB-tree to MST increases, but only 9% down to 0% of the critical sinks can meet the bound in the MST for 25-pin through 100-pin nets. It can be concluded that it is very difficult to satisfy the timing constraints using an MST, especially for large nets. On the other hand, the length ratio of DBB-tree to SPT decreases from 1.0 down to 0.24, and the SPT is also not ideal for meeting the delay bounds on large nets. The DBB-tree approach achieves short wire length with 100% of the critical sinks meeting the delay bounds.
In Table 4, the average number of buffers inserted in the DBB spanning trees is listed; the result is very reasonable considering the number of terminals in the net. To evaluate the buffer deletion algorithm, we compare the average number of buffers inserted in the DBB spanning tree before and after buffer deletion. The percentage of buffers reduced by the third phase of the DBB-tree algorithm is as high as 79% through 93%. The results presented in Table 4 demonstrate that the third phase of the algorithm is quite effective at removing any unnecessary buffers inserted during phase 2 and that the DBB-tree algorithm does not lead to unrealistic, impractical results.
In the second part of the experiments, we apply DBB-tree to evaluate the wiring delay of
floorplanning solutions considered by the Genetic Simulated Annealing method [27].

Table 3: Comparison of DBB-tree, MST and SPT of Signal Nets.
Pins (#) | Length (mm): DBB, MST, DBB/MST, SPT, DBB/SPT | % sinks meeting bound: DBB, MST, SPT

Table 4: Average Number of Buffers Before vs. After Buffer Deletion.
Pins (#) | w/o Deletion | with Deletion | Reduced (%)

Table 5: Four Examples of Floorplanning Applying the DBB-tree Algorithm.
Blocks (#) | Block size (mm) | Aspect ratio of blocks | Nets (#) | Net size (#pins/net) | Delay bound (ns) | CPU (min.)

Table 6: Achieved Floorplanning Solutions by Using the DBB-tree, MST and SPT Approaches.
Blocks (#) | Area (mm²): DBB, MST, SPT | Wire Length (mm): DBB, MST, SPT | % sinks meeting bound: DBB, MST, SPT
100 | 213.57, 274.77, 274.02 | 6039.93, 7037.06, 16339.61 | 100, 90.82, 95.61

Table 5 presents four examples which include 10, 25, 50 and 100 rectangular blocks, respectively.
The sizes (widths and heights) and aspect ratios of blocks are randomly chosen within a
nominal range. Netlists are also randomly generated for the four examples. The technology
parameters are consistent with those shown in Table 1.
To compare with traditional approaches which do not consider buffer insertion during
floorplanning, we also apply the MST and SPT methods to evaluate the floorplanning
solutions for the same examples. Based on the same stochastic search strategy, the floorplanning
solutions achieved by the three methods are shown in Table 6. Similarly, the column
"% sinks meeting bound" measures the percentage of critical sinks which satisfy the tim-
Table
7: The Improvement by Considering Buffer Insertion in Floorplanning Stage.
Blocks Area Improvement(%) Wire Length Improvement(%) Buffers(#)
(#) DBB vs. MST DBB vs. SPT DBB vs. MST DBB vs. SPT in DBB
Figure
5: Floorplanning of 50 blocks with 150 nets sized from 2-pin to 25-pin. SPT is
applied to evaluated the wiring delay. Achieved chip area is 124:38mm 2 and total wire
length 2696:10mm with 97:7% critical sinks meeting the delay bounds.
ing bounds. Table 7 calculates the improvement of both chip area and total wire length
by using the DBB-tree method. For these examples, the area can be improved by up to 31% over
MST and 22% over SPT, respectively. On the other hand, the total wire length can be
improved up to 19% over MST and 63% over SPT, respectively. This substantial improvement
demonstrates that using buffer insertion at the floorplanning stage yields significantly
better solutions in terms of both chip area and total wire length. In addition, the total
number of buffers estimated by the DBB-tree approach is also shown in this table. Figures
5 and 6 show the floorplanning solutions for 50 blocks using the SPT and DBB-tree
algorithms, respectively. In addition, Fig. 6 also displays the buffers estimated by the
DBB-tree approach. It should be noted that future research is needed to extend the approach
to distribute buffers into the empty space between macros subject to timing constraints.
However, the area of such buffers is typically a small fraction of a given macro's area and can
usually be accommodated.
5 Conclusion
In this paper, we propose a new methodology of floorplanning and placement where intermediate
buffer insertion is used as another degree of freedom in the delay calculation. An
efficient algorithm to construct Delay Bounded Buffered (DBB) spanning trees has been
developed. One of the key reasons this approach is effective is that we treat the delay bounds
as constraints rather than formulating the delay into the optimization objectives as is done
Figure 6: Floorplanning of the same example as in Fig. 5. DBB-tree is applied to evaluate the
wiring delay. Achieved chip area is 112.59 mm² and total wire length is 1455.47 mm, with 100%
of critical sinks meeting the delay bounds. The area and total wire length are improved by
9.48% and 46.02%, respectively. The dots shown in the figure represent the buffers estimated
by DBB-tree.
in most of the previous work. In fact, our problem formulation is more realistic for
path-based timing-driven layout design. The timing constraints of a floorplan are evaluated
many times during our stochastic optimization process. The efficient DBB spanning tree
algorithm makes our buffered-tree-based floorplanning and placement highly effective and
practically applicable to industrial problems.
--R
"The transient response of damped linear networks with particular regard to wide-band amplifiers,"
"Critical net routing,"
"A new class of iterative steiner tree heuristics with good perfor- mance,"
"A direct combination of the prim and dijkstra constructions for improved performance-driven global routing,"
"Performance-Driven interconnect design based on distributed RC delay model,"
"Performance oriented rectilinear steiner trees,"
"Bounded-diameter spanning tree and related problems,"
"Prim-Dijkstra tradeoffs for improved performance-Driven routing tree design,"
"Rectilinear steiner trees with minimum elmore delay,"
"High-Performance routing trees with identified critical sinks,"
"Performance-Driven steiner tree algorithms for global routing,"
"A timing-Driven global router for custom chip design,"
"A heuristic algorithm for the fanout problem,"
"Performance oriented technology mapping,"
"The fanout problem: From theory to practice,"
"A methodology and algorithms for post-Placement delay optimization,"
"Routability-Driven fanout optimization,"
"Buffer placement in distributed RC-tree networks for minimal elmore delay,"
"Optimal and efficient buffer insertion and wire sizing,"
"Interconnect layout optimization by simultaneous steiner tree construction and buffer insertion,"
"Buffered steiner tree construction with wire sizing for interconnect layout optimization,"
"Optimum buffer circuits for driving long uniform lines,"
"Wire segmenting for improved buffer insertion,"
Chip and Package Co-Synthesis of Clock Networks
"Genetic simulated annealing and application to non-slicing floorplan design,"
--TR
Bounded diameter minimum spanning trees and related problems
The fanout problem: from theory to practice
Performance-oriented technology mapping
A heuristic algorithm for the fanout problem
Performance oriented rectilinear Steiner trees
Performance-driven Steiner tree algorithm for global routing
High-performance routing trees with identified critical sinks
Routability-driven fanout optimization
Performance-driven interconnect design based on distributed RC delay model
A methodology and algorithms for post-placement delay optimization
Rectilinear Steiner trees with minimum Elmore delay
Buffered Steiner tree construction with wire sizing for interconnect layout optimization
Wire segmenting for improved buffer insertion
Performance-Driven Global Routing for Cell Based ICs
Critical Net Routing
Chip and package cosynthesis of clock networks
--CTR
Weiping Shi , Zhuo Li, An O(nlogn) time algorithm for optimal buffer insertion, Proceedings of the 40th conference on Design automation, June 02-06, 2003, Anaheim, CA, USA
Yuantao Peng , Xun Liu, Power macromodeling of global interconnects considering practical repeater insertion, Proceedings of the 14th ACM Great Lakes symposium on VLSI, April 26-28, 2004, Boston, MA, USA
Xun Liu , Yuantao Peng , Marios C. Papaefthymiou, Practical repeater insertion for low power: what repeater library do we need?, Proceedings of the 41st annual conference on Design automation, June 07-11, 2004, San Diego, CA, USA
Ruiming Chen , Hai Zhou, Efficient algorithms for buffer insertion in general circuits based on network flow, Proceedings of the 2005 IEEE/ACM International conference on Computer-aided design, p.322-326, November 06-10, 2005, San Jose, CA
Charles J. Alpert , Anirudh Devgan , Stephen T. Quay, Buffer insertion for noise and delay optimization, Proceedings of the 35th annual conference on Design automation, p.362-367, June 15-19, 1998, San Francisco, California, United States
I-Min Liu , Adnan Aziz , D. F. Wong, Meeting delay constraints in DSM by minimal repeater insertion, Proceedings of the conference on Design, automation and test in Europe, p.436-440, March 27-30, 2000, Paris, France
Hur , Ashok Jagannathan , John Lillis, Timing driven maze routing, Proceedings of the 1999 international symposium on Physical design, p.208-213, April 12-14, 1999, Monterey, California, United States
Jason Cong , Tianming Kong , David Zhigang Pan, Buffer block planning for interconnect-driven floorplanning, Proceedings of the 1999 IEEE/ACM international conference on Computer-aided design, p.358-363, November 07-11, 1999, San Jose, California, United States
Probir Sarkar , Vivek Sundararaman , Cheng-Kok Koh, Routability-driven repeater block planning for interconnect-centric floorplanning, Proceedings of the 2000 international symposium on Physical design, p.186-191, May 2000, San Diego, California, United States
Jason Cong , Tianming Kong , Zhigang (David) Pan, Buffer block planning for interconnect planning and prediction, IEEE Transactions on Very Large Scale Integration (VLSI) Systems, v.9 n.6, p.929-937, 12/1/2001
Feodor F. Dragan , Andrew B. Kahng , Ion Mndoiu , Sudhakar Muddu , Alexander Zelikovsky, Provably good global buffering using an available buffer block plan, Proceedings of the 2000 IEEE/ACM international conference on Computer-aided design, November 05-09, 2000, San Jose, California
Ali Selamat , Sigeru Omatu, Web page feature selection and classification using neural networks, Information SciencesInformatics and Computer Science: An International Journal, v.158 n.1, p.69-88, January 2004
Dian Zhou , Rui-Ming Li, Design and verification of high-speed VLSI physical design, Journal of Computer Science and Technology, v.20 n.2, p.147-165, March 2005 | Total Wire Length;MST;floorplanning;DBB-tree;SPT;elmore delay;buffer insertion;delay bounds |
266802 | Path-based next trace prediction. | The trace cache has been proposed as a mechanism for providing increased fetch bandwidth by allowing the processor to fetch across multiple branches in a single cycle. But to date predicting multiple branches per cycle has meant paying a penalty in prediction accuracy. We propose a next trace predictor that treats the traces as basic units and explicitly predicts sequences of traces. The predictor collects histories of trace sequences (paths) and makes predictions based on these histories. The basic predictor is enhanced to a hybrid configuration that reduces performance losses due to cold starts and aliasing in the prediction table. The Return History Stack is introduced to increase predictor performance by saving path history information across procedure call/returns. Overall, the predictor yields about a 26% reduction in misprediction rates when compared with the most aggressive previously proposed, multiple-branch-prediction methods. | Introduction
Current superscalar processors fetch and issue four to
six instructions per cycle - about the same number as in
an average basic block for integer programs. It is obvious
that as designers reach for higher levels of instruction
level parallelism, it will become necessary to fetch more
than one basic block per cycle. In recent years, there have
been several proposals put forward for doing so [3,4,12].
One of the more promising is the trace cache [9,10],
where dynamic sequences of instructions, containing
embedded predicted branches, are assembled as a
sequential "trace" and are saved in a special cache to be
fetched as a unit.
Trace cache operation can best be understood via an
example. Figure 1 shows a program's control flow graph
(CFG), where each node is a basic block, and the arcs
represent potential transfers of control. In the figure, arcs
corresponding to branches are labeled to indicate taken
(T) and not taken (N) paths. The sequence ABD
represents one possible trace which holds the instructions
from the basic blocks A, B, and D. This would be the
sequence of instructions beginning with basic block A
where the next two branches are not taken and taken,
respectively. These basic blocks are not contiguous in the
original program, but would be stored as a contiguous
block in the trace cache. A number of traces can be extracted
from the CFG - four possible traces are:
1: ABD
2: ACD
3: EFG
4: EG
Of course, many other traces could also be chosen for the
same CFG, and, in fact, a trace does not necessarily have
to begin or end at a basic block boundary, which further
increases the possibilities. Also, note that in a trace
cache, the same instructions may appear in more than one
trace. For example, the blocks A, D, E, and G each
appear twice in the above list of traces. However, the
mechanism that builds traces should use some heuristic to
reduce the amount of redundancy in the trace cache;
beginning and ending on basic block boundaries is a good
heuristic for doing this.
Figure 1: Example control flow graph (nodes are basic blocks A through G).
Associated with the trace cache is a trace fetch unit,
which fetches a trace from the cache each cycle. To do
this in a timely fashion, it is necessary to predict what the
next trace will be. A straightforward method, and the one
used in [9,10], is to predict simultaneously the multiple
branches within a trace. Then, armed with the last PC of
the preceding trace and the multiple predictions, the fetch
unit can access the next trace. In our example, if trace 1 -
ABD - is the most recently fetched trace, and a multiple
branch predictor predicts that the next three branch
outcomes will be T,T,N, then the next trace will implicitly
be ACD.
In this paper, we take a different approach to next
trace prediction - we treat the traces as basic units and
explicitly predict sequences of traces. For example,
referring to the above list of traces, if the most recent trace
is trace 1, then a next trace predictor might explicitly
output "trace 2." The individual branch predictions
T,T,N, are implicit.
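The distinction can be made concrete with a toy sketch of the running example: an implicit scheme maps predicted branch outcomes to the next trace, while an explicit next-trace predictor names the trace directly. The trace names follow Figure 1; the dictionary itself is purely illustrative.

```python
# Toy illustration of the running example: with trace ABD just
# fetched, a multiple-branch prediction of T,T,N implicitly selects
# trace ACD, whereas an explicit next-trace predictor outputs the
# trace name directly.
next_trace_from_branches = {("T", "T", "N"): "ACD"}   # implicit scheme
explicit_prediction = "ACD"                            # explicit scheme

assert next_trace_from_branches[("T", "T", "N")] == explicit_prediction
```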
We propose and study next trace predictors that
collect histories of trace sequences and make predictions
based on these histories. This is similar to conditional
branch prediction where predictions are made using
histories of branch outcomes. However, each trace
typically has more than two successors, and often has
many more. Consequently, the next trace predictor keeps
track of sequences of trace identifiers, each identifier
containing multiple bits. We propose a basic predictor
and then add enhancements to reduce performance losses
due to cold starts, procedure call/returns, and interference
due to aliasing in the prediction table. The proposed
predictor yields substantial performance improvement
over the previously proposed, multiple-branch-prediction
methods. For the six benchmarks that we studied the
average misprediction rate is 26% lower for the proposed
predictor than for the most aggressive previously
proposed multiple-branch predictor.
2. Previous work
A number of methods for fetching multiple basic
blocks per cycle have been proposed. Yeh et al. [12]
proposed a Branch Address Cache that predicted multiple
branch target addresses every cycle. Conte et al. [3]
proposed an interleaved branch target buffer to predict
multiple branch targets and detect short forward branches
that stay within the same cache line. Both these methods
use conventional instruction caches, and both fetch
multiple lines based on multiple branch predictions.
Then, after fetching, blocks of instructions from different
lines have to be selected, aligned and combined - this can
lead to considerable delay following instruction fetch. It
is this complex logic and delay in the primary pipeline
that the trace cache is intended to remove. Trace caches
[9,10] combine blocks of instructions prior to storing
them in the cache. Then, they can be read as a block and
fed up the pipeline without having to pass through
complex steering logic.
Branch prediction in some form is a fundamental part
of next trace prediction (either implicitly or explicitly).
Hardware branch predictors predict the outcome of
branches based on previous branch behavior. At the heart
of most branch predictors is a Pattern History Table
(PHT), typically containing two-bit saturating counters
[11]. The simplest way to associate a counter with a
branch instruction is to use some bits from the PC address
of the branch, typically the least significant bits, to index
into the PHT [11]. If the counter's value is two or three,
the branch is predicted to be taken, otherwise the branch
is predicted to be not taken.
Correlated predictors can increase the accuracy of
branch prediction because the outcome of a branch tends
to be correlated with the outcome of previous branches
[8,13]. The correlated predictor uses a Branch History
Register (BHR). The BHR is a shift register that is
usually updated by shifting in the outcome of branch
instructions - a one for taken and a zero for not taken. In
a global correlated predictor there is a single BHR that is
updated by all branches. The BHR is combined with
some bits (possibly zero) from a branch's PC address,
either by concatenating or using an exclusive-or function,
to form an index into the PHT. With a correlated
predictor a PHT entry is associated not only with a branch
instruction, but with a branch instruction in the context of
a specific BHR value. When the BHR alone is used to
index into the PHT, the predictor is a GAg predictor [13].
When an exclusive-or function is used to combine an
equal number of bits from the BHR and the branch PC
address, the predictor is a GSHARE predictor [6].
GSHARE has been shown to offer consistently good
prediction accuracy.
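To make the GSHARE scheme described above concrete, the following sketch XORs a global history register with bits of the branch PC to index a table of two-bit saturating counters. The table size and the choice of PC bits are illustrative assumptions, not parameters taken from this paper.

```python
# Sketch of GSHARE indexing with a PHT of 2-bit saturating counters.
# Sizes and PC-bit selection are illustrative assumptions.
HIST_BITS = 14
PHT_SIZE = 1 << HIST_BITS

pht = [1] * PHT_SIZE        # 2-bit counters, initialized weakly not-taken
bhr = 0                     # global branch history register

def predict(pc):
    idx = (bhr ^ (pc >> 2)) & (PHT_SIZE - 1)   # XOR history with PC bits
    return pht[idx] >= 2                        # taken if counter is 2 or 3

def update(pc, taken):
    global bhr
    idx = (bhr ^ (pc >> 2)) & (PHT_SIZE - 1)
    if taken:
        pht[idx] = min(3, pht[idx] + 1)
    else:
        pht[idx] = max(0, pht[idx] - 1)
    # shift the branch outcome into the history register
    bhr = ((bhr << 1) | (1 if taken else 0)) & (PHT_SIZE - 1)
```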
The mapping of instructions to PHT entries is
essentially implemented by a simple hashing function that
does not detect or avoid collisions. Aliasing occurs when
two unrelated branch instructions hash to the same PHT
entry. Aliasing is especially a problem with correlated
predictors because a single branch may use many PHT
entries depending on the value of the BHR, thus
increasing contention.
In order to support simultaneous fetching of multiple
basic blocks, multiple branches must be predicted in a
single cycle. A number of modifications to the correlated
predictor discussed above have been proposed to support
predicting multiple branches at once. Franklin and Dutta
proposed subgraph-oriented branch prediction
mechanisms that use local history to form a prediction
that encodes multiple branches. Yeh, et al. [13] proposed
modifications to a GAg predictor to multiport the
predictor and produce multiple branch predictions per
cycle. Rotenberg et al. [10] also used the modified GAg
for their trace cache study.
Recently, Patel et al. [9] proposed a multiple branch
predictor tailored to work with a trace cache. The
predictor attempts to achieve the advantages of a
GSHARE predictor while providing multiple predictions.
The predictor uses a BHR and the address of the first
instruction of a trace, exclusive-ored together, to index
into the PHT. The entries of the PHT have been modified
to contain multiple two-bit saturating counters to allow
simultaneous prediction of multiple branches. The
predictor offers superior accuracy compared with the
multiported GAg predictor, but does not quite achieve the
overall accuracy of a single branch GSHARE predictor.
Nair proposed "path-based" prediction, a form of
correlated branch prediction that has a single branch
history register and prediction history table. The
innovation is that the information stored in the branch
history register is not the outcome of previous branches,
but their truncated PC addresses. To make a prediction, a
few bits from each address in the history register as well
as a few bits from the current PC address are concatenated
to form an index into the PHT. Hence, a branch is
predicted using knowledge of the sequence, or path, of
instructions that led up to it. This gives the predictor
more specific information about prior control flow than
the taken/not taken history of branch outcomes. Jacobson
et al. [5] refined the path-based scheme and applied it to
next task prediction for multiscalar processors. It is an
adaptation of the multiscalar predictor that forms the core
of the path-based next trace predictor presented here.
3. Path-based next trace predictors
We consider predictors designed specifically to work
with trace caches. They predict traces explicitly, and in
doing so implicitly predict the control instructions within
the trace. Next trace predictors replace the conventional
branch predictor, branch target buffer (BTB) and return
address stack (RAS). They have low latency, and are
capable of making a trace prediction every cycle. We
show they also offer better accuracy than conventional
correlated branch predictors.
3.1. Naming of traces
In theory, a trace can be identified by all the PCs in
the trace, but this would obviously be expensive. A
cheaper and more practical method is to use the PC value
for the first instruction in the trace combined with the
outcomes of conditional branches embedded in the trace.
This means that indirect jumps can not be internal to a
trace. We use traces with a maximum length of 16
instructions. For accessing the trace cache we use the
following method. We assume a 36-bit identifier: 30 bits
to identify the starting PC and six bits to encode up to six
conditional branches. The limit of six branches is
somewhat arbitrary and is chosen because we observed
that length 16 traces almost never have more than six
branches. It is important to note that this limit on
branches is not required to simplify simultaneous multiple
branch prediction, as is the case with trace predictors
using explicit branch prediction.
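Under the naming scheme above, a trace identifier can be sketched as a packed word; the exact field layout (30 PC bits in the upper field, one outcome bit per embedded branch in the low six bits) is an assumption for illustration.

```python
# Hypothetical packing of a 36-bit trace identifier: a 30-bit
# starting PC plus the outcomes of up to six embedded conditional
# branches (taken = 1). The field layout is an illustrative assumption.
def trace_id(start_pc, branch_outcomes):
    assert len(branch_outcomes) <= 6
    bits = 0
    for i, taken in enumerate(branch_outcomes):
        if taken:
            bits |= 1 << i
    return ((start_pc & ((1 << 30) - 1)) << 6) | bits
```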
3.2. Correlated predictor
The core of the next trace predictor uses correlation
based on the history of the previous traces. The
identifiers of the previous few traces represent a path
history that is used to form an index into a prediction
table; see Figure 2. Each entry in the table consists of the
identifier of the predicted trace (PC branch outcomes),
and a two-bit saturating counter. When a prediction is
correct the counter is incremented by one. When a
prediction is incorrect and the counter is zero, the
predicted trace will be replaced with the actual trace.
Otherwise, the counter is decremented by two and the
predicted trace entry is unchanged. We found that the
increment-by-1, decrement-by-2 counter gives slightly
better performance than either a one bit or a conventional
two-bit counter.
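The table-update policy just described (increment by 1 on a correct prediction, decrement by 2 on an incorrect one, replace the stored trace only when the counter is already zero) can be sketched as:

```python
# Sketch of the prediction-table update policy described above.
# An entry is a (predicted_trace, counter) pair.
def update_entry(entry, actual_trace, cmax=3):
    pred, cnt = entry
    if pred == actual_trace:
        return (pred, min(cmax, cnt + 1))      # correct: increment by 1
    if cnt == 0:
        return (actual_trace, 0)               # counter at zero: replace
    return (pred, max(0, cnt - 2))             # incorrect: decrement by 2
```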
Figure 2: Correlated predictor. A history register of hashed trace identifiers drives the
index generation logic, which selects a prediction-table entry holding the predicted trace
ID and a saturating counter.
Path history is maintained as a shift register that
contains hashed trace identifiers (Figure 2). The
hashing function uses the outcome of the first two
conditional branches in the trace identifier as the least
significant two bits, the two least significant bits of the
starting PC as the next two bits, the upper bits are formed
by taking the outcomes of additional conditional branch
outcomes and exclusive-oring them with the next least
significant bits of the starting PC. Beyond the last
conditional branch a value of zero is used for any
remaining branch outcome bits.
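A sketch of this hashing function follows, assuming the trace identifier is given as a starting PC plus a list of branch-outcome bits; the hash width is an illustrative parameter.

```python
# Sketch of the trace-ID hashing described above. Missing branch
# outcomes are treated as zero; the hash width is an assumption.
def hash_trace(start_pc, outcomes, width=8):
    o = list(outcomes) + [0] * (6 - len(outcomes))   # pad with zeros
    h = o[0] | (o[1] << 1)             # first two branch outcomes: low 2 bits
    h |= (start_pc & 0x3) << 2         # low 2 PC bits: next 2 bits
    for i in range(2, 6):              # remaining outcomes XORed with the
        pc_bit = (start_pc >> i) & 1   # next-higher PC bits
        h |= (o[i] ^ pc_bit) << (i + 2)
    return h & ((1 << width) - 1)
```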
The history register is updated speculatively with
each new prediction. In the case of an incorrect
prediction the history is backed up to the state before the
bad prediction. The prediction table is updated only after
the last instruction of a trace is retired - it is not
speculatively updated.
Figure 3: Index generation mechanism.
Ideally the index generation mechanism would simply
concatenate the hashed identifiers from the history register
to form the index. Unfortunately this is sometimes not
practical because the prediction table is relatively small so
the index must be restricted to a limited number of bits.
The index generation mechanism is based on the
method developed to do inter-task prediction for
multiscalar processors [5]. The index generation
mechanism uses a few bits from each of the hashed trace
identifiers to form an index. The low order bits of the
hashed trace identifiers are used. More bits are used from
more recent traces. The collection of selected bits from
all the traces may be longer than the allowable index, in
which case the collection of bits is folded over onto itself
using an exclusive-or function to form the index. In [5],
the "DOLC" naming convention was developed for
specifying the specific parameters of the index generation
mechanism. The first variable 'D'epth is the number of
traces besides the last trace that are used for forming the
index. The other three variables are: number of bits from
'O'lder traces, the number of bits from the 'L'ast trace, and
the number of bits from the 'C'urrent trace. In the example
shown in Figure 3 the collection of bits from the trace
identifiers is twice as long as the index so it is folded in
half and the two halves are combined with an exclusive-
or. In other cases the bits may be folded into three parts,
or may not need to be folded at all.
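The DOLC index generation described above can be sketched as follows, assuming path history is kept as a list with the most recent hashed identifier last; the parameter values are illustrative.

```python
# Sketch of DOLC-style index generation: D older hashed IDs
# contribute O bits each, the last trace L bits, and the current
# (most recent) trace C bits; the concatenation is folded down to
# the index width with XOR. Parameter values are illustrative.
def dolc_index(history, D, O, L, C, index_bits):
    chunks = []
    for h in history[-(D + 2):-2]:                 # D older traces
        chunks.append((h & ((1 << O) - 1), O))
    chunks.append((history[-2] & ((1 << L) - 1), L))   # last trace
    chunks.append((history[-1] & ((1 << C) - 1), C))   # current trace
    concat, width = 0, 0
    for bits, w in chunks:                         # concatenate selected bits
        concat = (concat << w) | bits
        width += w
    idx = 0
    while width > 0:                               # fold with XOR
        idx ^= concat & ((1 << index_bits) - 1)
        concat >>= index_bits
        width -= index_bits
    return idx
```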
3.3. Hybrid predictor
If the index into the prediction table reads an entry
that is unrelated to the current path history the prediction
will almost certainly be incorrect. This can occur when
the particular path has never occurred before, or because
the table entry has been overwritten by unrelated path
history due to aliasing. We have observed that both are
significant, but for realistically sized tables aliasing is
usually more important. In branch prediction, even a
randomly selected table entry typically has about a 50%
chance of being correct, but in the case of next trace
prediction the chances of being correct with a random
table entry is very low.
To address this issue we operate a second, smaller
predictor in parallel with the first (Figure 4). The
secondary predictor requires a shorter learning time and
suffers less aliasing pressure. The secondary predictor
uses only the hashed identifier of the last trace to index its
table. The prediction table entry is similar to the one for
the correlated predictor, except that a 4-bit saturating counter is
used that decrements by 8 on a misprediction. The reason
for the larger counter will be discussed at the end of this
section.
Figure 4: Hybrid predictor. A secondary predictor, indexed by the hashed identifier of the
last trace only, operates in parallel with the correlated predictor.
To decide which predictor to use for any given
prediction, a tag is added to the table entry in the
correlated predictor. The tag is set with the low 10 bits of
the hashed identifier of the immediately preceding trace at
the time the entry is updated. A ten-bit tag is sufficient to
eliminate practically all unintended aliasing. When a
prediction is being made, the tag is checked against the
hashed identifier of the preceding trace; if they match, the
correlated predictor is used; otherwise the secondary
predictor is used. This method increases the likelihood
that the correlated predictor corresponds to the correct
context when it is used. This method also allows the
secondary table to make a prediction when the context is
very limited, i.e. under startup conditions.
The hybrid predictor naturally reduces aliasing
pressure somewhat, and by modifying it slightly, aliasing
pressure can be further reduced. If the 4-bit counter of the
secondary predictor is saturated, its prediction is used, and
more importantly, when it is correct the correlated
predictor is not updated. This means if a trace is always
followed by the same successor the secondary predictor
captures this behavior and the correlated predictor is not
polluted. This reduces the number of updates to the
correlated predictor and therefore the chances of aliasing.
The relatively large counter, 4-bits, is used to avoid giving
up the opportunity to use the correlated predictor unless
there is high probability that a trace has a single successor.
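The selection policy of the hybrid predictor can be sketched as follows; the entry layouts and the saturation threshold of the 4-bit counter follow the description above, while the function and field names are illustrative.

```python
# Sketch of the hybrid selection policy described above. The
# correlated entry is used only when its 10-bit tag matches the
# hashed ID of the preceding trace; a saturated secondary counter
# overrides it (and suppresses correlated-table updates).
def select_prediction(corr_entry, sec_entry, prev_hash):
    # corr_entry: (tag, trace, cnt); sec_entry: (trace, cnt4)
    tag, corr_trace, _ = corr_entry
    sec_trace, sec_cnt = sec_entry
    if sec_cnt == 15:                   # secondary saturated: trust it
        return sec_trace, "secondary"
    if tag == (prev_hash & 0x3FF):      # 10-bit tag match
        return corr_trace, "correlated"
    return sec_trace, "secondary"       # no context: fall back
```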
3.4. Return history stack (RHS)
The accuracy of the predictor is further increased by a
new mechanism, the return history stack (RHS). A field is
added to each trace indicating the number of calls it
contains. If the trace ends in a return, the number of calls
is decremented by one. After the path history is updated,
if there are any calls in the new trace, a copy of the most
recent history is made for each call and these copies are
pushed onto a special hardware stack. When there is a
trace that ends in a return and contains no calls, the top of
the stack is popped and is substituted for part of the
history. One or two of the most recent entries from the
current history within the subroutine are preserved, and
the entries from the stack replace the remaining older
entries of the history. When there are five or fewer entries
in the history, only the most recent hashed identifier is
kept. When there are more than five entries the two most
recent hashed identifiers are kept.
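A behavioral sketch of the return history stack follows, assuming the path history is a fixed-length list with the most recent hashed identifier last; the per-trace call count and ends-in-return flag follow the description above.

```python
# Sketch of the return history stack (RHS) described above. On each
# call in a trace, a copy of the history is pushed; on a return, the
# popped history replaces the older entries, keeping the one or two
# most recent hashed IDs from inside the subroutine.
class ReturnHistoryStack:
    def __init__(self):
        self.stack = []

    def on_trace(self, history, num_calls, ends_in_return):
        calls = num_calls - 1 if ends_in_return else num_calls
        for _ in range(max(0, calls)):          # one push per call
            self.stack.append(list(history))
        if ends_in_return and calls <= 0 and self.stack:
            saved = self.stack.pop()
            keep = 2 if len(history) > 5 else 1  # recent in-subroutine IDs
            return saved[keep:] + history[-keep:]
        return list(history)
```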
Figure 5: Return history stack implementation.
With the RHS, after a subroutine is called and has
returned, the history contains information about what
happened before the call, as well as knowledge of the last
one or two traces of the subroutine. We found that the
RHS can significantly increase overall predictor accuracy.
The reason for the increased accuracy is that control flow
in a program after a subroutine is often tightly correlated
to behavior before the call. Without the RHS the
information before the call is often overflowed by the
control flow within a subroutine. We are trying to achieve
a careful balance of history information before the call
versus history information within the call. For different
benchmarks the optimal point varies. We found that
configurations using one or two entries from the
subroutine provide consistently good behavior.
The predictor does not use a return address stack
(RAS), because it requires information on an instruction
level granularity, which the trace predictor is trying to
avoid. The RHS can partly compensate for the absence of
the RAS by helping in the initial prediction after a return.
If a subroutine is sufficiently long, it will force any pre-call
information out of the history register; hence, determining the
calling routine, and therefore where to return, would be much
harder without the RHS.
4. Simulation methodology
4.1. Simulator
To study predictor performance, trace driven
simulation with the Simplescalar tool set is used [1].
Simplescalar uses an instruction set largely based on
MIPS, with the major deviation being that delayed
branches have been replaced with conventional branches.
We use the Gnu C compiler that targets Simplescalar.
The functional simulator of the Simplescalar instruction
set is used to produce a dynamic stream of instructions
that is fed to the prediction simulator.
For most of this work we considered the predictor in
isolation, using immediate updates. A prediction of the
next trace is made and the predictor is updated with the
actual outcome before the next prediction is made. We
also did simulations with an execution engine. This
allows updates to be performed taking execution latency
into account. We modeled an 8-way out-of-order issue
superscalar processor with a 64 instruction window. The
processor had a 128KB trace cache, a 64KB instruction
cache, and a 4-ported 64KB data cache. The processor
has 8 symmetric functional units and supports speculative
memory operations.
4.2. Trace selection
For our study, we used traces that are a maximum of
16 instructions in length and can contain up to six
branches. The limit on the number of branches is imposed
only by the naming convention of traces. Any control
instruction that has an indirect target cannot be embedded
into a trace, and must be at the end of a trace. This means
that some traces will be shorter than the maximum length.
As mentioned earlier, instructions with indirect targets are
not embedded to allow traces to be uniquely identified by
their starting address and the outcomes of any conditional
branches.
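The naming convention can be made concrete with a small sketch. The tuple encoding below is our illustration; the paper only requires that the starting address plus the conditional branch outcomes identify a trace uniquely.

```python
def trace_id(start_pc, branch_outcomes):
    """Form a unique trace identifier from the trace's starting address
    and the taken/not-taken outcomes of its conditional branches.

    Since instructions with indirect targets are never embedded in a
    trace, this pair identifies the trace uniquely.  The 6-branch limit
    mirrors the text; the tuple packing is an illustrative assumption.
    """
    assert len(branch_outcomes) <= 6, "at most six branches per trace"
    outcome_bits = 0
    for taken in branch_outcomes:
        outcome_bits = (outcome_bits << 1) | int(taken)
    # Record the branch count too, so (T) and (T, NT) cannot collide.
    return (start_pc, len(branch_outcomes), outcome_bits)

# Two traces starting at the same PC but with different branch outcomes
# receive distinct identifiers:
a = trace_id(0x400100, [True, False, True])
b = trace_id(0x400100, [True, True, True])
assert a != b
```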
We used very simple trace selection heuristics. More
sophisticated trace selection heuristics are possible and
would significantly impact the behavior of the trace
predictor. A study of the relation of trace selection and
trace predictability is beyond the scope of this paper.
4.3. Benchmarks
We present results from six SpecInt95 benchmarks:
compress, gcc, go, jpeg, m88ksim and xlisp. All results
are based on runs of at least 100 million instructions.
Table 1 Benchmark summary
(columns: benchmark, input, number of instructions, average trace
length, number of static traces. Most rows were lost in extraction;
the surviving fragments show compress run for the first 100 million
instructions, with an average trace length of 12.4 and 1393 static
traces.)
5. Performance
5.1. Sequential branch predictor
For reference we first determined the trace prediction
accuracy that could be achieved by taking proven control
flow prediction components and predicting each control
instruction sequentially. In sequential prediction each
branch is explicitly predicted and at the time of the
prediction the outcomes of all previous branches are
known. This is useful for comparisons although it is not
realizable because it would require multiple accesses to
predict a single trace and requires knowledge of the
branch addresses within the trace. The best multiple
branch predictors to date have attempted to approximate
the behavior of this conceptual sequential predictor.
We used a 16-bit GSHARE branch predictor, a
perfect branch target buffer for branches with PC-relative
and absolute address targets, a 64K entry correlated
branch target buffer for branches with indirect targets [2],
and a perfect return address predictor. All of these
predictors had ideal (immediate) updates. When
simulating this mechanism, if one or more predictions
within a trace were incorrect, we counted it as one trace
misprediction. This configuration represents a very
aggressive, ideal predictor. The prediction accuracy of
this idealized sequential prediction is given in Table 2.
The mean of the trace misprediction rate is 12.1%. We
show later that our proposed predictor can achieve levels
of prediction accuracy significantly better than those
achievable by this idealized sequential predictor. In the
results section we refer to the trace prediction accuracy of
the idealized sequential predictor as "sequential."
The misprediction rate for traces tends to be lower
than that obtained by simply multiplying the branch
misprediction rate by the number of branches because
branch mispredictions tend to be clustered. When a trace
is mispredicted, multiple branches within the same trace
are often mispredicted. Xlisp is the exception, with hard
to predict branches tending to be in different traces. With
the aggressive target prediction mechanisms none of the
benchmarks showed substantial target misprediction.
Table 2 Prediction accuracy for sequential predictors

Benchmark   16-bit Gshare branch    Branches     Trace
            misprediction (%)       per trace    misprediction (%)
compress    9.2                     2.1          17.9
gcc         8.0                     2.1          14.0
go          16.6                    1.8          24.5
jpeg        6.9                     1.0          6.7
xlisp       3.2                     1.9          6.5
(the m88ksim row was lost in extraction)
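The clustering effect can be sanity-checked with a back-of-the-envelope calculation (ours, not from the paper): if branch mispredictions were independent, a trace with n branches each mispredicted with probability b would be mispredicted with probability 1 - (1 - b)^n. Applying this to the Table 2 data:

```python
# Rows from Table 2: (benchmark, branch misprediction %, branches per
# trace, observed trace misprediction %).  m88ksim is omitted (row lost).
rows = [
    ("compress", 9.2, 2.1, 17.9),
    ("gcc",      8.0, 2.1, 14.0),
    ("go",      16.6, 1.8, 24.5),
    ("jpeg",     6.9, 1.0,  6.7),
    ("xlisp",    3.2, 1.9,  6.5),
]

for name, b, n, observed in rows:
    # Trace misprediction rate if branch mispredictions were independent:
    independent = (1 - (1 - b / 100) ** n) * 100
    flag = "clustered" if observed < independent else "exception"
    print(f"{name:8s} independence model {independent:5.1f}%  "
          f"observed {observed:5.1f}%  ({flag})")
# Every benchmark except xlisp falls below the independence estimate,
# consistent with mispredictions clustering within traces.
```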
5.2. Performance with unbounded tables
To determine the potential of path-based next trace
prediction we first studied performance assuming
unbounded tables. In this study, each unique sequence of
trace identifiers maps to its own table entry; that is, there is
no aliasing.
We consider varying depths of trace history, where
depth is the number of traces, besides the most recent
trace, that are combined to index the prediction table. For
a depth of zero only the identifier of the most recent trace
is used. We study history depths of zero through seven.
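A toy model of this unbounded experiment can clarify the setup. This is our sketch: the real predictor's tags, saturating counters, hashed indexing, and hybrid/RHS mechanisms are all omitted, leaving only a dictionary keyed by the last depth+1 trace identifiers.

```python
from collections import deque

class UnboundedCorrelatedPredictor:
    """Toy model of the unbounded-table experiment: every unique
    sequence of trace identifiers gets its own entry, so there is no
    aliasing.  'depth' is the number of traces, besides the most recent
    one, combined to form the index (depth 0 = most recent trace only)."""

    def __init__(self, depth):
        self.history = deque(maxlen=depth + 1)
        self.table = {}  # history tuple -> last observed next trace id

    def predict(self):
        return self.table.get(tuple(self.history))  # None on a cold entry

    def update(self, actual_next):
        self.table[tuple(self.history)] = actual_next
        self.history.append(actual_next)

# With depth 1 the predictor learns the alternating pattern A, B, A, ...
p = UnboundedCorrelatedPredictor(depth=1)
for t in ["A", "B", "A", "B", "A"]:
    p.update(t)
assert p.predict() == "B"
```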
Figure 6 presents the results for unbounded tables; the
mean misprediction rate is 8.0% for the RHS predictor at
the maximum depth. For comparison, the
"sequential" predictor is based on a 16-bit Gshare
predictor that predicts all conditional branches
sequentially. For all the benchmarks the proposed path-based
predictor does better than the idealized sequential
predictor. On average, the misprediction rate is 34%
lower for the proposed predictor. In the cases of gcc and
go the predictor has less than half the misprediction rate
of the idealized sequential predictor.
Figure 6 Next trace prediction with unbounded tables
(one panel per benchmark; each plots misprediction rate against depth
of history for the Correlated, Hybrid, RHS, and Sequential predictors)
For all benchmarks, the hybrid predictor achieves higher
prediction accuracy than the correlated predictor alone.
The benchmarks with more static traces see a
larger advantage from the hybrid predictor because they
contain more unique sequences of traces. Because the
table size is unbounded the hybrid predictor is not
important for aliasing, but is important for making
predictions when the correlated predictor entry is cold.
For four of the six benchmarks, adding the return
history stack (RHS) increases prediction accuracy.
Furthermore, the gains on the four improved benchmarks are
larger than the losses on the two benchmarks the RHS hurts.
For compress, the predictor does better without the RHS:
the information about the subroutine that the RHS throws
away is more important than the pre-call information it
preserves.
Xlisp extensively uses recursion, and to minimize
overhead it uses unusual control flow to backup quickly to
the point before the recursion without iteratively
performing returns. This behavior confuses the return
history stack because there are a number of calls with no
corresponding returns. However, it is hard to determine
how much of the performance loss of RHS with xlisp is
caused by this problem and how much is caused by loss of
information about the control flow within subroutines.
5.3. Performance with bounded tables
We now consider finite sized predictors. The table
for the correlated predictor is the most significant
component with respect to size. We study correlated
predictors with tables of 2^14, 2^15, and 2^16 entries. For each
size we consider a number of configurations with different
history depths. The configurations for the index
generation function were chosen based on trial-and-error.
Although better configurations are no doubt possible we
do not believe differences would be significant.
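The index generation function is parameterized by D-O-L-C values (e.g., the 7-3-6-8 configuration used later): depth D older trace identifiers contribute O bits each, the second-most-recent contributes L bits, and the most recent contributes C bits. The bit selection and folding below are our guess at the shape of such a function, not the paper's actual hash.

```python
def dolc_index(trace_ids, depth, o_bits, l_bits, c_bits, index_bits=16):
    """Sketch of a D-O-L-C style index generation function.

    Interpretation (an assumption consistent with the 7-3-6-8 example):
    'depth' older trace identifiers contribute o_bits each, the
    second-most-recent contributes l_bits, and the most recent
    contributes c_bits; contributions are concatenated and XOR-folded
    into an index_bits-wide table index.  The exact shifts and folding
    are illustrative only.  trace_ids has the most recent id last."""
    def low(x, n):
        return x & ((1 << n) - 1)

    current = low(trace_ids[-1], c_bits)
    last = low(trace_ids[-2], l_bits) if len(trace_ids) >= 2 else 0
    index = (last << c_bits) | current
    # Fold in o_bits from each older trace id at varying shift amounts.
    for i, tid in enumerate(trace_ids[-2 - depth:-2]):
        index ^= low(tid, o_bits) << (i % max(1, index_bits - o_bits))
    return low(index, index_bits)

# Changing an older trace identifier changes the generated index:
assert dolc_index([1, 2, 3], depth=1, o_bits=3, l_bits=6, c_bits=8) != \
       dolc_index([5, 2, 3], depth=1, o_bits=3, l_bits=6, c_bits=8)
```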
Figure 7 Next trace prediction with bounded tables
(one panel per benchmark; each plots misprediction rate against depth
of history for tables of 2^14, 2^15, and 2^16 entries, the infinite
table, and the sequential predictor)
We use a RHS that has a maximum depth of 128.
This depth is more than sufficient to handle all the
benchmarks except for the recursive section of xlisp,
where the predictor is of little use, anyway.
Performance results are in Figure 7. Three of the
benchmarks stress the finite-sized predictors: gcc, go and
jpeg. For these benchmarks the deviation from the
unbounded tables is very pronounced, as is the deviation
between the different table sizes. As expected, the
deviation becomes more pronounced with longer histories
because there are more unique sequences of trace
identifiers being used and, therefore, more aliasing.
Go has the largest number of unique sequences of
trace identifiers, and apparently suffers from aliasing
pressure the most. At first, increasing the history depth
lowers the miss rate. As depth continues to grow, the number
of sequences competing for the finite table increases aliasing;
the detrimental effect of aliasing eventually counters the gain
of deeper histories and at some point dominates, so that
further depth hurts. The smaller
the table size, the sooner the effects of aliasing start to
become a problem. It is important to focus on the
behavior of this benchmark and the other two larger
benchmarks - gcc and jpeg, because in general the other
benchmarks probably have relatively small working sets
compared to most realistic programs.
We see that for realistic tables, the predictor can
achieve very high prediction accuracies. In most cases,
the predictor achieves miss rates significantly below the
idealized sequential predictor. The only case where the
predictor cannot do better than sequential prediction is
jpeg with a small, 2^14 entry table. But even in this case it
achieves performance very close to sequential, and probably
closer than a realistic implementation of Gshare modified
for multiple branches
per cycle throughput. For our predictor the mean
misprediction rates are 10.0%, 9.5% and 8.9% for the
maximum depth configurations with 2^14, 2^15 and 2^16 entry
tables, respectively. These are all significantly below the
12.1% misprediction rate of the sequential predictor; the
2^16 entry predictor is 26% lower.
Table 3 Index generation configurations used
(history depth and the corresponding D-O-L-C parameters for each
table size; the table body was lost in extraction)

5.4. Impact of real updates
Thus far simulation results have used immediate
updates. In a real processor the history register would be
updated with each predicted trace, and the history would
be corrected when the predictor backs up due to a
misprediction. The table entry would not be updated until
the last instruction of a trace has retired.
Table 4 Impact of real updates

Benchmark   Misprediction with    Misprediction with
            ideal updates (%)     real updates (%)
compress    5.8                   5.8
go          9.3                   9.3
jpeg        3.5                   3.6
m88ksim     2.4                   2.1
xlisp       4.7                   4.8
To make sure this does not make a significant impact
on prediction accuracy, we ran a set of simulations where
an execution engine was simulated. The configuration of
the execution engine is discussed in section 4.1. The
predictor being modeled has 2^16 entries and a 7-3-6-8
DOLC configuration. Table 4 shows the impact of
delayed updates, and it is apparent that delayed updates
are not significant to the performance of the predictor. In
one case, m88ksim, the delayed updates actually increased
prediction accuracy. Delayed updates have the effect
of increasing the amount of hysteresis in the prediction
table, which in some cases can increase performance.
5.5. A cost-reduced predictor
The cost of the proposed predictor is primarily a
function of the size of the correlated predictor's table.
The size of the correlated predictor's table is the number
of entries multiplied by the size of an entry. The size of
an entry is 48 bits: 36 bits to encode a trace identifier, two
bits for the counter plus 10 bits for the tag.
A much less expensive predictor can be constructed,
however, by observing that before the trace cache can be
accessed, the trace identifier read from the prediction
table must be hashed to form a trace cache index. For
practical sized trace caches this index will be in the range
of 10 bits. Rather than storing the full trace identifier, the
hashed cache index can be stored in the table.
This hashed index can be the same as the hashed
identifier that is fed into the history register (Figure 2).
That is, the Hashing Function can be moved to the input
side of the prediction table to hash the trace identifier
before it is placed into the table. This modification should
not affect prediction accuracy in any significant way and
reduces the size of the trace identifier field from 36 bits to
roughly 10 bits. The full trace identifier is still stored in the trace
cache as part of its entry and is read out as part of the
trace cache access. The full trace identifier is used during
execution to validate that the control flow implied by the
trace is correct.
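The savings can be tallied directly; the 22-bit reduced entry below assumes the roughly 10-bit hashed index discussed above plus the 2-bit counter and 10-bit tag.

```python
def table_kbytes(entries, entry_bits):
    """Size in kilobytes of a prediction table."""
    return entries * entry_bits / 8 / 1024

full_entry = 36 + 2 + 10     # trace id + 2-bit counter + 10-bit tag = 48 bits
reduced_entry = 10 + 2 + 10  # hashed index (about 10 bits, an assumption)

for exp in (14, 15, 16):
    print(f"2^{exp} entries: "
          f"{table_kbytes(2 ** exp, full_entry):6.1f} KB full, "
          f"{table_kbytes(2 ** exp, reduced_entry):6.1f} KB reduced")
# The 2^16 entry table shrinks from 384 KB to 176 KB.
```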
6. Predicting an alternate trace
Along with predicting the next trace, an alternate
trace can be predicted at the same time. This alternate
trace can simplify and reduce the latency for recovering
when it is determined that a prediction is incorrect. In
some implementations this may allow the processor to
find and fetch an alternate trace instead of resorting to
building a trace from scratch.
Alternate trace prediction is implemented by adding
another field to the correlated predictor. The new field
contains the identifier of the alternate prediction. When
the prediction of the correlated predictor is incorrect the
alternate prediction field is updated. If the saturating
counter is zero the identifier in the prediction field is
moved to the alternate field, the prediction field is then
updated with the actual outcome. If the saturating counter
is non-zero the identifier of the actual outcome is written
into the alternate field.
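The update rule just described can be written out explicitly. This is a sketch; the counter decrement on a miss is a standard 2-bit counter assumption, not stated in the text.

```python
class AlternatePredictionEntry:
    """One correlated-predictor entry extended with an alternate field.
    On a correct prediction the 2-bit saturating counter increments.
    On an incorrect one, the counter decides whether the old prediction
    is demoted to the alternate slot or the actual outcome is simply
    written into the alternate slot."""

    def __init__(self):
        self.prediction = None
        self.alternate = None
        self.counter = 0          # 2-bit saturating counter

    def update(self, actual):
        if actual == self.prediction:
            self.counter = min(self.counter + 1, 3)
            return
        if self.counter == 0:
            # Low confidence: demote the old prediction to the alternate
            # slot and install the actual outcome as the new prediction.
            self.alternate = self.prediction
            self.prediction = actual
        else:
            self.counter -= 1     # assumption: decrement on a miss
            self.alternate = actual

e = AlternatePredictionEntry()
e.update("A"); e.update("A")      # establish A with some confidence
e.update("B")                     # miss: B becomes the alternate
assert e.prediction == "A" and e.alternate == "B"
```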
Figure 8 shows the performance of the alternate trace
predictor for two representative benchmarks. The graphs
show the misprediction rate of the primary 2^16 entry table
predictor as well as the rate at which both the primary and
alternate are mispredicted. A large fraction of the primary
mispredictions are caught by the alternate prediction:
two thirds for compress, and slightly less than half for gcc.
It is notable that for alternate prediction the aliasing effect
quickly dominates the benefit of more history: predicting
the two most likely traces does not require as much history,
so the benefit of additional history is significantly smaller.
There are two reasons alternate trace prediction
works well. First, there are cases where some branch is
not heavily biased; there may be two traces with similar
likelihood. Second, when there are two sequences of
traces aliased to the same prediction entry, as one
sequence displaces the other, it moves the other's likely
prediction to the alternate slot. When a prediction is made
for the displaced sequence of traces and the primary
prediction is wrong, the alternate is likely to be correct.
Figure 8 Alternate trace prediction accuracy
(for each of two benchmarks, misprediction rate against depth of
history for the primary prediction alone and for the primary and
alternate together)
7. Summary
We have proposed a next trace predictor that treats
the traces as basic units and explicitly predicts sequences
of traces. The predictor collects histories of trace
sequences and makes predictions based on these histories.
In addition to the basic predictor we proposed
enhancements to reduce performance losses due to cold
starts, procedure call/returns, and the interference in the
prediction table. The predictor yields consistent and
substantial improvement over previously proposed,
multiple-branch-prediction methods. On average the
predictor had a 26% lower misprediction rate than the most
aggressive previously proposed multiple-branch predictor.
Acknowledgments
This work was supported in part by NSF Grant MIP-
9505853 and the U.S. Army Intelligence Center and Fort
Huachuca under Contract DAPT63-95-C-0127 and ARPA
order no. D346. The views and conclusions contained
herein are those of the authors and should not be
interpreted as necessarily representing the official policies
or endorsement, either expressed or implied, of the U.S.
Army Intelligence Center and Fort Huachuca, or the U.S.
Government.
References
[1] "Evaluating Future Microprocessors: The SimpleScalar Tool Set"
[2] "Target Prediction for Indirect Jumps"
[3] "Optimization of Instruction Fetch Mechanisms for High Issue Rates"
[4] "Control Flow Prediction with Tree-Like Subgraphs for Superscalar Processors"
[5] "Control Flow Speculation in Multiscalar Processors"
[6] "Combining Branch Predictors"
[7] "Dynamic Path-Based Branch Correlation"
[8] "Improving the Accuracy of Dynamic Branch Prediction Using Branch Correlation"
[9] "Critical Issues Regarding the Trace Cache Fetch Mechanism"
[10] "Trace Cache: a Low Latency Approach to High Bandwidth Instruction Fetching"
[11] "A Study of Branch Prediction Strategies"
[12] "Increasing the Instruction Fetch Rate via Multiple Branch Prediction and a Branch Address Cache"
[13] "Two-Level Adaptive Branch Prediction"
Keywords: trace cache; Return History Stack; Next Trace Prediction; Multiple Branch Prediction; Path-Based Prediction
Run-time spatial locality detection and optimization

As the disparity between processor and main memory performance grows, the number of execution cycles spent waiting for memory accesses to complete also increases. As a result, latency hiding techniques are critical for improved application performance on future processors. We present a microarchitecture scheme which detects and adapts to varying spatial locality, dynamically adjusting the amount of data fetched on a cache miss. The Spatial Locality Detection Table, introduced in this paper, facilitates the detection of spatial locality across adjacent cached blocks. Results from detailed simulations of several integer programs show significant speedups. The improvements are due to the reduction of conflict and capacity misses by utilizing small blocks and small fetch sizes when spatial locality is absent, and the prefetching effect of large fetch sizes when spatial locality exists.

1 Introduction
This paper introduces an approach to solving the growing
memory latency problem [2] by intelligently exploiting spatial
locality. Spatial locality refers to the tendency for neighboring
memory locations to be referenced close together in
time. Traditionally there have been two main approaches
used to exploit spatial locality. The first approach is to
use larger cache blocks, which have a natural prefetching
effect. However, large cache blocks can result in wasted bus
bandwidth and poor cache utilization, due to fragmentation
and underutilized cache blocks. Both negative effects occur
when data with little spatial locality is cached. The
second common approach is to prefech multiple blocks into
the cache. However, prefetching is only beneficial when the
prefetched data is accessed in cache, otherwise the prefetched
data may displace more useful data from the cache, in addition
to wasting bus bandwidth. Similar issues exist with
allocate caches, which, in effect, prefetch the data in
the cache block containing the written address. Particu-
This technical report is a longer version of [1].
larly when using large block sizes and write allocation, the
amount of prefetching is fixed. However, the spatial locality,
and hence the optimal prefetch amount, varies across and
often within programs.
As the available chip area increases, it is meaningful
to spend more resources to allow intelligent control over
latency-hiding techniques, adapting to the variations in spatial
locality. For numeric programs there are several known
compiler techniques for optimizing data cache performance.
In contrast, integer (non-numeric) programs often have irregular
access patterns that the compiler cannot detect and
optimize. For example, the temporal and spatial locality of
linked list elements and hash table data are often difficult
to determine at compile time. This paper focuses on cache
performance optimization for integer programs. While we
focus our attention on data caches, the techniques presented
here are applicable to instruction caches.
In order to increase data cache effectiveness for integer
programs we are investigating methods of adaptive cache hierarchy
management, where we intelligently control caching
decisions based on the usage characteristics of accessed data.
In this paper we examine the problem of detecting spatial locality
in accessed data, and automatically control the fetch
of multiple smaller cache blocks into all data caches and
buffers. Not only are we able to reduce the conflict and capacity
misses with smaller cache lines and fetch sizes when
spatial locality is absent, but we also reduce cold start misses
and prefetch useful data with larger fetch sizes when spatial
locality is present.
We introduce a new hardware mechanism called the Spatial
Locality Detection Table (SLDT). Each SLDT entry
tracks the accesses to multiple adjacent cache blocks, facilitating
detection of spatial locality across those blocks while
they are cached. The resulting information is later recorded
in the Memory Address Table [3] for long-term tracking of
larger regions called macroblocks. We show that these extensions
to the cache microarchitecture significantly improve the
performance of integer applications, achieving up to 17% and
26% improvements for 100 and 200-cycle memory latencies,
respectively. This scheme is fully compatible with existing
Instruction Set Architectures (ISA).
The remainder of this paper is organized as follows: Section
related work; Section 3 discusses general
spatial locality issues, and a code example from a common
application is used to illustrate the role of spatial locality
and cache line sizes in determining application cache perfor-
mance, as well as to motivate our spatial locality optimization
techniques; Section 4 discusses hardware techniques;
Section 5 presents simulation results; Section 6 performs a
cost analysis of the added hardware; and Section 7 concludes
with future directions.
2 Related Work
Several studies have examined the performance effects of
cache block sizes [4][5]. One of the studies allowed multiple
consecutive blocks to be fetched with one request [4], and
found that for data caches the optimal statically-determined
fetch size was generally twice the block size. In this work we
also examine fetch sizes larger than the block size, however,
we allow the fetch size to vary based on the detected spatial
locality. Another method allows the number of blocks
fetched on a miss to vary across program execution, but not
across different data [6].
Hardware [7][8][9][10][11] and software [12][13][14]
prefetching methods for uniprocessor machines have been
proposed. However, many of these methods focus on
prefetching regular array accesses within well-structured
loops, which are access patterns primarily found in numeric
codes. Other methods geared towards integer codes [15][16]
focus on compiler-inserted prefetching of pointer targets,
and could be used in conjunction with our techniques.
The dual data cache [17] attempts to intelligently exploit
both spatial and temporal locality, however the temporal
and spatial data must be placed in separate structures, and
therefore the relative amounts of each type of data must
be determined a priori. Also, the spatial locality detection
method was tuned to numeric codes with constant stride
vectors. In integer codes, the spatial locality patterns may
not be as regular. The split temporal/spatial cache [18] is
similar in structure to the dual data cache, however, the run-time
locality detection mechanism is quite different than that
of both the dual data cache and this paper.
3 Spatial Locality
Caches seek to exploit the principle of locality. By storing
a referenced item, caches exploit temporal locality - the tendency
for that item to be rereferenced soon. Additionally, by
storing multiple items adjacent to the referenced item, they
exploit spatial locality - the tendency for neighboring items
to be referenced soon. While exploitation of temporal locality
can result in cache hits for future accesses to a particular
item, exploitation of spatial locality can result in cache hits
for future accesses to multiple nearby items, thus avoiding
the long memory latency for short-term accesses to these
items as well. Traditionally, exploitation of spatial locality
is achieved through either larger block sizes or prefetching
of additional blocks. We define the following terms as they
will be used throughout this paper:
element: A data item of the maximum size allowed by the
ISA, which in our system is 8 bytes.
spatial reuse: A reference to a cached element other than
the element which caused the referenced element to be
fetched into the cache.
The spatial locality in an application's data set can predict
the effectiveness of spatial locality optimizations. Unfortu-
nately, no quantitative measure of spatial locality exists, and
we are forced to adopt indirect measures. One indirect measure
of the amount of spatial locality is via its inverse relationship
to the distance between references in both space and
time. With this in view, we measured the spatial reuses in
a 64K-byte fully-associative cache with 32-byte lines. This
gives us an approximate time bound (the time taken for a
block to be displaced), and a space bound (within 32-byte
block boundaries). We chose this block size because past
studies have found that 16 or 32-byte block sizes maximize
data cache performance [4]. These measurement techniques
differ from those in [19], which explicitly measure the reuse
distance (in time). Our goal is to measure both the reused
and unused portions of the cache blocks, for different cache
organizations.
Figure 1(a) shows the spatial locality estimates for the
fully-associative cache. The number of dynamic cache blocks
is broken down by the number of 8-byte elements that were
accessed during each block's cache lifetime. Blocks where
only one element is accessed have no spatial locality within
the measured context. This graph does not show the relative
locations of the accessed elements within each 32-byte cache
block.
Figure 1(a) shows that between 13-83% of the cached
blocks have no spatial reuse. Figure 1(b) shows how this
distribution changes for a 16K-byte direct-mapped cache.
In this case between 30-93% of the blocks have no spatial
reuse.
For a 32-byte cache block, over half the time the extra
data fetched into the cache simply wastes bus bandwidth
and cache space. Similar observations have been made for
numeric codes [19]. Therefore, it would be beneficial to tune
the amount of data fetched and cached on a miss to the
spatial locality available in the data. This optimization is
investigated in our work. We discuss several issues involved
with varying fetch sizes, including cost efficient and accurate
spatial locality detection, fetch size choice, and cache
support for varying fetch sizes.
3.1 Code Example
In this section we use a code example from SPEC92 gcc to
illustrate the difficulties involved with static analysis and
annotation of spatial locality information, motivating our
dynamic approach.
One of the main data structures used in gcc is an RTL
expression, or rtx, whose definition is shown in Figure 2. Each
rtx structure contains a two-byte code field, a one-byte mode
field, seven one-bit flags, and an array of operand fields. The
operand array is declared to contain only one four-byte element;
however, each rtx is dynamically allocated to contain
as many array elements as there are operands, depending on
the rtx code, or RTL expression type. Therefore, each rtx
instance contains eight or more bytes.
In the frequently executed rtx_renumbered_equal_p routine,
which is used during jump optimization, two rtx
Figure 1: Breakdown of blocks cached in L1 data cache by how many 8-byte elements were accessed while each block was
cached. The results for two cache configurations are shown, each with 32-byte blocks: (a) 64K-byte fully-associative and
(b) 16K-byte direct-mapped.
struct rtx_def
{
  /* The kind of expression this is. */
  enum rtx_code code : 16;
  /* The kind of value the expression has. */
  enum machine_mode mode : 8;
  /* Various bit flags */
  unsigned int jump : 1;
  unsigned int call : 1;
  unsigned int unchanging : 1;
  unsigned int volatil : 1;
  unsigned int in_struct : 1;
  unsigned int used : 1;
  unsigned integrated : 1;
  /* The first element of the operands of this rtx.
     The number of operands and their types are controlled
     by the 'code' field, according to rtl.def. */
  rtunion fld[1];
};

typedef union rtunion_def
{
  /* Common union for an element of an rtx. */
  int rtint;
  char *rtstr;
  struct rtx_def *rtx;
  struct rtvec_def *rtvec;
  enum machine_mode rttype;
} rtunion;

Figure 2: Gcc rtx Definition
structures are compared to determine if they are equivalent.
Figure 3 shows a slightly abbreviated version of the
rtx_renumbered_equal_p routine. After checking if the code
and mode fields of the two rtx structures are identical, the
routine then compares the operands to determine if they
are also identical. Four branch targets in Figure 3 are annotated
with their execution weights, derived from execution
profiles using the SPEC reference input. Roughly 1% of the
time only the code fields of the two rtx structures are compared
before exiting. In this case, only the first two bytes in
each rtx structure are accessed. About 46% of the time x and y
are CONST_INT rtx, and only the first operand is accessed.
Therefore, only the first eight bytes of each rtx structure
are accessed, and there is spatial locality within those eight
bytes.
For many other types of RTL expressions, the routine will
use the for loop to iterate through the operands, from last to
first, comparing them until a mismatch is found. In this case
there will be spatial locality, but at a slightly larger distance
(in space) than in the previous case. Most instruction types
contain more than one operand. The most common operand
type in this loop is an RTL expression, which results in a
recursive call to rtx_renumbered_equal_p.
This routine illustrates that the amount of spatial locality
can vary for particular load references, depending on
the function arguments. Therefore, if the original access
into each rtx structure in this routine is a miss, the optimal
amount of data to fetch into the cache will vary correspondingly.
For example, if the access GET_CODE(y) on line 10
of Figure 3, which performs the access y->code, misses in
the L1 cache, the spatial locality in that data depends on
whether the program will later fall into a case body of the
switch statement on line 11 or into the body of the for loop
on line 24, and on the rtx type of x which determines the
initial value of i in the for loop. However, at the time of the
cache miss on line 10 this information is not available, as it
is highly data-dependent. As such, neither static analysis
(if even possible) nor profiling will result in definitive or accurate
spatial locality information for the load instructions.
Dynamic analysis of the spatial locality in the data offers
greater promise. For this routine, dynamic analysis of each
rtx instance accessed in the routine would obtain the most
accurate spatial locality detection. Also, dynamic schemes
require neither profiling, which many users are unwilling to
perform, nor ISA changes.
 1  int rtx_renumbered_equal_p (rtx x, rtx y)
 2  {
 3    register int i;
 4    register RTX_CODE code = GET_CODE (x);
 5    register char *fmt;
      ...
 9    if (...) { ... /* Rarely entered */ ... }
10    if (code != GET_CODE (y)) return 0;            /* Exits here 448 times */
11    switch (code) {
12    case PC: case CC0: case ADDR_VEC: case ADDR_DIFF_VEC:
13      return 0;
14    case CONST_INT:                                /* Case matches ... times */
15      return x->fld[0].rtint == y->fld[0].rtint;   /* Exits here 30014 times */
16    case LABEL_REF:
17      return (next_real_insn (x->fld[0].rtx) == next_real_insn (y->fld[0].rtx));
18    case SYMBOL_REF:
19      return x->fld[0].rtstr == y->fld[0].rtstr;
20    }
21    if (GET_MODE (x) != GET_MODE (y)) return 0;
22    /* Compare the elements. If any pair of corresponding elements fail to match, return 0 for the whole thing. */
23    fmt = GET_RTX_FORMAT (code);
24    for (i = GET_RTX_LENGTH (code) - 1; i >= 0; i--) {
25      register int j;
26      switch (fmt[i]) {
27      case 'i':
28        if (x->fld[i].rtint != y->fld[i].rtint) return 0;
29        break;
30      case 's':
31        if (strcmp (x->fld[i].rtstr, y->fld[i].rtstr)) return 0;
32        break;
33      case 'e':
34        if (! rtx_renumbered_equal_p (x->fld[i].rtx, y->fld[i].rtx))
35          return 0;
36        break;
37      case 'E':
38        . /* Accesses *({x,y}->fld[i].rtvec) */ .
        ...
42    return 1;                                      /* Exits here ... times */
43  }

Figure 3: Gcc rtx_renumbered_equal_p routine, executed 63173 times.
3.2 Applications
Aside from varying the data cache load fetch sizes, our spatial
locality optimizations could be used to control instruction
cache fetch sizes, write allocate versus no-allocate poli-
cies, and bypass fetch sizes when bypassing is employed. The
latter case is discussed briefly in [3], and is greatly expanded
in this paper. In this paper we examine the application of
these techniques to control the fetch sizes into the L1 and L2
data caches. We also study these optimizations in conjunction
with cache bypassing, a complementary optimization
that also aims to improve cache performance.
4.1 Overview of Prior Work
In this section we briefly overview the concept of a mac-
roblock, as well as the Memory Address Table (MAT), introduced
in an earlier paper [3] and utilized in this work.
We showed that cache bypassing decisions could be effectively
made at run-time, based on the previous usage of the
memory address being accessed. Other bypassing schemes
include [20][21][17][22]. In particular, our scheme dynamically
kept track of the accessing frequencies of memory regions
called macroblocks. The macroblocks are statically-
defined blocks of memory with uniform size, larger than the
cache block size. The macroblock size should be large enough
so that the total number of accessed macroblocks is not excessively
large, but small enough so that the access patterns
of the cache blocks contained within each macroblock are
relatively uniform. It was determined that 1K-byte macroblocks
provide a good cost-performance tradeoff.
In order to keep track of the macroblocks at run time
we use an MAT, which ideally contains an entry for each
macroblock, and is accessed with a macroblock address. To
support dynamic bypassing decisions, each entry in the table
contains a saturating counter, where the counter value represents
the frequency of accesses to the corresponding mac-
roblock. For details on the MAT bypassing scheme see [3].
Also introduced in that paper was an optimization geared
towards improving the efficiency of L1 bypasses, by tracking
the spatial locality of bypassed data using the MAT, and
using that information to determine how much data to fetch
on an L1 bypass. In this paper we introduce a more robust
spatial locality detection and optimization scheme using the
SLDT, which enables much more efficient detection of spatial
locality. Our new scheme also supports fetching varying
amounts of data into both levels of the data cache, both
with and without bypassing. In practice this spatial locality
optimization should be performed in combination with by-
passing, in order to achieve the best possible performance,
as well as to amortize the cost of the MAT hardware. The
cost of the combined hardware is addressed in Section 6,
following the presentation of experimental results.

Figure 4: Layout of 8-byte subblocks from the 32-byte block
starting at address 0x00000000 in a 512-byte 2-way set-associative
cache with 8-byte lines. The shaded blocks correspond
to the locations of the four 8-byte subblocks.
4.2 Support for Varying Fetch Sizes
The varying fetch size optimization could be supported using
subblocks. In that case the block size is the largest fetch size
and the subblock size is gcd(fetch_size_1, ..., fetch_size_n),
where n is the number of fetch sizes supported. Currently,
we only support two power-of-two fetch sizes for each level
of cache, so the subblock size is simply the smaller fetch size.
However, the cache lines will be underutilized when only the
smaller size is fetched.
Instead, we use a cache with small lines, equal to the
smaller fetch size, and optionally fill in multiple, consecutive
blocks when the larger fetch size is chosen. This approach is
similar to that used in some prefetching strategies [23]. As a
result, the cache can be fully utilized, even when the smaller
sizes are fetched. It also eliminates conflict misses resulting
from accesses to different subblocks. However, this approach
makes detection of spatial reuses much more difficult, as will
be described in Section 4.3. Also, smaller block sizes increase
the tag array cost, which is addressed in Section 6.
In our scheme, the max fetch size data is always aligned to
max fetch size boundaries. As a result, our techniques will
fetch data on either side of the accessed element, depending
on the location of the element within the max fetch size
block. In our experience, spatial locality in the data cache
can be in either direction (spatially) from the referenced element.
4.3 Spatial Locality Detection Table
To facilitate spatial locality tracking, a spatial counter, or
sctr, is included in each MAT entry. The role of the sctr
is to track the medium to long-term spatial locality of the
corresponding macroblock, and to make fetch size decisions,
as will be explained in Section 4.4. This counter will be incremented
whenever a spatial miss is detected, which occurs
when portions of the same larger fetch size block of data
reside in the cache, but not the element currently being ac-
cessed. Therefore, a hit might have occurred if the larger
fetch size was fetched, rather than the smaller fetch size. In
our implementation, where multiple cache blocks are filled
when the larger fetch size is chosen, a spatial miss is not
trivial to detect. If the cache is not fully-associative, the
tags for different blocks residing in the same larger fetch size
block will lie in consecutive sets, as shown in Figure 4, where
the data in one 32-byte block is highlighted. Searching for
other cache blocks in the same larger fetch size block of data
will require access to the tags in these consecutive sets, and
thus either additional cycles to access, or additional hardware
support. One possibility is a restructured tag array
design allowing efficient access to multiple consecutive sets
of tags. Alternatively, a separate structure can be used to
detect this information, which is the approach investigated
in this work.
This structure is called the Spatial Locality Detection Table
(SLDT), and is designed for efficient detection of spatial
reuses with low hardware overhead. The role of the SLDT
is to detect spatial locality of data while it is in the cache,
for recording in the MAT when the data is displaced. The
SLDT is basically a tag array for blocks of the larger fetch
size, allowing single-cycle access to the necessary information.
Figure 5 shows an overview of how the SLDT interacts
with the MAT and L1 data cache, where the double-arrow
line shows the correspondence of four L1 data cache entries
with a single SLDT entry. In order to track all cache blocks,
the SLDT would need N entries, where N is the number of
blocks in the cache. This represents the worst case of having
fetched only smaller (line) size blocks into the cache, all
from different larger size blocks. However, in order to reduce
the hardware overhead of the SLDT, we use a much smaller
number of entries, which will allow us to capture only the
shorter-term spatial reuses. The same SLDT could be used
to track the spatial locality aspects of all structures at the
same level in the memory hierarchy, such as the data cache,
the instruction cache, and, when we perform bypassing, the
bypass buffer.
The SLDT tags correspond to maximum fetch size
blocks. The sz field is one bit indicating if either the
larger size block was fetched into the cache, or if only
smaller blocks were fetched. The vc (valid count) field is
log2(max fetch size / min fetch size) bits in length, and indicates
how many of the smaller blocks in the larger size
block are currently valid in the data cache. The actual number
of valid smaller blocks is vc+1. An SLDT entry will only
be valid for a larger size block when some of its constituent
blocks are currently valid in the data cache. A bit mask
could be used to implement the vc, rather than the counter
design, to reduce the operational complexity. However, for
large maximum to minimum fetch size ratios, a bit mask will
result in larger entries. Finally, the sr (spatial reuse) bit will
be set if spatial reuse is detected, as will be discussed later.
When a larger size block of data is fetched into the cache,
an SLDT entry is allocated (possibly causing the replacement
of an existing entry) and the values of sz and vc are set
to 1 and max fetch size / min fetch size - 1, respectively.
If a smaller size block is fetched and no SLDT entry currently
exists for the corresponding larger size block, then an entry
is allocated and sz and vc are both initialized to 0. If an entry
already exists, vc is incremented to indicate that there is
now an additional valid constituent block in the data cache.
For both fetch sizes the sr bit is initialized to 0. When a
cache block is replaced from the data cache, the corresponding
SLDT entry is accessed and its vc value is decremented
if it is greater than 0. If vc is already 0, then this was the
only valid block, so the SLDT entry is invalidated. When
an SLDT entry is invalidated, its sr bit is checked to see if
there was any spatial reuse while the data was cached. If
not, the corresponding entry in the MAT is accessed and its
sctr is decremented, effectively depositing the information
in the MAT for longer-term tracking. Because the SLDT is
managed as a cache, entries can be replaced, in which case
the same actions are taken.

Figure 5: SLDT and MAT Hardware
An fi (fetch initiator) bit is added to each data cache tag
to help detect spatial hits. The fi bit is set to 1 during
the cache refill for the cache block containing the referenced
element (i.e. the cache block causing the fetch), otherwise
it is reset to 0. Therefore, a hit to any block with a 0 fi bit
is a spatial hit, as this data was fetched into the cache by a
miss to some other element.
Table 1 summarizes the actions taken by the SLDT for
memory accesses. The sr bit, which was initialized to zero,
is set for all types of both spatial misses and spatial hits.
Two types of spatial misses are detected. The first type of
spatial miss occurs when other portions of the same larger
fetch size block were fetched independently, indicated by a
valid SLDT entry with a sz of 0. Therefore, there might
have been a cache hit if the larger size block was fetched, so
the corresponding entry in the MAT is accessed and its sctr
is incremented. The second type can occur when the larger
size block was fetched, but one of its constituent blocks was
displaced from the cache, as indicated by a cache miss and a
valid SLDT entry with a sz of 1. It is not trivial to detect if
this miss is to the element which caused the original fetch,
or to some other element in the larger fetch size block. The
sr bit is conservatively set, but the sctr in the corresponding
MAT entry is not incremented.
A spatial hit can occur in two situations. If the larger size
block was fetched, then the fi bit will only be set for one of
the loaded cache blocks. A hit to any of the loaded cache
blocks without the fi bit set is a spatial hit, as described
earlier. We do not increment the sctr on spatial hits, because
our fetch size was correct. We only update the sctr
when the fetch size should be changed in the future. When
multiple smaller blocks were fetched, a hit to one of these
is also characterized as a spatial hit. This case is detected
by checking if vc is larger than 0 when sz is 0. However,
we do not increment the sctr in this case either because a
spatial miss would have been detected earlier when a second
element in the larger fetch size block was first accessed (and
missed).
Cache         SLDT
Access        Access     fi    sz    vc       Action
miss          hit        -     0              set sr; increment sctr in MAT
miss          hit        -     1              set sr
hit           hit        0                    set sr
hit           hit              0     > 0      set sr
Cache entry                          > 0      decrement vc
replaced                             == 0     invalidate SLDT entry
SLDT entry replaced        sr == 0            decrement sctr in MAT
or invalidated             sr == 1            no action

Table 1: SLDT Actions. A dash indicates that there is no
corresponding value, and a blank indicates that the value
does not matter.
4.4 Fetch Size Decisions
On a memory access, a lookup in the MAT of the corresponding
macroblock entry is performed in parallel with the
data cache access. If an entry is found, the sctr value is compared
to some threshold value. The larger size is fetched if
the sctr is larger than the threshold, otherwise the smaller
size is fetched. If no entry is found, a new entry is allocated
and the sctr value is initialized to the threshold value, and
the larger fetch size is chosen. In this paper the threshold is
50% of the maximum sctr value.
5 Experimental Evaluation
5.1 Experimental Environment
We simulate ten benchmarks, including 026.compress, 072.sc
and 085.cc1 from the SPEC92 benchmark suite using the
reference inputs, and 099.go, 147.vortex, 130.li,
134.perl, and 124.m88ksim from the SPEC95 benchmark
suite using the training inputs. The last two benchmarks,
Pcode and lmdes2_customizer, consist of modules from the
IMPACT compiler [24] that we felt were representative of
many real-world integer applications.
dependence analysis with the internal representation of the
combine.c file from GNU CC as input. lmdes2 customizer,
a machine description optimizer, is run optimizing the SuperSPARC
machine description. These optimizations operate
over linked list and complex data structures, and utilize
hash tables for efficient access to the information.
In order to provide a realistic evaluation of our technique
for future high-performance, high-issue rate systems, we first
optimized the code using the IMPACT compiler [24]. Classical
optimizations were applied, then optimizations were
performed which increase instruction level parallelism. The
code was scheduled, register allocated and optimized for an
eight-issue, scoreboarded, superscalar processor with register
renaming. The ISA is an extension of the HP PA-RISC
instruction set to support compile-time speculation.
We perform cycle-by-cycle emulation-driven simulation on
a Hewlett-Packard PA-RISC 7100 workstation, modelling
the processor and the memory hierarchy (including all related
busses). The instruction latencies used are those of a
Hewlett-Packard PA-RISC 7100, as given in Table 2. The
base machine configuration is described in Table 3.
Since simulating the entire applications at this level of detail
would be impractical, uniform sampling is used to reduce
simulation time [25], however emulation is still performed
Function        Latency      Function                    Latency
memory load     2            FP multiply                 2
memory store    1            FP divide (single prec.)    8
branch

Table 2: Instruction latencies for simulation experiments.
L1 Icache 32K-byte split-block, direct mapped, 64-byte block
L1 Dcache 16K-byte non-blocking (50 max), direct mapped,
32-byte block, multiported, writeback, no write alloc
L1-L2 Bus 8-byte bandwidth, split-transaction, 4-cycle latency,
returns critical word first
L2 Dcache same as L1 Dcache except: 256K-byte, 64-byte block
System Bus same as L1-L2 Bus except: 100-cycle latency
Issue 8-issue uniform, except 4 memory ops/cycle max
Registers 64 integer, 64 double precision floating-point
Table 3: Base Configuration.
between samples. The simulated samples are 200,000 instructions
in length and are spaced evenly every 20,000,000
instructions, yielding a 1% sampling ratio. For smaller ap-
plications, the time between samples is reduced to maintain
at least 50 samples (10,000,000 instructions). To evaluate
the accuracy of this technique, we simulated several configurations
both with and without sampling, and found that the
improvements reported in this paper are very close to those
obtained by simulating the entire application.
5.2 Macroblock Spatial Locality Variations
Before presenting the performance improvements achieved
by our optimizations, we first examine the accuracy of the
macroblock granularity for tracking spatial locality. It is
important to have accurate spatial locality information in
the MAT for our scheme to be successful. This means that all
data elements in a macroblock should have similar amounts
of spatial locality at each phase of program execution.
After dividing main memory into macroblocks, as described
in Section 4.1, the macroblocks can be further subdivided
into smaller sections, each the size of a 32-byte cache
block. We will simply call these smaller sections blocks. In
order to determine the dynamic cache block spatial locality
behavior, we examined the accesses to each of these blocks,
gathering information twice per simulation sample, or every
100,000 instructions. At the end of each 100,000-instruction
phase, we determined the fraction of times that each block
in memory had at least one spatial reuse each time it was
cached during that phase. We call this the spatial reuse
fraction for that block. Figure 6 shows a graphical representation
of the resulting information for three programs. Each
row in the graph represents a 1K-byte macroblock accessed
in a particular phase. For every phase in which a particular
macroblock was accessed, there will be a corresponding
row. Each row contains one data point for every 32-byte
block accessed during the corresponding phase that lies in
that macroblock. For the purposes of clarity, the rows were
sorted by the average of the block spatial reuse fractions per
macroblock. The averages increase from the bottom to the
top of the graphs. The cache blocks in each macroblock were
also sorted so that their spatial reuse fractions increase from
left to right. Some rows are not full, meaning that not all of
their blocks were accessed during the corresponding phase.
Finally, the cache blocks with spatial reuse fractions falling
within the same range were plotted with the same marker.
Figure 6(a) shows the spatial locality distribution for
026.compress. Most of the blocks, corresponding to the
lighter gray points, have spatial reuse fractions between 0
and 0.25, meaning that there was spatial reuse to those
blocks less than 25% of the time they were cached. Very
few of the blocks, corresponding to the black points, had
spatial reuse more than 75% of the time they were cached.
This represents a fairly optimal scenario, because most of
the macroblocks contain blocks which have approximately
the same amount of reuse. Figure 6(b) shows the distribution
for 134.perl. Around 34% of the macroblocks (IDs 0
to 6500) contain only blocks with little spatial reuse, their
spatial reuse fractions all less than 0.25. About 29% of the
macroblocks (IDs 13500 to 18900) contain only blocks with
large fractions of spatial reuse, their spatial reuse fractions
all over 0.75. About 37% of the macroblocks contain cache
blocks with differing amounts of spatial reuse. The medium
gray points in some of these rows correspond to blocks with
spatial reuse fractions between 0.25 and 0.75. However, this
information does not reveal the time intervals over which
the spatial reuse in these blocks varies. It is possible that in
certain small phases of program execution the spatial locality
behavior is uniform, but that it changes drastically from
one small phase of execution to another. This type of behavior
is possible due to dynamically-allocated data, where
a particular section of memory may be allocated as one type
of data in one part of the program, then freed and reallocated
as another type later. Finally, Figure 6(c) shows the
distribution for 085.gcc, which has similar characteristics to
134.perl, but has more macroblocks with non-uniform spatial
reuse fractions.
5.3 Performance Improvements
In this section we examine the performance improvement, or
the execution cycles eliminated, over the base 8-issue configuration
described in Section 5.1. To support varying fetch
sizes, we use an SLDT and an MAT at each level of the
cache hierarchy. The L1 and L2 SLDTs are direct-mapped
with 32 entries. A large number of simulations showed that
direct-mapped SLDTs perform as well as a fully-associative
design, and that 32 entries perform almost as well as any
larger power-of-two number of entries up to 1024 entries,
which was the maximum size examined. The L1 and L2
MATs utilize 1K-byte macroblocks, and we examine both
one and four-bit sctrs. We first present results for infinite-
entry MATs, then study the effects of limiting the number
of MAT entries.
5.3.1 Static versus Varying Fetch Sizes
The left bar for each benchmark in Figure 7(a) shows the
performance improvement achieved by using 8-byte L1 data
cache blocks with a static 8-byte fetch size, over the base 32-
byte block and fetch sizes. These bars show that the better
choice of block size is highly application-dependent. The
right bars show the improvement achieved by our spatial
locality optimization at the L1 level only, using an 8-byte
data cache block size, and fetching either 8 or 32-bytes
on an L1 data cache miss, depending on the value of the
(a) 026.compress (b) 134.perl (c) 085.gcc

Figure 6: Spatial reuse fractions (srf) for cache-block-sized data in the accessed macroblocks for three applications.
corresponding sctr. The results show that our scheme is
able to obtain either almost all of the performance, or is
able to outperform, the best static fetch size scheme. In
most cases the 1 and 4-bit sctrs perform similarly, but in one
case the 4-bit sctr achieves almost 2% greater performance
improvement.
The four leftmost bars for each benchmark in Figure 7(b)
show the performance improvement using different L2 data
cache block and (static) fetch sizes, and our L1 spatial locality
optimization with a 4-bit sctr. The base configuration
is again the configuration described in Section 5.1,
which has 64-byte L2 data cache block and fetch sizes.
These bars show that, again, the better static block/fetch
size is highly application-dependent. For example, 134.perl
achieves much better performance with a 256-byte fetch size,
while 026.compress achieves its best performance with a 32-
byte fetch size, obtaining over 14% performance degradation
with 256-byte fetches. The rightmost two bars in Figure 7(b)
show the performance improvement achieved with our L2
spatial locality optimization, which uses a 32-byte L2 data
cache block size and fetches either 32 or 256 bytes on an L2
data cache miss, depending on the value of the corresponding
L2 MAT sctr. Again, our spatial locality optimizations
are able to obtain almost the same or better performance
than the best static fetch size scheme for all benchmarks.
Figure 8 shows the breakdown of processor stall cycles
attributed to different types of data cache misses, as a percentage
of the total base configuration execution cycles. The
left and right bars for each benchmark are the stall cycle
breakdown for the base configuration and our spatial locality
optimization, respectively. The spatial locality optimizations
were performed at both cache levels, using the same
configuration as in Figure 7(b) with a 4-bit sctr. For the
benchmarks that have large amounts of spatial locality, as
indicated from the results of Figure 7, we obtain large reductions
in L2 cold start stall cycles by fetching 256 bytes on
L2 cache misses. The benchmarks with little spatial locality
in the L1 data cache, such as 026.compress and Pcode, obtained
reductions in L1 capacity miss stall cycles from fetching
fewer small cache blocks on L1 misses. In some cases
the L1 cold start stall cycles increase, indicating that the L1
optimizations are less aggressive in terms of fetching more
data, however these increases are generally more than compensated
by reductions in other types of L1 stall cycles. The
conflict miss stall cycles increase for lmdes2 customizer,
because it tends to fetch fewer blocks on an L1 miss, exposing
some conflicts that were interpreted as capacity misses
in the base configuration.
Revisiting the example of Section 3.1, we found that the
access y->code on line 10 of Figure 3 missed 11,223 times,
fetching 32 bytes for 47% of the misses, and 8 bytes for the
remaining 53%. We also found that on average, 0.99 spatial
hits and only 0.02 spatial misses to the resulting data
occurred per miss, illustrating that our techniques are successfully
choosing the appropriate amount of data to fetch
on a miss.
5.3.2 Set-associative Data Caches
Increasing the set-associativity of the data caches can reduce
the number of conflict misses, which may in turn reduce
the advantage offered by our optimizations. However, the
[Figure 7 appears here. Panel (a), L1 trends: improvement over base for each benchmark with a static 8-byte L1 block/fetch size versus varying fetch with 1-bit and 4-bit sctrs. Panel (b), L2 trends (with L1 varying fetches): improvement over base with static 32, 64, 128, and 256-byte L2 block/fetch sizes versus varying fetch with 1-bit and 4-bit sctrs.]
Figure
7: Performance for various statically-determined block/fetch sizes and for our spatial locality optimizations using both
1 and 4-bit sctrs.
[Figure 8 appears here: for each benchmark, paired (base) and (opti) bars break down processor stall cycles, as a percentage of total base execution cycles, into L2 cold start, L2 capacity miss, and L2 conflict miss stall cycles, and L1 cold start, L1 capacity miss, and L1 conflict miss stall cycles.]
Figure
8: Stall cycle breakdown for base and the spatial
locality optimizations.
reductions in capacity and cold start stall cycles that our
optimizations achieve should remain. To investigate these
effects, the data cache configuration discussed in Section 5.1
was modified to have a 2-way set-associative L1 data cache
and a 4-way set-associative L2 data cache.
Figure
9 shows the new performance improvements for our
optimizations. The left bars show the result of applying our
optimizations to the L1 data cache only, and the right bars
show the result of applying our techniques to both the L1
and L2 data caches, using four-bit sctrs. The improvements
have reduced significantly for some benchmarks over those
shown in Figure 7. However, large improvements are still
achieved for some benchmarks, particularly when applying
the optimizations at the L2 data cache level, due to the
reductions we achieve in L2 cold start stall cycles for data
with spatial locality.
[Figure 9 appears here: improvement over base for each benchmark, with left bars applying the varying fetch (4-bit sctr) optimization at the L1 data cache only and right bars applying it at both the L1 and L2 data caches.]
Figure
9: Performance for the spatial locality optimizations
with 2-way and 4-way set-associative L1 and L2 data caches,
respectively.
5.3.3 Growing Memory Latency Effects
As discussed in Section 1, memory latencies are increasing,
and this trend is expected to continue. Figure 10 shows
the improvements achieved by our optimizations when applied
to direct-mapped caches for both 100 and 200-cycle la-
tencies, each relative to a base configuration with the same
memory latency. Most of the benchmarks see much larger
improvements from our optimizations, with the exception of
026.compress. Because 026.compress has very little spatial
locality to exploit, the longer latency cannot be hidden as
effectively. Although the raw number of cycles we eliminate
grows, as a percentage of the associated base execution cycle
count it becomes smaller.
5.3.4 Comparison of Integrated Techniques to Doubled
Data Caches
As the memory latencies increase, intelligent cache management
techniques will become increasingly important. We examined
the performance improvement achieved by integrat-
[Figure 10 appears here: improvement over base for each benchmark at 100-cycle and 200-cycle memory latencies.]
Figure
10: Performance for the spatial locality optimizations
with growing memory latencies.
[Figure 11 appears here: improvement over base for each benchmark, comparing doubled (32K/512K) L1/L2 data caches against varying fetch (4-bit sctr) combined with bypassing using infinite, 1K-entry, and 512-entry MATs.]
Figure
11: Comparison of doubled caches to integrated spatial
locality and bypassing optimizations. Infinite, 1024-
entry, and 512-entry direct-mapped MATs are examined.
ing our spatial locality optimizations with intelligent bypass-
ing, using 8-bit access counters in each MAT entry [3]. The
4-way set-associative buffers used to hold the bypassed data
at the L1 and L2 caches contain 128 8-byte entries and 512
32-byte entries, respectively. Then, the SLDT and MAT at
each cache level are used to detect spatial locality and control
the fetch sizes for both the data cache and the bypass
buffer at that level.
Figure
11 shows the improvements achieved by combining
these techniques at both cache levels for a 100-cycle memory
latency. We show results for three direct-mapped MAT
sizes: infinite, 1K-entry, and 512-entry. Also shown are the
performance improvements achieved by doubling both the
L1 and L2 data caches. Doubling the caches is a brute-force
technique used to improve cache performance. Figure 11
shows that performing our integrated optimizations at both
cache levels can outperform simply doubling both levels of
cache. The only case where the doubled caches perform significantly
better than our optimizations is for 026.compress.
This improvement mostly comes from doubling the L2 data
cache, which results because its hash tables can fit into a
[Table 4 appears here: for each cache level, the data cost (bytes), block size (bytes), number of sets, tag size (bits), and tag cost (bytes) of the doubled data caches.]
Table
4: Hardware Cost of Doubled Data Caches.
512K-byte cache. Pcode is the only benchmark for which
the performance degrades significantly when reducing the
MAT size, however, 1K-entry MATs can still outperform the
doubled caches. Comparing Figure 11 to the bypassing improvements
in [3] shows that often significant improvements
can be achieved by intelligently controlling the fetch sizes
into the data caches and bypass buffers.
6 Design Considerations
In this section we examine the hardware cost of the spatial
locality optimization scheme described in Section 4, and
compare this to the cost of doubling the data caches at each
level. As discussed in Section 4.1, the cost of the MAT hardware
is amortized when performing both spatial locality and
bypassing optimizations. For this reason, we compute the
hardware cost of the hardware support for both of these opti-
mizations, just as their combined performance was compared
to the performance of doubling the caches in Section 5.3.4.
The additional hardware cost incurred by our spatial locality
optimization scheme is small compared to doubling
the cache sizes at each level, particularly for the L2 cache.
For the 16K-byte direct-mapped L1 cache used to generate the results of Section 5.3, 18 bits of tag are used per entry (assuming 32-bit addresses). Doubling this cache will result in 17-bit tags. Because the line size is 32 bytes, the total additional cost of the increased tag array will be 17*1024 - 18*512 = 8192 bits, which is 1K bytes.1 In addition, an extra 16K of data is
needed. Similar computations will show that the cost of
doubling the 256K-byte L2 cache is an extra 6144 bytes of
tag and 256K bytes of data. The total tag and data costs of
the doubled L1 and L2 caches is shown in Table 4.
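This tag arithmetic can be reproduced with a short script. The sketch below assumes 32-bit addresses, direct-mapped tag arrays, the 32-byte L1 line stated in the text, and a 64-byte L2 line (an assumption, chosen because it reproduces the 6144-byte figure); valid and other state bits are ignored, as in the footnote.

```python
def log2(x):
    return x.bit_length() - 1

def tag_cost_bytes(capacity, line_size, addr_bits=32):
    """Tag array cost in bytes for a direct-mapped cache (tags only)."""
    sets = capacity // line_size
    tag_bits = addr_bits - log2(sets) - log2(line_size)
    return sets * tag_bits // 8

# Extra tag storage needed when doubling each cache:
l1_extra = tag_cost_bytes(32 * 1024, 32) - tag_cost_bytes(16 * 1024, 32)
l2_extra = tag_cost_bytes(512 * 1024, 64) - tag_cost_bytes(256 * 1024, 64)
print(l1_extra, l2_extra)  # 1024 and 6144 bytes, matching the text
```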
For a direct-mapped MAT with 8-bit access counters and
4-bit spatial counters, Table 12 gives the hardware cost of the
data and tags for the MAT sizes discussed in Section 5.3.4.
Since all addresses within a macroblock map to the same
MAT counter, a number of lower address bits are discarded
when accessing the MAT. The size of the resulting MAT
address for 1K-byte macroblocks is shown in column 3 of
Table
12(a).
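The MAT addressing arithmetic can be sketched as follows; the 1K-byte macroblock size and 32-bit addresses come from the text, while the helper name and example addresses are illustrative.

```python
MACROBLOCK_BITS = 10  # log2 of the 1K-byte macroblock size

def mat_index(addr, mat_entries):
    # Every address within a macroblock shares one MAT counter: the low
    # 10 bits are discarded, and the next bits index the direct-mapped MAT.
    return (addr >> MACROBLOCK_BITS) % mat_entries

# Two addresses in the same macroblock map to the same entry ...
assert mat_index(0x4000, 1024) == mat_index(0x43FF, 1024)
# ... while adjacent macroblocks use different entries.
assert mat_index(0x4000, 1024) != mat_index(0x4400, 1024)

# With 32-bit addresses the MAT address is 32 - 10 = 22 bits; a 1K-entry
# direct-mapped MAT uses 10 of them as index, leaving a 12-bit tag.
```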
Table
12(b) shows the data and tag array costs for the
direct-mapped data caches in our spatial locality optimization
scheme. The data cost remains the same as the base
configuration cost, but the tag array cost is increased due
to the decreased line sizes and additional support for our
scheme, which requires a 1-bit fetch initiator bit per tag entry.
The cost for the L1 buffer, which is a 4-way set-associative
cache with 8-byte lines is shown in Table 12(c). As with the
optimized data caches, the bypass buffers require a 1-bit
fetch initiator bit, in addition to the address tag. The cost
for the L2 bypass buffer is computed similarly in Table 12(c).
1 We are ignoring the valid bit and other state.
[Figure 12 appears here, tabulating the hardware cost breakdown of the spatial locality optimizations: (a) hardware cost of the 512 and 1K-entry MATs — data cost, size of MAT address, tag size, and tag cost; the cost is the same for both the L1 and L2 cache levels; (b) hardware cost of the optimized data caches — data cost, fetch size, block size, sets, tag size, and tag cost per cache level; (c) hardware cost of the bypass buffers — entries, block/fetch size, data cost, tag size, and tag cost per cache level; (d) hardware cost of the SLDTs — entries, tag size, and tag cost per cache level.]
Figure
12: Hardware Cost Breakdown of Spatial Locality Optimizations.
The final component of the spatial locality optimization
scheme is the 32-entry SLDT, which can be organized as a
direct-mapped tag array, with the vc, 1-bit sz and 1-bit sr
fields included in each tag entry. The L1 SLDT requires a 2-
bit vc because there are 4 8-byte lines per 32-byte maximum
fetch, and the L2 SLDT requires a 3-bit vc due to the 8 32-
byte lines per 256-byte maximum fetch. A bit mask could be
used to implement the vc, rather than the counter design,
to reduce the operational complexity. However, for large
maximum to minimum fetch size ratios, such as the 8-to-1
ratio for the L2 cache, a bit mask will result in larger entries.
Table
12(d) shows the total tag array costs of the L1 and L2
SLDTs.
Finally, combining the costs of the MAT, the optimized
data cache, the bypass buffer, and the SLDT results in a
total L1 cost of 24376 bytes with a 512-entry MAT, and
25848 bytes with a 1K-entry MAT. Therefore, the savings
over doubling the L1 data cache is over 10K and 8K bytes
for the 512 and 1K-entry MATs, respectively. Similar calculations
show that our L2 optimizations save over 247K
bytes and 245K bytes for the 512 and 1K-entry MATs, re-
spectively, over doubling the L2 data cache. This translates
into 26% and 44% less tags and data than doubling the data
caches at the L1 and L2 levels, respectively, for the larger
1K-entry MAT. Comparing the performance of the spatial
locality and bypassing optimizations to the performance obtained
by doubling the data caches at both levels, as shown
in
Figure
11, illustrates that for much smaller hardware costs
our optimizations usually outperform simply doubling the
caches.
To reduce the hardware cost, we could potentially integrate
the L1 MAT with the TLB and page tables. For a
macroblock size larger than or equal to the page size, each
TLB entry will need to hold only one 8-bit counter value.
For a macroblock size less than the page size, each TLB
entry needs to hold several counters, one for each of the
macroblocks within the corresponding page. In this case
a small amount of additional hardware is necessary to select
between the counter values. However, further study is
needed to determine the full effects of TLB integration.
7 Conclusion
In this paper, we examined the spatial locality characteristics
of integer applications. We showed that the spatial
locality varied not only between programs, but also varied
vastly between data accessed by the same application. As a
result of varying spatial locality within and across applica-
tions, spatial locality optimizations must be able to detect
and adapt to the varying amount of spatial locality both
within and across applications in order to be effective. We
presented a scheme which meets these objectives by detecting
the amount of spatial locality in different portions of
memory, and making dynamic decisions on the appropriate
number of blocks to fetch on a memory access. A Spatial Locality
Detection Table (SLDT), introduced in this paper, facilitates
spatial locality detection for data while it is cached.
This information is later recorded in a Memory Address Table
(MAT) for long-term tracking, and is then used to tune
the fetch sizes for each missing access.
Detailed simulations of several applications showed that
significant speedups can be achieved by our techniques. The
improvements are due to the reduction of conflict and capacity
misses by utilizing small blocks and small fetch sizes
when spatial locality is absent, and utilizing the prefetching
effect of large fetch sizes when spatial locality exists. In ad-
dition, we showed that the speedups achieved by this scheme
increase as the memory latency increases.
As memory latencies increase, the importance of cache
performance improvements at each level of the memory hierarchy
will continue to grow. Also, as the available chip
area grows, it makes sense to spend more resources to allow
intelligent control over the cache management, in order to
adapt the caching decisions to the dynamic accessing behav-
ior. We believe that our schemes can be extended into a
more general framework for intelligent runtime management
of the cache hierarchy.
Acknowledgements
The authors would like to thank Mark Hill, Santosh Abraham
and Wen-Hann Wang, as well as all the members of
the IMPACT research group, for their comments and suggestions
which helped improve the quality of this research.
This research has been supported by the National Science
Foundation (NSF) under grant CCR-9629948, Intel Corpo-
ration, Advanced Micro Devices, Hewlett-Packard, SUN Microsystems, NCR, and the National Aeronautics and Space
Administration (NASA) under Contract NASA NAG 1-613
in cooperation with the Illinois Computer Laboratory for
Aerospace Systems and Software (ICLASS).
References
"Run-time spatial locality detection and optimization,"
"Predicting and precluding problems with memory latency,"
"Run-time adaptive cache hierarchy management via reference analysis,"
"The performance impact of block sizes and fetch strategies,"
"Line (block) size choice for CPU cache memories,"
"Fixed and adaptive sequential prefetching in shared memory multiprocessors,"
"Cache memories,"
"Improving direct-mapped cache performance by the addition of a small fully-associative cache and prefetch buffers,"
"An effective on-chip preloading scheme to reduce data access penalty,"
"Quantifying the performance potential of a data prefetch mechanism for pointer-intensive and numeric programs,"
"Stride directed prefetching in scalar processors,"
Software Methods for Improvement of Cache Performance on Supercomputer Applications.
"Design and evaluation of a compiler algorithm for prefetching,"
"Data access microarchitectures for superscalar processors with compiler-assisted data prefetching,"
"Compiler-based prefetching for recursive data structures,"
"SPAID: Software prefetching in pointer- and call-intensive environments,"
"A data cache with multiple caching strategies tuned to different types of locality,"
"The split temporal/spatial cache: Initial performance analysis,"
"A quantitative analysis of loop nest locality,"
"Efficient simulation of caches under optimal replacement with applications to miss characterization,"
"A modified approach to data cache management,"
"Reducing conflicts in direct-mapped caches with a temporality-based design,"
"Data prefetching in multi-processor vector cache memories,"
"IMPACT: An architectural framework for multiple-instruction-issue processors,"
"How to simulate 100 billion references cheaply,"
The design and performance of a conflict-avoiding cache

Abstract
High performance architectures depend heavily on efficient multi-level memory hierarchies to minimize the cost of accessing data. This dependence will increase with the expected increases in relative distance to main memory. There have been a number of published proposals for cache conflict-avoidance schemes. We investigate the design and performance of conflict-avoiding cache architectures based on polynomial modulus functions, which earlier research has shown to be highly effective at reducing conflict miss ratios. We examine a number of practical implementation issues and present experimental evidence to support the claim that pseudo-randomly indexed caches are both effective in performance terms and practical from an implementation viewpoint.

1 Introduction
On current projections the next 10 years could see CPU clock frequencies increase by a factor of
twenty whereas DRAM row-address-strobe delays are projected to decrease by only a factor of
two. This potential ten-fold increase in the distance to main memory has serious implications for
the design of future cache-based memory hierarchies as well as for the architecture of memory
devices themselves.
There are many options for an architect to consider in the battle against memory latency.
These can be partitioned into two broad categories - latency reduction and latency hiding. Latency
* Departament d'Arquitectura de Computadors
Universitat Politècnica de Catalunya
c/ Jordi Girona 1-3, 08034 Barcelona (Spain)
Email:{antonio, joseg}@ac.upc.es
Department of Computer Science
University of Edinburgh
JCMB, Kings Buildings, Edinburgh (UK)
reduction techniques rely on caches to exploit locality with the objective of reducing the latency of
each individual memory reference. Latency hiding techniques exploit parallelism to overlap
memory latency with other operations and thus "hide" it from a program's critical path.
This paper addresses the issue of latency reduction and the degree to which future cache
architectures can isolate their processor from increasing memory latency. We discuss the theory,
and evaluate the practice, of using a particular class of conflict-avoidance indexing functions. We
demonstrate how such a cache could be constructed and provide practical solutions to some
previously un-reported problems, as well as some known problems, associated with
unconventional indexing schemes.
In section 2 we present an overview of the causes of conflict misses and summarise
previous techniques that have been proposed to minimize their effect on performance. We propose
a method of cache indexing which has demonstrably lower miss ratios than alternative schemes,
and summarise the known characteristics of this method. In section 3 we discuss a number of
implementation issues, such as the effect of using this novel indexing scheme on the processor
cycle time. We present an experimental evaluation of the proposed indexing scheme in section 4.
Our results show how the IPC of an out-of-order superscalar processor can be improved through
the use of our proposed indexing scheme. Finally, in section 5, we draw conclusions from this
study.
2 The problem of cache conflicts
Whenever a block of main memory is brought into cache a decision must be made on which block,
or set of blocks, in the cache will be candidates for storing that data. This is referred to as the
placement policy. Conventional caches typically extract a field of n bits from the address and use this to select one block from a set of 2^n. Whilst simple, and easy to implement, this indexing function is not robust. The principal weakness of this function is its susceptibility to repetitive conflict misses. For example, if C is the cache capacity and B is the block size, then addresses A1 and A2 map to the same cache set if floor(A1/B) mod (C/B) = floor(A2/B) mod (C/B).
Condition 1 Repetitive collisions
If A1 and A2 collide on the same cache block, then addresses A1 + d and A2 + d also collide in cache, for any integer d, except when the displacement d carries one address, but not the other, across a block boundary. Here C is the cache capacity and B is the block size.
There are two common cases when this happens:
. when accessing a stream of addresses {A_0, A_1, A_2, ...} with a constant stride, if A_i collides with A_j, then A_{i+k} collides with A_{j+k}, and there may be up to N conflict misses in an N-element stream.
. when accessing elements of two distinct arrays X and Y, if X_i collides with Y_j, then X_{i+k} collides with Y_{j+k}.
a-way set-associativity can help to alleviate such conflicts. However, if a working set contains m conflicts on some cache block, set associativity can only eliminate at most a - 1 of
those conflicts. The following section proposes a remedy to the problem of cache conflicts by
defining an improved method of block placement.
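The repetitive-conflict behaviour is easy to reproduce in a few lines. The sketch below assumes a hypothetical 8K-byte direct-mapped cache with 32-byte blocks; the function name is ours.

```python
C, B = 8 * 1024, 32   # hypothetical capacity and block size
S = C // B            # number of sets (256)

def conventional_set(addr):
    # Conventional placement: a bit field of the address selects the set.
    return (addr // B) % S

# A stream whose stride equals the cache capacity piles every
# reference onto a single set (Condition 1):
stream = [0x10000 + k * C for k in range(64)]
print(len({conventional_set(a) for a in stream}))   # -> 1 set used

# A unit-block stride, by contrast, spreads across all sets:
dense = [0x10000 + k * B for k in range(S)]
print(len({conventional_set(a) for a in dense}))    # -> 256 sets used
```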
2.1 Conflict-resistant cache placement functions
The objective of a conflict-resistant placement function is to avoid the repetitive conflicts defined
by Condition 1. This is analogous to finding a suitable hash function for a hash table. Perhaps the
most well-known alternative to conventional cache indexing is the skewed associative cache [21].
This involves two or more indexing functions derived by XORing two -bit fields from an address
to produce an -bit cache index. In the field of interleaved memories it is well known that bank
conflicts can be reduced by using bank selection functions other than the simple modulo-power-
of-two. Lawrie and Vora proposed a scheme using prime-modulus functions [16], Harper and
Jump [11], and Sohi [24] proposed skewing functions. The use of XOR functions was proposed by
Frailong et al. [5], and pseudo-random functions were proposed by Raghavan & Hayes [17] and
Rau et al. [18], [19]. These schemes each yield a more or less uniform distribution of requests to
banks, with varying degrees of theoretical predictability and implementation cost. In principle each
of these schemes could be used to construct a conflict-resistant cache by using them as the indexing
function. However, when considering conflict resistance in cache architectures two factors are
critical. Firstly, the chosen placement function must have a logically simple implementation, and
secondly we would like to be able to guarantee good behavior on all regular address patterns - even
those that are pathological under a conventional placement function. In both respects the
irreducible polynomial modulus (I-Poly) permutation function proposed by Rau [19] is an ideal
candidate.
The I-Poly scheme effectively defines families of pseudo-random hash functions which are
implemented using exclusive-OR gates. They also have some useful behavioral characteristics
which we discuss later. In [10] the miss ratio of the I-Poly indexing scheme is evaluated
extensively in the context of cache indexing, and is compared with a number of different cache
organizations including; direct-mapped, set-associative, victim, hash-rehash, column-associative
and skewed-associative. The results of that study suggest that the I-Poly function is particularly
robust. For example, on Spec95 an 8Kb two-way associative cache has an average miss ratio of
13.84%. An I-poly cache of identical capacity and associativity reduces that miss ratio to 7.14%,
which compares well against a fully-associative cache which has a miss ratio of 6.80%.
2.1.1 Polynomial-modulus cache placement
To define the most general form of conflict resistant cache indexing scheme let the placement of a block of data from an n-bit address A, in each of the w ways of a w-way associative cache with 2^m sets, be determined by the set of indices I = {i_1, ..., i_w}. In I-Poly indexing each i_j is given by the function i_j = H_j(A), for j = 1, ..., w. In this scheme each H_j is defined by a member of a set of w, possibly distinct, integer values {P_1, ..., P_w}, in the range 2^m <= P_j < 2^{m+1}. If we choose to use distinct values for each P_j the cache will be skewed, though skewing is not an obligatory feature of this scheme. Each function H_j is defined as follows. Consider the integers A and P_j in terms of their binary representations. For example, A = a_{n-1} a_{n-2} ... a_0, and similarly for P_j. We consider A to be a polynomial A(x) = a_{n-1} x^{n-1} + a_{n-2} x^{n-2} + ... + a_1 x^1 + a_0 x^0 defined over the field GF(2), and similarly P_j(x). For best performance P_j(x) will be an irreducible polynomial, though it need not be so.
The value of H_j(A), also defined over GF(2), is given by the polynomial R(x) of order less than m such that
A(x) = Q(x) P_j(x) + R(x)
Effectively H_j(A) is the polynomial modulus function R(x) = A(x) mod P_j(x), ignoring the higher order terms of A(x) that are absorbed into the quotient Q(x). Each bit of the index can be computed using an XOR tree, if P_j is constant, or an AND-XOR tree if one requires a configurable index function. For best performance the number of address bits n fed to the hash should be as close as possible to the full address width, though it may be as small as m + 1 for this scheme to be distinct from conventional block placement.
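A bit-level sketch of H(A) follows. The polynomial P = 0x11D (x^8 + x^4 + x^3 + x^2 + 1) is an illustrative irreducible choice for m = 8 index bits, not necessarily the one used in the paper, and the 5-bit block offset assumes 32-byte blocks.

```python
def poly_mod(a, p):
    """Remainder R(x) of a(x) divided by p(x), coefficients in GF(2)."""
    deg_p = p.bit_length() - 1
    while a.bit_length() - 1 >= deg_p:
        a ^= p << (a.bit_length() - 1 - deg_p)   # cancel the leading term
    return a

m, P = 8, 0x11D   # 2^8 <= P < 2^9, as required for an 8-bit index

def ipoly_index(addr, block_bits=5):
    # Hash the block address, discarding the block-offset bits.
    return poly_mod(addr >> block_bits, P)

# H is XOR-linear -- H(a ^ b) = H(a) ^ H(b) -- which is what makes a
# pure XOR-tree hardware implementation possible.
a, b = 0x12345, 0x6789A
assert poly_mod(a ^ b, P) == poly_mod(a, P) ^ poly_mod(b, P)
```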
2.1.2 Polynomial placement characteristics
The class of polynomial hash functions described above have been studied previously in the
context of stride-insensitive interleaved memories (see [18] and [19]). These functions have certain
provable characteristics which are of significant value in the context of cache indices. For example,
all strides of the form 2^k produce address sequences that are free from conflicts - i.e. they do not satisfy Condition 1 set out in section 2. This is a fundamental result for polynomial indexing; if the addresses of a 2^k-strided sequence are partitioned into S-long sub-sequences, where S is the number of cache blocks, we can guarantee that there are no cache conflicts within each sub-sequence. Any conflicts between sub-sequences are due to capacity problems and can only be solved
by larger caches or tiling of the iteration space.
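This conflict-freedom can be checked directly. The sketch below uses the same illustrative P = 0x11D and poly_mod helper as before (repeated here so the fragment is self-contained) and enumerates an aligned 2^m-long sub-sequence of block addresses for each power-of-two stride.

```python
def poly_mod(a, p):
    """a(x) mod p(x) over GF(2)."""
    deg_p = p.bit_length() - 1
    while a.bit_length() - 1 >= deg_p:
        a ^= p << (a.bit_length() - 1 - deg_p)
    return a

m, P = 8, 0x11D                    # 2^8 sets, illustrative irreducible P

for k in range(12):                # block-address strides 1, 2, ..., 2^11
    sets = {poly_mod(j << k, P) for j in range(1 << m)}
    assert len(sets) == 1 << m     # the sub-sequence is conflict-free

# Conventional modulo indexing degenerates for the same strides:
assert len({(j << 8) % (1 << m) for j in range(1 << m)}) == 1
```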
The stride-insensitivity of the I-Poly index function can be seen in figure 1 which shows
the behavior of four cache configurations, identical except in their indexing functions. All have the same capacity and two-way associativity. They were each driven from an address trace representing repeated accesses to a vector of 64 8-byte elements in which the elements were separated by stride s. With no conflicts such a sequence would use at most half of the 128 sets in the cache. The experiment was repeated across a wide range of strides to determine how many strides exhibited bad behavior for each indexing function.
compares three different indexing schemes; conventional modulo power-of-2 (labelled a2), the
function proposed in [21] for the skewed-associative cache (a2-Hx-Sk) and two I-Poly
functions. The I-Poly scheme was simulated both with and without skewed index functions (a2-Hp
and a2-Hp-Sk respectively). It is apparent that the I-Poly scheme with skewing displays a
remarkable ability to tolerate all strides equally and without any pathological behavior.
For all schemes the majority of strides yield low miss ratios. However, both the
conventional and the skewed XOR functions display pathological behavior (miss ratio > 50%) on
more than 6% of all strides.
3 Implementation Issues
The logic of the polynomial modulus operation in GF(2) defines a class of hash functions which
compute the cache placement of an address by combining subsets of the address bits using XOR
gates. This means that, for example, bit 0 of the cache index may be computed as the exclusive-OR
of bits 0, 11, 14, and 19 of the original address. The choice of polynomial determines which
bits are included in each set. The implementation of such a function for a cache with an 8-bit index
would require just eight XOR gates with fan-in of 3 or 4.
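The XOR trees follow mechanically from the polynomial: address bit j contributes to exactly those index bits that are set in x^j mod P(x). A sketch, again with the illustrative P = 0x11D and a 20-bit address; the resulting fan-ins depend on the polynomial chosen, so they need not match the fan-in of 3 or 4 quoted above.

```python
m, P = 8, 0x11D
taps = [[] for _ in range(m)]  # taps[b] = address bits XORed into index bit b
r = 1                          # r = x^j mod P(x), starting with j = 0
for j in range(20):            # address bits 0..19
    for b in range(m):
        if r >> b & 1:
            taps[b].append(j)
    r <<= 1                    # multiply by x ...
    if r >> m & 1:
        r ^= P                 # ... and reduce when the degree reaches m
print(taps[0])  # -> [0, 8, 12, 13, 14, 18]: a 6-input XOR for index bit 0
```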
Whilst this appears remarkably simple, there is more to consider than just the placement
function. Firstly, the function itself uses address bits beyond the normal limit imposed by typical
minimum page size restriction. Secondly, the use of pseudo-random placement in a multi-level
memory hierarchy has implications for the maintenance of Inclusion. Here we briefly examine
these two issues and show how the virtual-real two-level cache hierarchy proposed by Wang et al.
[25] provides a clean solution to both problems.
3.1 Overcoming page size restrictions
Typical operating systems permit pages to be as small as 8 or 16 Kbytes. In a conventional cache
this places a limit on the first-level cache size if address translation is to proceed in parallel with
tag lookup. Similarly, any novel cache indexing scheme which uses address bits beyond the
minimum page size boundary cannot use a virtually-indexed physically-tagged cache.

Figure 1. Frequency distribution of miss ratios for conventional and pseudo-random
indexing schemes. Columns represent I-Poly indexing and lines represent
conventional and skewed-associative indexing.

There are four alternative options to consider:
1. Perform address translation prior to tag lookup (i.e. physical indices)
2. Enable I-Poly indexing only when data pages are known to be large enough
3. Use a virtually-indexed virtually-tagged level-1 cache
4. Index conventionally, but use a polynomial rehash on a level-1 miss.
Option 1 is attractive if an existing processor pipeline performs address translation at least one
stage prior to tag lookup. This might be the case in a processor which is able to hide memory
latency through dynamic execution or multi-threading, for example. However, in many systems,
performing address translation prior to tag lookup will either extend the critical path through a
critical pipeline stage or introduce an extra cycle of untolerated latency via an additional pipeline
stage.
Option 2 could be attractive in high performance systems where large data sets and large
physical memories are the norm. In such circumstances processes may typically have data pages
of 256Kbytes or more. The O/S should be able to track the page sizes of segments currently in use
by a process (and its kernel) and enable polynomial cache indexing at the first-level cache if all
segments' page sizes are above a certain threshold. This will provide more unmapped bits to the
hash function when possible, but revert to conventional indexing when this is not possible.
For example, if the threshold is 256Kbytes and the cache is 8Kbytes two-way associative,
then we may implement a polynomial function hashing 13 unmapped physical address bits to 7
cache index bits. This will be sufficient to produce good conflict-free behavior. Provided the
level-1 cache is flushed when the indexing function is changed, there is no reason why the
indexing function needs to remain constant.
The third option is not currently popular, primarily because of potential difficulties with
aliases in the virtual address space as well as the difficulty of shooting down a level-1 virtual cache
line when a physically-addressed invalidation operation is received from another processor. The
two-level virtual-real cache hierarchy proposed by Wang et al. in [25] provides a way of
implementing a virtually-tagged L1 cache, thus exposing more address bits to the indexing
function without incurring address translation delays.
The fourth option would be appropriate for a physically-tagged direct-mapped cache. It is
similar in principle to the hash-rehash [1] and the column-associative caches [2]. The idea is to
make an initial probe with a conventional integer-modulus indexing function, using only
unmapped address bits. If this probe does not hit we probe again, but at a different index. By the
time the second probe begins, the full physical address is available and can be used in a polynomial
hashing function to compute the index of the second probe.
Addresses which can be co-resident under a conventional index function will not collide on
the first probe. Conversely, sets of addresses which do collide under a conventional indexing
function collide under the second probe with negligible probability, due to the pseudo-random
distribution of the polynomial hashing function. This provides a kind of pseudo-full associativity
in what is effectively a direct-mapped cache. The hit time of such a cache on the first probe would
be as good as any direct-mapped physically-indexed cache. However, the average hit time is
lengthened slightly due the occasional need for a second probe. We have investigated this style of
cache and devised a scheme for swapping cache lines between their "conventional" modulo-
indexed location and their "alternative" polynomially-indexed location. This leads to a typical
probability of around 90% that a hit is detected at the first probe. However, the slight increase in
average hit time due to occasional double probes means that a column-associative cache is only
attractive when miss penalties are comparatively large. Space restrictions prevent further coverage
of this option.
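A minimal sketch of this two-probe organization follows; the class, its fill policy, and the stand-in hash are illustrative, not taken from the paper. The first probe uses conventional modulo indexing, and only on a first-probe miss is the pseudo-randomly hashed second index consulted:

```python
class TwoProbeCache:
    """Direct-mapped cache with a conventional first probe and a
    pseudo-random (here: toy XOR-fold) second probe."""

    def __init__(self, num_sets=128, line_bytes=32):
        self.num_sets = num_sets
        self.line_bytes = line_bytes
        self.tags = [None] * num_sets

    def _i1(self, line):                  # conventional modulo index
        return line % self.num_sets

    def _i2(self, line):                  # stand-in for the polynomial hash
        return (line ^ (line >> 7) ^ (line >> 13)) % self.num_sets

    def access(self, addr):
        line = addr // self.line_bytes
        i1, i2 = self._i1(line), self._i2(line)
        if self.tags[i1] == line:
            return "hit-first"
        if self.tags[i2] == line:
            return "hit-second"
        # Fill policy: prefer the conventional slot, fall back to the hashed one.
        if self.tags[i1] is None:
            self.tags[i1] = line
        else:
            self.tags[i2] = line
        return "miss"
```

Two lines that conflict under modulo indexing (for example, lines 0 and 128 with 128 sets) can co-reside because the loser of the first-probe conflict lands at its hashed index instead.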
3.2 Requirements for Inclusion
Coherent cache architectures normally require that the property of Inclusion is maintained between
all levels of the memory hierarchy. Thus, if A_i represents the set of data present in the cache at level
i, the property of Inclusion demands that A_i is a subset of A_{i+1}, for 1 <= i < n in an n-level memory
hierarchy. Whenever this property is maintained a snooping bus protocol need only compare
addresses of global write operations with the tags of the lowest level of private cache.
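The Inclusion property is straightforward to state operationally, as this small sketch shows:

```python
def inclusion_holds(levels):
    """levels[0] is the set of lines present at L1, levels[1] at L2, and so on.
    Inclusion requires each level's contents to be a subset of the next level's."""
    return all(levels[i] <= levels[i + 1] for i in range(len(levels) - 1))
```

When this predicate holds for every cache in the hierarchy, a snoop that misses in the last private level is guaranteed to miss in all the levels above it.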
A line at index j in the L2 cache is replaced when a line at index i in the L1 cache is
replaced with data at address a, if a is not already present in L2. If line j contains valid data we
must be sure that after replacement its data is not still present in L1. In a conventionally-indexed
cache this is not an issue because it is relatively easy to guarantee that the data at L2 index j is
always located at the corresponding L1 index i, thus ensuring that L1 replacement will automatically
preserve Inclusion. In a pseudo-randomly indexed cache there is in general no way to make this guarantee.
Instead, the cache replacement protocols must explicitly enforce Inclusion by invalidating data at
L1 when required. This is guaranteed by the two-level virtual-real cache, but leads to the creation
of holes at the upper level of the cache, in turn leading to the possibility of additional cache misses.
3.3 Performance implication of holes
In a two-level virtual-real cache hierarchy there are three causes of holes at L1; these are:
1. Replacements at L2
2. Removal of virtual aliases at L1
3. Invalidations due to external coherency actions
It is probable that the frequency of item 2 occurring will be low; for this kind of hole to
cause a performance problem a process must issue interleaved accesses to two segments at distinct
virtual addresses which map to the same physical address. We preserve a consistent copy of the
data at these virtual addresses by ensuring that at most one such alias may be present in L1 at any
instant. This does not prevent the physical copy from residing undisturbed at L2; it simply
increases the traffic between L1 and L2 when accesses to virtual aliases are interleaved.
Invalidations from external coherency actions occur regardless of the cache architecture so
we do not consider them further in this analysis. The events that are of primary importance are
invalidations at L1 due to the maintenance of Inclusion between L1 and L2. It is important to
quantify their frequency and the effect they have on hit ratio at L1.
Recall that the index function at L2 is based on a physical address whereas the index
function at L1 uses a virtual address. Also, the number of bits included in the index function and
the function itself will be different in both cases. As these functions are pseudo-random there will
be no correlation between the indices at L1 and L2 for each particular datum. Consequently, when
a line is replaced at L2 the data being replaced will also exist in L1 with probability

P1 = 2^(n1) / 2^(n2)   (1)

where n1 and n2 are the number of bits in the indices at L1 and L2 respectively.
If the data being replaced at L2 does exist in L1, it is possible that the L1 index is
coincidentally equal to the index of the data being brought into L1 (as the L2 replacement is
actually caused by an L1 replacement). If this occurs a hole will not be created after all. Thus the
probability that the elimination of a line at L1 to preserve inclusion will result in a hole is given by

P2 = 1 - 2^(-n1)   (2)

The net probability that a miss at L2 will cause a hole to appear at L1 is P_hole, given by the product
of P1 and P2, thus:

P_hole = P1 * P2 = 2^(n1 - n2) * (1 - 2^(-n1))

When the size ratio between L1 and L2 is large the value of P_hole is small. For example, an 8KB L1
cache and a 256KB L2 cache of equal associativity and line size yield P_hole of approximately 1/32. Slightly more than 3% of L2
misses will result in the creation of a hole.
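Under the stated assumption of uncorrelated pseudo-random indices at the two levels, the hole probability is easy to check numerically; with a 7-bit L1 index and a 12-bit L2 index (the 8KB/256KB example) it comes out at just over 3%:

```python
def hole_probability(n1, n2):
    # P1: a line replaced at L2 is also present in L1 (index-size ratio).
    p1 = 2 ** n1 / 2 ** n2
    # P2: the victim's L1 index does not coincide with the incoming line's index.
    p2 = 1 - 2 ** (-n1)
    return p1 * p2
```

For n1 = 7 and n2 = 12 this evaluates to roughly 0.031, matching the "slightly more than 3%" figure quoted in the text.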
The expected increase in compulsory miss ratio at L1 can be modelled by the product of the
hole-creation probability and the L2 miss ratio. When compared with simulated miss ratios we find that this
approximation is accurate for L2:L1 cache size ratios of 16 or above. For instance simulations of
the whole Spec95 suite with an 8Kb two-way skewed I-Poly L1 cache backed by a 1 Mb
conventionally-indexed two-way set-associative L2 cache showed that the effect of holes on L1
miss ratio is negligible. The percentage of L2 misses that created a hole averaged less than 0.1%
and was never greater than 1.2% for any program.
The two-level virtual-real cache described in [25] implements a protocol between the L1
and L2 cache which effectively provides a mechanism for ensuring that inclusion is maintained,
that coherence can be maintained without reverse address translation, and in our case that holes can
be created at level-1 when required by the inclusion property.
3.4 Effect of polynomial mapping on critical path
A cache memory access in a conventional organization normally computes its effective address by
adding two registers or a register plus a displacement. I-poly indexing implies additional circuitry
to compute the index from the effective address. This circuitry consists of several XOR gates that
operate in parallel and therefore the total delay is just the delay of one gate. Each XOR gate has a
number of inputs that depends on the particular polynomial being used. For the experiments
reported in this paper the number of inputs is never higher than 5. Therefore, the delay due to the
gates will be low compared with the delay of a complete pipeline stage.
Depending on the particular design, it may happen that this additional delay can be hidden.
For instance, if the memory access does not begin until the complete effective address has been
computed, the XOR delay can be hidden since the address is computed from right to left and the
gates use only the least-significant bits of the address (19 in the experiments reported in this
paper). Notice that this is true even for carry look-ahead adders (CLA). A CLA with look-ahead
blocks of size b bits computes first the b least-significant bits, which are available after a delay of
approximately one look-ahead block. After a three-block delay the 2b least-significant bits are
available. In general, the b*i least-significant bits have a delay of approximately 2i-1 blocks. For
instance, for 64-bit addresses and a binary CLA, the 19 bits required by the I-poly functions used
in the experiments of this paper have a delay of about 9 blocks whereas the whole address
computation requires 11 block-delays. Once the 19 least-significant bits have been computed, it is
reasonable to assume that the XOR gate delay is shorter than the time required to compute the
remaining bits.
However, since the cache access time usually determines the pipeline cycle, the fact that
the least-significant bits are available early is sometimes exploited by designers in order to shorten
the latency of memory instructions by overlapping part of the cache access (which requires only
the least-significant bits) with the computation of the most significant address bits. This approach
results in a pipeline with a structure similar to that shown in figure 2. Notice that this organization
requires a pipelined memory (in the example we have assumed a two-stage pipelined memory). In
this case, the polynomial mapping may cause some additional delay to the critical path. We will
show later that even if the additional delay induces a one cycle penalty in the cache access time,
the polynomial mapping provides a significant overall performance improvement. An additional
delay in a load instruction may have a negative impact on the performance of the processor because
the issue of dependent instructions may be delayed accordingly. On the other hand, this delay has
a negligible effect, if any, on store instructions since these instructions are issued to memory when
they are committed in order to have precise exceptions, and therefore the XOR functions can
usually be performed while the instruction is waiting in the store buffer. Besides, only load
instructions may depend on stores but these dependencies are resolved in current microprocessors
(e.g. PA8000 [12]) by forwarding. This technique compares the effective address of load and store
instructions in order to check a possible match but the cache index, which involves the use of the
XOR gates, is not required by this operation.
Memory address prediction can also be used to avoid the penalty introduced by the XOR
delay when it lengthens the critical path. The effective address of memory references has been
shown to be highly predictable. For instance, in [9] it has been shown that the address of about 75%
of the dynamically executed memory instructions of the Spec95 suite can be predicted with a
simple scheme based on a table that keeps track of the last address seen by a given instruction and
its last stride. We propose to use a similar scheme to predict early in the pipeline the line that is
likely to be accessed by a given load instruction. In particular, the scheme works as follows.
The processor incorporates a table indexed by the instruction address. Each entry stores the
last address and the predicted stride for some recently executed load instruction.

Figure 2: A pipeline that overlaps part of the address computation with the memory access.

In the fetch stage, this table is accessed with the program counter. In the decode stage, the predicted address is
computed and the XOR functions are performed to compute the predicted cache line. Notice that
this can be done in just one cycle since the XOR can be performed in parallel with the computation
of the most-significant bits as discussed above, and the time to perform an integer addition is not
higher than one cycle in the vast majority of processors. When the instruction is subsequently
issued to the memory unit it uses the predicted line number to access the cache in parallel with the
actual address and line computation. If the predicted line turns out to be incorrect, the cache access
is repeated again with the actual address. Otherwise, the data provided by the speculative access
can be loaded into the destination register.
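The predictor described above can be sketched as a small table of (last address, stride, 2-bit confidence counter) entries; the table size and update policy below are illustrative, and a prediction is only issued when the most-significant bit of the counter is set, as in the text:

```python
class StridePredictor:
    """Last-address-plus-stride predictor with a 2-bit saturating
    confidence counter per (tagless, direct-mapped) table entry."""

    def __init__(self, entries=1024):
        self.last = [0] * entries
        self.stride = [0] * entries
        self.conf = [0] * entries      # 2-bit saturating counter (0..3)

    def predict(self, pc):
        i = pc % len(self.last)
        if self.conf[i] >= 2:          # most-significant bit of the counter set
            return self.last[i] + self.stride[i]
        return None                    # no confident prediction

    def update(self, pc, addr):
        i = pc % len(self.last)
        new_stride = addr - self.last[i]
        if new_stride == self.stride[i]:
            self.conf[i] = min(3, self.conf[i] + 1)
        else:
            self.conf[i] = max(0, self.conf[i] - 1)
            if self.conf[i] < 2:       # replace the stride once confidence is low
                self.stride[i] = new_stride
        self.last[i] = addr            # address field updated on every reference
```

After a load at one PC touches addresses 0, 8, 16, 24, the predictor confidently guesses 32 for the next access, and the predicted cache line can be computed from it a stage early.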
The scheme to predict the effective address early in the pipeline has been previously used
for other purposes. In [7], a Load Target Buffer is presented, which predicts the effective address
by adding a stride to the previous address. In [3] and [4] a Fast Address Calculation is performed by
computing load addresses early in the pipeline without using history information. In those
proposals the memory access is overlapped with the non-speculative effective address calculation
in order to reduce the cache access time, though none of them execute speculatively the subsequent
instructions that depend on the predicted load. Other authors have proposed the use of a memory
address prediction scheme in order to execute memory instructions speculatively, as well as the
instructions dependent on them. In the case of a miss-speculation, a recovery mechanism similar
to that used by branch prediction schemes is utilized to squash the miss-speculated instructions.
The most noteworthy papers on this topic are [8], [9] and [20].
4 Experimental Evaluation
In order to verify the impact of polynomial mapping on a realistic microprocessor architecture we
have developed a parametric simulator of an out-of-order execution processor. A four-way
superscalar processor has been simulated. Table 1 shows the different functional units and their
latency considered for this experiment. The size of the reorder buffer is entries. There are two
separate physical register files (FP and Integer), each one having 64 physical registers. The
processor has a lockup-free data cache [14] that allows 8 outstanding misses to different cache
lines. The cache size is either 8Kb or 16 Kb and is 2-way set-associative with 32-byte line size. The
cache is write-through and no-write-allocate. The hit time of the cache is two cycles and the miss
penalty is 20 cycles. An infinite L2 cache is assumed and a 64-bit data bus between L1 and L2 is
considered (i.e., a line transaction occupies the bus during four cycles). There are two memory
ports and dependencies thorough memory are speculated using a mechanism similar to the ARB
of the Multiscalar [6] and PA8000 [12]. A branch history table with 2K entries and 2-bit saturating
counters is used for branch prediction.
The memory address prediction scheme has been implemented by means of a direct-mapped
table with 1K entries and without tags in order to reduce cost at the expense of more
interference in the table. Each entry contains the last effective address of the last load instruction
that used this entry and the last observed stride. In addition, each entry contains a 2-bit saturating
counter that assigns confidence to the prediction. Only when the most-significant bit of the counter
is set is the prediction considered to be correct. The address field is updated for each new reference
regardless of the prediction, whereas the stride field is only updated when the counter goes below the confidence threshold.
Table 1: Functional units and instruction latency.

Functional Unit     Latency   Repeat rate
multiply
Effective Address   1         1
Table 2 shows the IPC (instructions committed per cycle) and the miss ratio for different
configurations. The baseline configuration is an 8 Kb cache with conventional indexing and no address
prediction (4th column). The average IPC of this configuration is 1.30 and the average miss ratio
(6th column) is 16.53 (see footnote 1). When I-poly indexing is used the average miss ratio goes down to 9.68 (8th
column). If the XOR gates are not in the critical path this implies an increase in the IPC up to 1.35
(7th column). On the other hand, if the XOR gates are in the critical path and we assume a one cycle
penalty in the cache access time, the resulting IPC is 1.32 (9th column). However, the use of the
memory address prediction scheme when the XOR gates are in the critical path (10th column)
provides about the same performance as a cache with the XOR gates not in the critical path (7th
column). Thus, the main conclusion of this study is that the memory address prediction scheme can
offset the penalty introduced by the additional delay of the XOR gates when they are in the critical
path. Finally, table 2 also shows the performance of a 16 Kb 2-way set-associative cache (2nd and
3rd columns). Notice that the addition of I-poly indexing to an 8Kb cache yields over 60% of the
IPC increase that can be obtained by doubling the cache size.
The main benefit of polynomial mapping is to reduce the conflict misses. However, in the Spec95
benchmark suite there are many benchmarks that exhibit a relatively low conflict miss
ratio. In fact the Spec95 conflict miss ratio of a 2-way associative cache is less than 4% for all
programs except tomcatv, swim and wave5. If we focus on those three benchmarks with the highest
conflict miss ratios we can observe the ability of polynomial mapping to reduce the miss ratio and
significantly boost the performance of these problem cases. This is shown in table 3.
In this case we can see that the polynomial mapping provides a significant improvement in
performance even if the XOR gates are in the critical path and the memory address prediction
scheme is not used (27% increase in IPC). When memory address prediction is used the IPC is 33%
higher than that of a conventional cache of the same capacity and 16% higher than that of a
conventional cache with twice the capacity. (Footnote 1: for each benchmark we considered 100M
instructions after skipping the first 2000M.) Notice that the polynomial mapping scheme with
prediction is even better than the organization with the XOR gates not in the critical path but
without prediction. This is due to the fact that the memory address prediction scheme reduces by
one cycle the effective cache hit time when the predictions are correct, since the address
computation is overlapped with the cache access (the computed address is used to verify that the
prediction was correct). However, the main benefits observed in table 3 come from the reduction
in conflict misses. To isolate the different effects we have also simulated an organization with the
(Columns, left to right: benchmark; conventional indexing - 16Kb IPC and miss ratio, then 8Kb IPC
without prediction, IPC with prediction, and miss ratio; I-poly indexing - IPC and miss ratio with
the XOR gates not in the critical path, then IPC without and with prediction with the XOR gates in
the critical path.)
go 1.00 5.45 0.87 0.88 10.87 0.87 10.60 0.83 0.84
compress 1.13 12.96 1.12 1.13 13.63 1.11 14.17 1.07 1.10
li 1.40 4.72 1.30 1.32 8.01 1.33 7.10 1.26 1.31
ijpeg 1.31 0.94 1.28 1.28 3.72 1.29 2.17 1.28 1.30
perl 1.45 4.52 1.26 1.27 9.47 1.24 10.26 1.19 1.21
vortex 1.39 4.97 1.27 1.28 8.37 1.30 7.87 1.25 1.27
su2cor 1.28 13.74 1.24 1.26 14.69 1.24 14.66 1.21 1.25
hydro2d 1.14 15.40 1.13 1.15 17.23 1.13 17.22 1.11 1.15
applu 1.63 5.54 1.61 1.63 6.16 1.57 6.84 1.55 1.59
mgrid 1.51 4.91 1.50 1.53 5.05 1.50 5.31 1.46 1.52
turb3d 1.85 4.67 1.80 1.82 6.05 1.81 5.38 1.78 1.82
apsi 1.13 10.03 1.08 1.09 15.19 1.08 13.36 1.07 1.09
fpppp 2.14 1.09 2.00 2.00 2.66 1.98 2.47 1.93 1.94
wave5 1.37 27.72 1.26 1.28 42.76 1.51 14.67 1.48 1.54
Average 1.38 10.47 1.30 1.31 16.53 1.35 9.68 1.32 1.35
Table 2: IPC and load miss ratio for different cache configurations.
memory address prediction scheme and conventional indexing for an 8Kb cache (column 5). If we
compare this IPC with that in column 4 of table 3, we see that the benefits of the memory address
prediction scheme due to the reduction of the hit time are almost negligible. This confirms that the
improvement observed in the I-poly indexing scheme with address prediction derives from the
reduction in conflict misses.
5 Conclusions
In this paper we have described a pseudo-random indexing scheme which is robust enough to
eliminate repetitive cache conflicts. We have discussed the main implementation issues that arise
from the use of novel indexing schemes. For example, I-poly indexing uses more address bits than
a conventional cache to compute the cache index. Also, the use of different indexing functions at
L1 and L2 results in the occasional creation of a hole at L1. Both of these problems can be solved
using a two-level virtual-real cache hierarchy. Finally, we have proposed a memory address
prediction scheme to avoid the penalty due to the potential delay in the critical path introduced by
the I-poly indexing function.
Detailed simulations of an out-of-order superscalar processor have demonstrated that programs
with significant numbers of conflict misses in a conventional 8Kb 2-way set-associative cache
(Columns as in Table 2: benchmark; conventional indexing - 16Kb IPC and miss ratio, then 8Kb IPC
without and with prediction and miss ratio; polynomial mapping - IPC and miss ratio with the XOR
gates not in the critical path, then IPC without and with prediction with the XOR gates in the
critical path.)
wave5 1.37 27.72 1.26 1.28 42.76 1.51 14.67 1.48 1.54
Average 1.28 30.80 1.12 1.13 54.61 1.46 14.40 1.42 1.49
Table 3: IPC and load miss ratio for different cache configurations for selected bad programs.
perceive IPC improvements of 33% (with address prediction) or 27% (without address prediction).
This is up to 16% higher than the IPC improvements obtained simply by doubling the cache
capacity. Furthermore, programs which do not experience significant conflict misses see a less than
1% reduction in IPC when I-poly indexing is used in conjunction with address prediction.
An interesting by-product of I-poly indexing is an increase in the predictability of cache
behaviour. In our experiments we see that I-poly indexing reduces the standard deviation of miss
ratios across Spec95 from 18.49 to 5.16. If conflict misses are eliminated, the miss ratio depends
solely on compulsory and capacity misses, which in general are easier to predict and control.
Systems which incorporate an I-poly cache could be useful in the real-time domain, or in cache-based
scientific computing where iteration-space tiling often introduces extra cache conflicts.
--R
"Cache Performance of Operating Systems and Multiprogramming"
"Column-Associative Caches: A Technique for Reducing the Miss Rate of Direct-Mapped Caches"
"Streamling Data Cache Access with Fast Address Calculation"
"Zero-Cycle Loads: Microarchitecture Support for Reducing Load Latency"
"XOR-Schemes: A Flexible Data Organization in Parallel Memories"
"ARB: A Hardware Mechanims for Dynamic Reordering of Memory References"
"Hardware Support fot Hiding Cache Latency"
"Memory Address Prediction for Data Speculation"
"Speculative Execution via Address Prediction and Data Prefetching"
"Eliminating Cache Conflict Misses Through XOR-based Placement Functions"
"Vector Access Performance in Parallel Memories Using a Skewed Storage Scheme"
"Advanced Performance Features of the 64-bit PA-8000"
"Improving Direct-Mapped Cache Performance by the Addition of a Small Fully-Associative Cache and Prefetch Buffers"
"Lockup-free instruction fetch/prefetch cache organization"
"The Cache Performance and Optimization of Blocked Algorithms"
"The Prime Memory System for Array Access"
"On Randomly Interleaved Memories"
"The Cydra 5 Stride-Insensitive Memory System"
"Pseudo-Randomly Interleaved Memories"
"The Performance Potential of Data Dependence Speculation & Collapsing"
"A Case for Two-way Skewed-associative Caches"
"Skewed-associative Caches"
"Cache Memories"
"Logical Data Skewing Schemes for Interleaved Memories in Vector Processors"
"Organization and Performance of a Two-Level Virtual-Real Cache Hierarchy"
--TR
Vector access performance in parallel memories using skewed storage scheme
Cache performance of operating system and multiprogramming workloads
Organization and performance of a two-level virtual-real cache hierarchy
The cache performance and optimizations of blocked algorithms
On randomly interleaved memories
Pseudo-randomly interleaved memory
A case for two-way skewed-associative caches
Column-associative caches
Streamlining data cache access with fast address calculation
Zero-cycle loads
The performance potential of data dependence speculation & collapsing
Eliminating cache conflict misses through XOR-based placement functions
Speculative execution via address prediction and data prefetching
Cache Memories
Skewed-associative Caches
Memory Address Prediction for Data Speculation
Advanced performance features of the 64-bit PA-8000
Lockup-free instruction fetch/prefetch cache organization
--CTR
Hans Vandierendonck , Koen De Bosschere, Highly accurate and efficient evaluation of randomising set index functions, Journal of Systems Architecture: the EUROMICRO Journal, v.48 n.13-15, p.429-452, May
Steve Carr , Soner nder, A case for a working-set-based memory hierarchy, Proceedings of the 2nd conference on Computing frontiers, May 04-06, 2005, Ischia, Italy
Mazen Kharbutli , Yan Solihin , Jaejin Lee, Eliminating Conflict Misses Using Prime Number-Based Cache Indexing, IEEE Transactions on Computers, v.54 n.5, p.573-586, May 2005
Nigel Topham , Antonio Gonzlez, Randomized Cache Placement for Eliminating Conflicts, IEEE Transactions on Computers, v.48 n.2, p.185-192, February 1999
Rui Min , Yiming Hu, Improving Performance of Large Physically Indexed Caches by Decoupling Memory Addresses from Cache Addresses, IEEE Transactions on Computers, v.50 n.11, p.1191-1201, November 2001
Jaume Abella , Antonio Gonzlez, Heterogeneous way-size cache, Proceedings of the 20th annual international conference on Supercomputing, June 28-July 01, 2006, Cairns, Queensland, Australia | multi-level memory hierarchies;cache architecture design;polynomial modulus functions;conflict-avoiding cache performance;high performance architectures;cache storage;data access cost minimization;main memory;conflict miss ratios |
266810 | A framework for balancing control flow and predication. | Predicated execution is a promising architectural feature for exploiting instruction-level parallelism in the presence of control flow. Compiling for predicated execution involves converting program control flow into conditional, or predicated, instructions. This process is known as if-conversion. In order to effectively apply if-conversion, one must address two major issues: what should be if-converted and when the if-conversion should be applied. A compiler's use of predication as a representation is most effective when large amounts of code are if-converted and if-conversion is performed early in the compilation procedure. On the other hand the final code generated for a processor with predicated execution requires a delicate balance between control flow and predication to achieve efficient execution. The appropriate balance is tightly coupled with scheduling decisions and detailed processor characteristics. This paper presents an effective compilation framework that allows the compiler to maximize the benefits of predication as a compiler representation while delaying the final balancing of control flow and predication to schedule time. | Introduction
The performance of modern processors is becoming highly
dependent on the ability to execute multiple instructions per cy-
cle. In order to realize their performance potential, these processors
demand that increasing levels of instruction-level parallelism
(ILP) be exposed in programs. One of the major challenges to increasing
the available ILP is overcoming the limitations imposed
by branch instructions.
ILP is limited by branches for several reasons. First, branches
impose control dependences which often sequentialize the execution
of surrounding instructions. Second, the uncertainty of
branch outcomes forces compiler and hardware schedulers to
make conservative decisions. Branch prediction along with speculative
execution is generally employed to overcome these limitations
[1][2]. However, branch misprediction takes away a significant
portion of the potential performance gain. Third, traditional
techniques only facilitate exploiting ILP along a single trajectory
of control. The ability to concurrently execute instructions
from multiple trajectories offers the potential to increase ILP by
large amounts. Finally, branches often interfere with or complicate
aggressive compiler transformations, such as optimization
and scheduling.
Predication is a model in which instruction execution conditions
are not solely determined by branches. This characteristic
allows predication to form the basis for many techniques which
deal with branches effectively in both the compilation and execution
of codes. It provides benefits in a compiler as a representation
and in ILP processors as an architectural feature.
The predicated representation is a compiler N-address program
representation in which each instruction is guarded by a
boolean source operand whose value determines whether the instruction
is executed or nullified. This guarding boolean source
operand is referred to as the predicate. The values of predicate
registers can be manipulated by a predefined set of predicate
defining instructions. The use of predicates to guard instruction
execution can reduce or even completely eliminate the need for
branch control dependences. When all instructions that are control
dependent on a branch are predicated using the same condition
as the branch, that branch can legally be removed. The
process of replacing branches with appropriate predicate computations
and guards is known as if-conversion [3][4].
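The effect of if-conversion on a simple hammock can be sketched as follows. This is a minimal illustration on a hypothetical tuple-based mini-IR, not the representation used by any particular compiler; the opcode and predicate names are invented for the example.

```python
# Minimal sketch of if-conversion on a hypothetical tuple-based mini-IR.
# An instruction is (opcode, operands, guard); guard=None means always execute.

def if_convert(cond, then_block, else_block):
    """Replace a branch on `cond` with predicate defines and guarded code."""
    out = [("pred_def", ("p1", "p2", cond), None)]  # p1 = cond, p2 = not cond
    # Guard each instruction with the predicate of its original path; the
    # branch itself is no longer needed and is removed.
    out += [(op, args, "p1") for (op, args, _) in then_block]
    out += [(op, args, "p2") for (op, args, _) in else_block]
    return out

# if (r1 > 0) r2 = r1 + 1 else r2 = r1 - 1, as one branch-free block:
hb = if_convert("r1 > 0",
                [("add", ("r2", "r1", "1"), None)],
                [("sub", ("r2", "r1", "1"), None)])
print(hb)
```

The control dependence on the branch becomes a data dependence on the predicate defines, which is what allows the later phases to treat the merged paths as straight-line code.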
The predicated representation provides an efficient and useful
model for compiler optimization and scheduling. Through
the removal of branches, code can be transformed to contain few,
if any, control dependences. Complex control flow transformations
can instead be performed in the predication domain as traditional
straight-line code optimizations. In the same way, the
predicated representation allows scheduling among branches to
be performed in a domain without control dependences. The removal
of these control dependences increases scheduling scope
and affords new freedom to the scheduler [5].
Predicated execution is an architectural model which supports
direct execution of the predicated representation [6][7][8].
With respect to a conventional instruction set architecture, the
new features are an additional boolean source operand guarding
each instruction and a set of compare instructions used to
compute predicates. Predicated execution benefits directly from
the advantages of compilation using the predicated representa-
tion. In addition, the removal of branches yields performance
benefits in the executed code, the most notable of which is the
removal of branch misprediction penalties. In particular, the removal
of frequently mispredicted branches yields large performance
gains [9][10][11]. Predicated execution also provides an
efficient mechanism for a compiler to overlap the execution of
multiple control paths on the hardware. In this manner, processor
performance may be increased by exploiting ILP across multiple
program paths. Another, more subtle, benefit of predicated execution
is that it allows height reduction along a single program
path [12].
Supporting predicated execution introduces two compilation
issues: what should be if-converted and when in the compilation
procedure if-conversion should be applied. The first question to
address is what should be if-converted or, more specifically, what
branches should be removed via if-conversion. Traditionally, full
if-conversion has led to positive results for compiling numerical
applications [13]. However, for non-numeric applications, selective
if-conversion is essential to achieve performance gains [14].
If-conversion works by removing branches and combining multiple
paths of control into a single path of conditional instructions.
However, when two paths are overlapped, the resultant path can
exhibit increased constraints over those of the original paths. One
important constraint is resources. Paths which are combined together
must share processor resources. The compiler has the responsibility
of managing the available resources when making
if-conversion decisions so that an appropriate stopping point may be identified, beyond which further if-conversion would only increase the execution time of all the paths involved. As will be discussed
in the next section, the problem of deciding what to if-convert
is complicated by many factors, only one of which is resource
consumption.
The second question that must be addressed is when to apply
if-conversion in the compilation procedure. At the broadest level,
if-conversion may be applied early in the backend compilation
procedure or delayed to occur in conjunction with scheduling.
Applying if-conversion early enables the full use of the predicate
representation by the compiler to facilitate ILP optimizations and
scheduling. In addition, complex control flow transformations
may be recast in the data dependence domain to make them practical
and profitable. Examples of such transformations include
branch reordering, control height reduction [12], and branch combining
[15]. On the other hand, delaying if-conversion to as late
as possible makes answering the first question much more practi-
cal. Since many of the if-conversion decisions are tightly coupled
to the scheduler and its knowledge of the processor characteris-
tics, applying if-conversion at schedule time is the most natural
choice. Also, applying if-conversion during scheduling alleviates
the need to make the entire compiler backend cognizant of a predicated
representation.
An effective compiler strategy for predicated execution must
address the "what" and "when" questions of if-conversion. The
purpose of this paper is to present a flexible framework for if-conversion
in ILP compilers. The framework enables the compiler
to extract the full benefits of the predicate representation by
applying aggressive if-conversion early in the compilation pro-
cedure. A novel mechanism called partial reverse if-conversion
then operates at schedule time to facilitate balancing the amount
of control flow and predication present in the generated code,
based on the characteristics of the target processor.
The remainder of this paper is organized as follows. Section
2 details the compilation issues and challenges associated
with compiling for predicated execution. Section 3 introduces
our proposed compilation framework that facilitates taking full
advantage of the predicate representation as well as achieving an
efficient balance between branching and predication in the final
code. The essential component in this framework, partial reverse
if-conversion, is described in detail in Section 4. The effectiveness
of this framework in the context of our prototype compiler
for ILP processors is presented in Section 5. Finally, the paper
concludes in Section 6.
2 Compilation Challenges
Effective use of predicated execution provides a difficult challenge
for ILP compilers. Predication offers the potential for large
performance gains when it is efficiently utilized. However, an imbalance
of predication and control flow in the generated code can
lead to dramatic performance losses. The baseline compilation
support for predicated execution assumed in this paper is the hyperblock
framework. Hyperblocks and the issues associated with
forming quality hyperblocks are first summarized in this section.
The remainder of this section focuses on an approach of forming
hyperblocks early in the compilation procedure using heuristics.
This technique is useful because it exposes the predicate representation
throughout the backend optimization and scheduling pro-
cess. However, this approach has several inherent weaknesses.
Solving these weaknesses is the motivation for the framework
presented in this paper.
2.1 Background
The hyperblock is a structure created to facilitate optimization
and scheduling for predicated architectures [14]. A hyperblock is
a set of predicated basic blocks in which control may only enter
from the top, but may exit from one or more locations. Hyperblocks
are formed by applying tail duplication and if-conversion
over a set of carefully selected paths. Inclusion of a path into
a hyperblock is done by considering its profitability. The profitability
is determined by four pieces of information: resource
utilization, dependence height, hazard presence, and execution
frequency. One can gain insights into effective hyperblock formation
heuristics by understanding how each characteristic can
lead to performance loss.
The most common cause of poor hyperblock formation is excessive
resource consumption. The resources required by overlapping
the execution of multiple paths are the union of the resources
required for each individual path. Consider if-converting
a simple if-then-else statement. The resultant resource consumption
for the hyperblock will be the combination of the resources
required to separately execute the "then" and "else" paths. If
each path alone consumes almost all of the processor resources,
the resultant hyperblock would require substantially more resources
than the processor has available. As a result, hyperblock
formation results in a significant slowdown for both paths. Of
course, these calculations do not account for the benefits gained
by if-conversion. The important point is that resource over-subscription
has the potential to negate all benefits of hyperblock
formation or even degrade performance.
Poor hyperblocks may also be formed by not carefully considering
the dependence height of if-conversion. A hyperblock
which contains multiple paths will not complete until all of its
constituent paths have completed. Therefore, the overall height
of the hyperblock is the maximum of all the original paths' dependence
heights. Consider the if-conversion of a simple if-then-
else statement with the "then" path having a height of two and
the "else" path having a height of four. The height of the resultant
hyperblock is the maximum of both paths, four. As a result,
the "then" path is potentially slowed down by two times. The
compiler must weigh this negative against the potential positive
effects of if-conversion to determine whether this hyperblock is
profitable to form.
Another way poor hyperblocks may be formed is through
the inclusion of a path with a hazard. A hazard is any instruction
or set of instructions which hinders efficient optimization or
scheduling of control paths other than its own. Two of the most
common hazards are subroutine calls with unknown side effects
and store instructions which have little or no alias information.
Hazards degrade performance because they force the compiler to
make conservative decisions in order to ensure correctness. For
this reason, inclusion of a control path with a hazard into a hyperblock
generally reduces the compiler's effectiveness for the entire
hyperblock, not just for the path containing the hazard.
Execution frequency is used as a measure of a path's importance
and also provides insight into branch behavior. This information
is used to weigh the trade-offs made in combining execution
paths. For example, it may be wise to penalize an infrequently
executing path by combining it with a longer frequently
executing path and removing the branch joining the two.
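A rough sketch of how these four metrics might interact is shown below. The cost model, field names, and issue width are assumptions made for this illustration, not the actual IMPACT heuristics: a candidate path with a hazard is rejected outright, and otherwise frequency-weighted cycle estimates of merged versus separate execution are compared.

```python
# Illustrative hyperblock path-selection heuristic combining the four metrics:
# resource utilization, dependence height, hazard presence, and frequency.
# The cost model and ISSUE_WIDTH are assumptions made for this sketch.

ISSUE_WIDTH = 3  # hypothetical issue width of the target processor

def block_len(p):
    """Lower-bound schedule length: dependence height vs. resource height."""
    return max(p["height"], -(-p["slots"] // ISSUE_WIDTH))  # ceiling division

def include_path(main, cand, branch_penalty=1):
    """Is it profitable to if-convert `cand` into the hyperblock holding `main`?"""
    if cand["has_hazard"]:  # a hazard hurts optimization of the whole hyperblock
        return False
    merged = {"height": max(main["height"], cand["height"]),
              "slots": main["slots"] + cand["slots"]}
    merged_cycles = (main["freq"] + cand["freq"]) * block_len(merged)
    # Keeping the paths separate keeps the branch between them; its penalty
    # is paid whenever the candidate path is taken.
    separate_cycles = (main["freq"] * block_len(main)
                       + cand["freq"] * (block_len(cand) + branch_penalty))
    return merged_cycles <= separate_cycles

# A tall, rarely executed path: merging would stretch the frequent path.
main = {"height": 2, "slots": 3, "has_hazard": False, "freq": 0.9}
tall = {"height": 4, "slots": 6, "has_hazard": False, "freq": 0.1}
print(include_path(main, tall))  # False: the dependence height mismatch dominates
```

Even this toy version shows the tension the section describes: the decision flips as heights, resource counts, and frequencies change, which is why static estimates made before optimization are fragile.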
2.2 Pitfalls of Hyperblock Selection
The original approach used in the IMPACT compiler to support
predicated execution is to form hyperblocks using heuristics
based on the four metrics described in the previous section. Hyperblocks
are formed early in the backend compilation procedure
to expose the predicate representation throughout all the back-end
compilation phases. Heuristic hyperblock formation has been
shown to perform well for relatively regular machine models. In
these machines, balancing resource consumption, balancing dependence
height, and eliminating hazards are done effectively by
the carefully crafted heuristics. However, experience shows that
several serious problems exist that are difficult to solve with this
approach. Three such problems presented here are optimizations
that change code characteristics, unpredictable resource interfer-
ence, and partial path inclusion.
Optimization. The first problem occurs when code may be
transformed after hyperblock formation. In general, forming hyperblocks
early facilitates optimization techniques that take advantage
of the predicate representation. However, the hyperblock
formation decisions can change dramatically with compiler trans-
formations. This can convert a seemingly bad formation decision
into a good one. Likewise, it can convert a seemingly good formation
decision into a bad one.
Figure 1a shows a simple hammock to be considered for if-
conversion. 1 The taken path has a dependence height of three cycles
and consumes three instruction slots after if-conversion has
removed instruction 5. The fall-through path consists of a dependence height of six cycles and a resource consumption of six instruction slots.

1 For all code examples presented in this section, a simple machine model is used. The schedules are for a three issue processor with unit latencies. Any resource limitations for the processor that are assumed are specified with each example. These assumptions do not reflect the machine model or latencies used in the experimental evaluation (Section 5).

Figure 1: Hyperblock formation of seemingly incompatible paths with positive results due to code transformations. The T and F annotations in (a) indicate the taken and fall-through path for the conditional branch. r2 is not referenced outside the T block.

A simple estimate indicates that combining these
paths would result in a penalty for the taken path of three cycles
due to the fall-through path's large dependence height. Figure 1b
shows this code segment after hyperblock formation and further
optimizations. The first optimization performed was renaming to
eliminate the false dependences 7 → 8 and 8 → 10. This reduced
the dependence height of the hyperblock to three cycles.
If a heuristic could foresee that dependence height would no
longer be an issue, it may still choose not to form this hyperblock
due to resource considerations. An estimate of ten instructions
after if-conversion could be made by inspecting Figure 1a. Un-
fortunately, ten instructions needs at least four cycles to complete
on a three issue machine, which would still penalize the taken
path by one cycle, indicating that the combination of these paths
may not be beneficial. After an instruction merging optimization
in which instructions 2 and 6 were combined and 4 and 11 were
combined, the instruction count becomes eight. The final schedule
consists of only three cycles.
Figure 1 shows that even in simple cases a heuristic which
forms hyperblocks before some optimizations must anticipate the
effectiveness of those optimizations in order to form profitable
hyperblocks. In this example, some optimizations could potentially
be done before hyperblock formation, such as renaming.
However, others, like instruction merging, could not have been.
In addition, some optimizations may have been applied differently
when performed before if-conversion, because the different
code characteristics will result in different trade-offs.
Resource Interference. A second problem with heuristic
hyperblock formation is that false conclusions regarding the resource
compatibility of the candidate paths may often be reached.
As a result, paths which seem to be compatible for if-conversion
turn out to be incompatible. The problem arises because resource
usage estimation techniques, such as the simple ones used in this
section or even other more complex techniques, generally assume that resource usage is evenly distributed across the block.

Figure 2: Hyperblock formation of seemingly compatible paths that results in performance loss due to resource incompatibility.

In practice, however, few paths exhibit uniform resource utilization. Interactions
between dependence height and resource consumption
cause violations of the uniform utilization assumption. In gen-
eral, most paths can be subdivided into sections that are either
relatively parallel or relatively sequential. The parallel sections
demand a large number of resources, while the sequential sections
require few resources. When two paths are combined, resource
interference may occur when the parallel sections of the
paths overlap. For those sections, the demand for resources is
likely to be larger than the available resources, resulting in a performance
loss.
To illustrate this problem, consider the example in Figure 2.
The processor assumed for this example is again three issue, but
at most one memory instruction may be issued each cycle. The
original code segment, Figure 2a, consists of two paths with dependence
heights of three cycles. The resource consumption of
each path is also identical, four instructions. These paths are concluded
to be good candidates for if-conversion. Figure 2b shows
the hyperblock and its resulting schedule. Since there are no obvious
resource shortages, one would expect the resultant schedule
for the hyperblock to be identical in length to the schedules of
each individual path, or four cycles. However, the hyperblock
schedule length turns out to be six cycles. This increase is due to
resource interference between the paths. Each path is parallel at
the start and sequential at the end. In addition, the parallel sections
of both paths have a high demand for the memory resource.
With only one memory resource available, the paths are sequentialized
in parallel sections. Note that if the requirements for the
memory resource were uniformly distributed across both paths,
this problem would not exist as the individual schedule lengths
are four cycles and there are a total of four memory instructions.
However, due to the characteristics of these paths, resource interference
results in a performance loss for both paths selected for
the hyperblock.
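The interference described above can be reproduced with a toy list scheduler. The machine model below mirrors the example's assumptions (three issue, unit latencies, one memory port); the instruction names and dependences are invented to match the described path shape: parallel memory operations first, a sequential chain after.

```python
# Toy list scheduler matching the example's machine: three issue, unit
# latencies, one memory port. Instruction names and dependences are invented
# to match the described shape: parallel loads first, a sequential chain after.

def schedule(instrs, issue=3, mem_ports=1):
    """Greedy cycle-by-cycle scheduling; returns the schedule length in cycles."""
    done, cycle, remaining = {}, 0, list(instrs)
    while remaining:
        cycle += 1
        slots, mem = issue, mem_ports
        issued = []
        for name, deps, is_mem in remaining:
            ready = all(done.get(d, cycle) < cycle for d in deps)  # unit latency
            if slots and (mem or not is_mem) and ready:
                done[name] = cycle
                slots -= 1
                mem -= is_mem
                issued.append(name)
        remaining = [i for i in remaining if i[0] not in issued]
    return cycle

# Each path: two parallel loads feeding a two-instruction sequential chain.
path_a = [("l1", [], True), ("l2", [], True), ("a", ["l1", "l2"], False), ("b", ["a"], False)]
path_b = [("l3", [], True), ("l4", [], True), ("u", ["l3", "l4"], False), ("v", ["u"], False)]

print(schedule(path_a))           # 4 cycles alone
print(schedule(path_a + path_b))  # 6 cycles combined: memory port interference
```

A uniform-utilization estimate would predict four cycles for the combined block (eight instructions, four memory operations); the actual schedule is six because both paths demand the single memory port in the same early cycles.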
Figure 3: An efficient hyperblock formed through the inclusion of a partial path.
Partial Paths. The final problem with current heuristic hyperblock
formation is that paths may not be subdivided when they
are considered for inclusion in a hyperblock. In many cases, including
part of a path may be more beneficial than including or
excluding that entire path. Such an if-conversion is referred to as
partial if-conversion. Partial if-conversion is generally effective
when the resource consumption or dependence height of an entire
candidate path is too large to permit profitable if-conversion, but
there is a performance gain by overlapping a part of the candidate
path with the other paths selected for inclusion in the hyperblock.
To illustrate the effectiveness of partial if-conversion, consider
the example in Figure 3. The three issue processor assumed for
this example does not have any resource limitations other than the
issue width. Figure 3a shows two paths which are not compatible
due to mismatched dependence height. However, by including all
of the taken path and four instructions from the fall-through path,
an efficient hyperblock is created. This hyperblock is shown in
Figure 3b. Notice that branch instruction 2 has been split into
two instructions: the condition computation, labeled 2′, and a branch based on that computation, labeled 2″. The schedule did not benefit from the complete removal of branch instruction 2, as the branch instruction 2″ has the same characteristics as the original. However, the schedule did benefit from the partial overlap of both paths. The destination of branch instruction 2″, which contains the code to complete the fall-through path, is shown in Figure 3c.
In theory, hyperblock formation heuristics may be extended to
support partial paths. Since each path could be divided at any instruction
in the path, the heuristics would have to consider many
more possible selection alternatives. However, the feasibility of
extending the selection heuristics to operate at the finer granularity
of instructions, rather than whole paths, is questionable due to the complex nature of the problem.
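One way such a split point might be chosen, assuming prefix dependence heights are available, is sketched below. This is an illustrative policy with invented data, not a claim about how any production compiler selects partial paths.

```python
# Sketch: pick a split point for partial if-conversion. prefix_heights[i-1] is
# the dependence height of the candidate path's first i instructions; the
# monotone list below is an invented example, not data from Figure 3.

def split_point(host_height, prefix_heights):
    """Largest prefix of the candidate path whose dependence height fits
    within the host path's height (0 means include nothing)."""
    k = 0
    for i, h in enumerate(prefix_heights, start=1):
        if h <= host_height:
            k = i
    return k

# Host path of height 3; candidate instruction prefixes grow to height 6.
print(split_point(3, [1, 2, 2, 3, 4, 5, 6]))  # include the first 4 instructions
```

Because every instruction in the path is a legal split point, a heuristic operating at this granularity must evaluate many more alternatives than whole-path inclusion, which is precisely the feasibility concern raised above.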
3 Proposed Compilation Framework
Compilation for predicated execution can be challenging as
described in Section 2. To create efficient code, a delicate balance
between control flow and predication must be created. The
desired balance is highly dependent on final code characteristics
and the resource characteristics of the target processor. An effective
compilation framework for predicated execution must provide
a structure for making intelligent tradeoffs between control
flow and predication so the desired balance can be achieved.
Given the difficulties presented in Section 2.2 with forming
hyperblocks early in the backend compilation process, a seemingly
natural strategy is to perform if-conversion in conjunction
with instruction scheduling. This can be achieved by integrating
if-conversion within the scheduling process itself. A scheduler
not only accurately models the detailed resource constraints of
the processor but also understands the performance characteristics
of the code. Therefore, the scheduler is ideally suited to make
intelligent if-conversion decisions. In addition, all compiler optimizations
are usually complete when scheduling is reached, thus
the problem of the code changing after if-conversion does not exist. However, a very serious problem associated with performing
if-conversion during scheduling time is the restriction on the com-
piler's use of the predicate representation to perform control flow
transformations and predicate specific optimizations. With the
schedule-time framework, the introduction of the predicate representation
is delayed until schedule time. As a result, all transformations
targeted to the predicate representation must either be
foregone or delayed. If these transformations are delayed, much
more complexity is added to a scheduler which must already consider
many issues including control speculation, data speculation,
and register pressure to achieve desirable code performance. Ad-
ditionally, delaying only some optimizations until schedule time
creates a phase ordering which can cause severe difficulties for
the compiler. Generally, most transforms have profound effects
on one another and must be repeatedly applied in turn to achieve
desirable results. For example, a transformation, such as control
height reduction [12], may subsequently expose a critical data
dependence edge that should be broken by expression reformu-
lation. However, until the control dependence height is reduced,
there is no profitability to breaking the data dependence edge, so
the compiler will not apply the transform. This is especially true
since expression reformulation has a cost in terms of added in-
structions. The net result of the schedule-time framework is a
restriction in the use of the predicate representation which limits
the effectiveness of back-end optimizations.
Given that if-conversion at schedule time limits the use of
the predicate representation for optimization and given that if-conversion
at an early stage is limited in its ability to estimate
the final code characteristics, it is logical to look to an alternative
compilation framework. This paper proposes such a frame-
work. This framework overcomes limitations of other schemes
by utilizing two phases of predicated code manipulation to support
predicated execution. Aggressive if-conversion is applied
in an early compilation phase to create the predicate representation
and to allow flexible application of predicate optimizations
throughout the backend compilation procedure. Then at schedule time, the compiler adjusts the final amount of predication to efficiently match the target architecture.

Figure 4: Phase ordering diagram for the compilation framework. (Phases shown: aggressive hyperblock formation, classical optimizations, ILP optimizations, integrated prepass scheduling with partial reverse if-conversion, register allocation, and postpass scheduling.)

The compilation framework, shown in Figure 4, consists of two phases of predicate manipulation
surrounding classical, predicate specific, and ILP op-
timizations. The first predicate manipulation phase, hyperblock
formation, has been addressed thoroughly in [14]. The second
predicate manipulation phase, adjustment of hyperblocks during
scheduling, is proposed in this work and has been termed partial
reverse if-conversion.
The first phase of the compilation framework is to aggressively
perform hyperblock formation. The hyperblock former
does not need to exactly compute what paths, or parts of paths,
will fit in the available resources and be completely compatible
with each other. Instead, it forms hyperblocks which are larger
than the target architecture can handle. The large hyperblocks
increase the scope for optimization and scheduling, further enhancing
their benefits. In many cases, the hyperblock former will
include almost all the paths. This is generally an aggressive decision
because the resource height or dependence height of the
resulting hyperblock is likely to be much greater than the corresponding
heights of any of its component paths. However, the
if-converter relies on later compilation phases to ensure that this
hyperblock is efficient. One criterion that is still enforced in the
first phase of hyperblock formation is avoiding paths with haz-
ards. As was discussed in Section 2, hazards reduce the com-
piler's effectiveness for the entire hyperblock, thus they should
be avoided to facilitate more aggressive optimization.
The second phase of the compilation framework is to adjust
the amount of predicated code in each hyperblock as the
code is scheduled via partial reverse if-conversion. Partial reverse
if-conversion is conceptually the application of reverse if-conversion
to a particular predicate in a hyperblock for a chosen
set of instructions [16]. Reverse if-conversion was originally proposed
as the inverse process to if-conversion. Branching code
that contains no predicates is generated from a block of predicated
code. This allows code to be compiled using a predicate
representation, but executed on a processor without support for
predicated execution.
The scheduler with partial reverse if-conversion operates by
identifying the paths composing a hyperblock. Paths which are
profitable to overlap remain unchanged. Conversely, a path that
interacts poorly with the other paths is removed from the hyper-
block. In particular, the partial reverse if-converter decides to
eject certain paths, or parts of paths, to enhance the schedule. To
do this, the reverse if-converter will insert a branch that is taken
whenever the removed paths would have been executed. This has
the effect of dividing the lower portion of the hyperblock into two
parts, corresponding to the taken and fall-through paths of the inserted
branch. The decision to reverse if-convert a particular path
consists of three steps. First, the partial reverse if-converter determines
the savings in execution time by inserting control flow
and applying the full resources of the machine to two hyperblocks
instead of only one. Second, it computes the loss created by any
penalty associated with the insertion of the branch. Finally, if
the gain of the reverse if-conversion exceeds the cost, it is ap-
plied. Partial reverse if-conversion may be repeatedly applied to
the same hyperblock until the resulting code is desirable.
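Under the stated assumptions, this three-step decision can be sketched as a simple cost comparison. The function and parameter names are hypothetical, and the inputs (frequencies, schedule lengths, branch penalty) stand in for values a real scheduler would estimate; this is not the IMPACT compiler's actual cost model.

```python
# Sketch of the three-step partial reverse if-conversion decision. All inputs
# (frequencies, schedule lengths, branch penalty) are illustrative values a
# scheduler would estimate for the hyperblock and its candidate split.

def should_eject(freq_stay, freq_eject, len_combined,
                 len_stay, len_eject, branch_penalty):
    """Eject a path when the expected savings exceed the inserted branch's cost."""
    # Step 1: savings from applying the machine's full resources to two
    # hyperblocks instead of one.
    savings = (freq_stay * (len_combined - len_stay)
               + freq_eject * (len_combined - len_eject))
    # Step 2: loss from the reverse if-converting branch, paid when it is taken.
    cost = freq_eject * branch_penalty
    # Step 3: apply the transformation only if the gain exceeds the cost.
    return savings > cost

# Combined hyperblock of 6 cycles vs. two 4-cycle blocks after ejection.
print(should_eject(0.8, 0.2, 6, 4, 4, branch_penalty=3))  # True: worth splitting
```

Repeated application, as described above, simply reruns this test on the remaining hyperblock until no candidate ejection shows a net gain.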
The strategy used for this compilation framework can be
viewed analogously to the use of virtual registers in many compil-
ers. With virtual registers, program variables are promoted from
memory to reside in an infinite space of virtual registers early in
the compilation procedure. The virtual register domain provides
a more effective internal representation than do memory operations
for compiler transformations. As a result, the compiler is
able to perform more effective optimization and scheduling on
the virtual register code. Then, at schedule time, virtual registers
are assigned to a limited set of physical registers and memory
operations are reintroduced as spill code when the number of
physical registers is over-subscribed. The framework presented
in this paper does for branches what virtual registers do for program
variables. Branches are removed to provide a more effective
internal representation for compiler transformations. At schedule
time, branches are inserted according to the capabilities of the target
processor. The branches reinserted have different conditions,
targets, and predictability than the branches originally removed.
The result is that the branches in the code are there for the benefit
of performance for a particular processor, rather than as a consequence
of code structure decisions made by the programmer.
The key to making this predication and control flow balancing
framework effective is the partial reverse if-converter. The mechanics
of performing partial reverse if-conversion, as well as a
proposed policy used to guide partial reverse if-conversion, are
presented in the next section.
4 Partial Reverse If-Conversion
The partial reverse if-conversion process consists of three
components: analysis, transformation, and decision. Each of
these steps is discussed in turn.
4.1 Analysis
Before any manipulation or analysis of execution paths can be
performed, these paths must be identified in the predicated code.
Execution paths in predicated code are referred to as predicate
paths. Immediately after hyperblock formation, the structure of
the predicate paths is identical to the control flow graph of the code before hyperblock formation.

Figure 5: Predicate flow graph with partial dead code elimination, given that r3 and r4 are not live out of this region.

The structure of the predicate
paths can be represented in a form called the predicate flow
graph (PFG). The predicate flow graph is simply a control flow
graph (CFG) in which predicate execution paths are also repre-
sented. After optimizations, the structure of the PFG can change
dramatically. For reasons of efficiency and complexity, the compiler
used in this work does not maintain the PFG across opti-
mizations, instead it is generated from the resulting predicated
N-address code.
The synthesis of a PFG from predicated N-address code is
analogous to creating a CFG from N-address code. A simple example
is presented to provide some insight into how this is done.
Figure
5 shows a predicated code segment and its predicate flow
graph. The predicate flow graph shown in Figure 5b is created
in the following manner. The first instruction in Figure 5a is a
predicate definition. At this definition, p1 can assume TRUE or
FALSE. A path is created for each of these possibilities. The complement
of p1, p2, shares these paths because it does not independently
create new conditional outcomes. The predicate defining
instruction 2 also creates another path. In this case, the predicates
p3 and p4 can only be TRUE if p1 is TRUE because their defining instruction is predicated on p1, so only one more path is created. The creation of paths is determined by the interrelations of
predicates, which are provided by mechanisms addressed in other
work [17][18]. For the rest of the instructions, the paths that contain
these instructions are determined by the predicate guarding
their execution. For example, instruction 3 is based on predicate
p1 and is therefore only placed in paths where p1 is TRUE. Instruction
4 is not predicated and therefore exists in all paths. The
type of predicate defines used in all figures in this paper is unconditional, meaning they always write a value [8]. Since they write some value regardless of their predicate, their predicate can be ignored, and the instructions' destinations must be placed in all paths.
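The path enumeration just described can be sketched as follows. Each predicate define is modeled here as a (guard, true-predicate, false-predicate) triple; this form is an invented simplification of real predicate define semantics, and the example mirrors the Figure 5 discussion.

```python
# Sketch of predicate path enumeration for a Figure 5 style example. Each
# predicate define is modeled as (guard, true_pred, false_pred); this triple
# form is an invented simplification of real predicate define semantics.

def build_paths(defines):
    """Enumerate predicate paths; each path maps predicate names to truth values."""
    paths = [dict()]
    for guard, tp, fp in defines:
        new = []
        for path in paths:
            if guard is None or path.get(guard, False):
                # The define executes on this path: split on its two outcomes.
                new.append({**path, tp: True, fp: False})
                new.append({**path, tp: False, fp: True})
            else:
                new.append(path)  # the define is nullified; path is unaffected
        paths = new
    return paths

# p1/p2 defined unconditionally; p3/p4 defined only under p1, as in Figure 5.
paths = build_paths([(None, "p1", "p2"), ("p1", "p3", "p4")])
print(len(paths))  # 3: {p1,p3}, {p1,p4}, and {p2}
```

The second define splits only the p1-TRUE path, so it adds a single path, matching the observation above that predicates nested under a guard do not independently multiply the path count.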
Paths in a PFG can be merged when a predicate is no longer
used and does not affect any other predicate later in the code.
However, this merging of paths may not be sufficient to solve all potential path explosion problems in the PFG.

Figure 6: Predicate flow graph (a) and a partial reverse if-conversion of predicate p1 located after instructions 1 and 2 (b).

This is because
the number of paths in a PFG is exponential in
the number of independent predicates whose live ranges overlap.
Fortunately, this does not happen in practice until code schedul-
ing. After code scheduling, a complete PFG will have a large
number of paths and may be costly. A description of how the
partial reverse if-converter overcomes this problem is located in
Section 4.2. A more general solution to the path explosion problem
for other aspects of predicate code analysis is currently being
constructed by the authors.
With a PFG, the compiler has the information necessary to
know which instructions exist in which paths. In Figure 5, if the
path in which p1 and p3 are TRUE is to be extracted, the instructions
which would be placed into this path would be 3, 4 and 7.
The instructions that remain in the other two paths seem to be
3, 4, 5, and 6. However, inspection of the dataflow characteristics
of these remaining paths reveals that the results of instructions
3 and 4 are not used, given that r3 and r4 are not live out
of this region. This fact makes these instructions dead code in
the context of these paths. Performing traditional dead code removal
on the PFG, instead of the CFG, determines which parts
of these operations are dead. Since this application of dead code
removal only indicates that these instructions are dead under certain
predicate conditions, this process is termed predicate partial
dead code removal and is related to other types of partial dead
code removal [19]. The result of partial dead code removal indicates
that instructions 3 and 4 would generate correct code and
would not execute unnecessarily if they were predicated on p3.
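The per-path liveness reasoning behind predicate partial dead code removal can be sketched as a backward scan over one path. This is a simplified model (single-destination ops, no memory side effects; the op names and tuple layout are illustrative):

```python
def partially_dead(path_ops, live_out):
    """Within one PFG path, flag ops whose destination is neither
    live out of the region nor read by a later op on the path.
    Each op is (name, dest, sources); a backward liveness scan."""
    dead, needed = set(), set(live_out)
    for name, dest, srcs in reversed(path_ops):
        if dest is not None and dest not in needed:
            dead.add(name)          # result unused in this path
        else:
            needed.discard(dest)
            needed.update(srcs)     # sources become needed earlier
    return dead

# On a path where r3 and r4 are not live out and never read,
# the ops defining them are dead in the context of that path.
path = [("op3", "r3", []), ("op4", "r4", []),
        ("op5", "r5", []), ("op6", "r6", [])]
dead = partially_dead(path, live_out={"r5", "r6"})  # op3 and op4
```

Because the scan runs per path rather than over the whole CFG, an op can be dead on one path and live on another, which is exactly what licenses re-predicating it rather than deleting it.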
At this point, all the paths have been identified and unnecessary
code has been removed by partial dead code removal. The
analysis and possible ejection of these paths now become possible.
4.2 Transformation
Once predicate analysis and partial dead code elimination
have been completed, performing reverse if-conversion at any
point and for any predicate requires a small amount of additional
processing. This processing determines whether each instruction
belongs in the original hyperblock, the new block formed by reverse
if-conversion, or both. Figure 6 is used to aid this discussion.
Figure
7: Simple code size reduction on multiple partial reverse
if-conversions applied to an unrolled loop. Each square represents
an unroll of the original loop.
The partial reverse if-converted code can be subdivided into
three code segments. These are: the code before the reverse if-
converting branch, the code ejected from the hyperblock by reverse
if-conversion, and the code which remains in the hyperblock
below the reverse if-converting branch. Instructions before
the location of the partial reverse if-converting branch are left untouched
in the hyperblock. Figure 6b shows the partial reverse if-conversion
created for p1 after instructions 1 and 2. This means
that instructions 1 and 2 are left in their originally scheduled location
and the reverse if-converting branch, predicated on p1, is
scheduled immediately following them. The location of instructions
after the branch is determined by the PFG. To use the PFG
without experiencing a path explosion problem, the PFGs generated
during scheduling are built only with respect to the predicate
which is being reverse if-converted. This keeps the number of
paths under control since a single-predicate PFG can contain
no more than two paths. Figure 6a shows the PFG created for
the predicate to be reverse if-converted, p1. Note that the partial
dead code has already been removed as described in the previous
section. Instructions which exist solely in the p1 is FALSE
path, such as 5 and 6, remain in the original block. Instructions
which exist solely in the p1 is TRUE path, such as 3, 4, and 7,
are moved from the original block to the newly formed region.
An instruction which exists in both paths must be placed in both
regions.
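For a single-predicate PFG, the placement rule for the three code segments reduces to a simple partition. A sketch (guards are shown as they stand after partial dead code removal; the data layout is hypothetical):

```python
def partial_ric(ops, pred):
    """Split a hyperblock's remaining ops at a reverse if-converting
    branch on `pred`: ops guarded on pred move to the ejected block,
    ops guarded on its complement stay, and ops that exist on both
    paths are duplicated into both regions. Each op is (name, guard)
    with guard in {pred, ("not", pred), None}."""
    stay, ejected = [], []
    for name, guard in ops:
        if guard == pred:                 # TRUE path only: ejected
            ejected.append(name)
        elif guard == ("not", pred):      # FALSE path only: stays
            stay.append(name)
        else:                             # both paths: duplicate
            stay.append(name)
            ejected.append(name)
    return stay, ejected

# Mirroring Figure 6: ops 3, 4, and 7 exist only on the p1 TRUE
# path and are ejected; ops 5 and 6 remain in the hyperblock.
ops = [("op3", "p1"), ("op4", "p1"), ("op5", ("not", "p1")),
       ("op6", ("not", "p1")), ("op7", "p1")]
stay, ejected = partial_ric(ops, "p1")
```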
Notice that the hyperblock conditionally jumps to the code removed
from the hyperblock but there is no branch from this code
back into the original hyperblock. While this is possible, it was
not implemented in this work. Branching back into the hyperblock
would violate the hyperblock semantics since it would no
longer be a single entry region. Violating hyperblock semantics
may not be problematic since the benefits of the hyperblock have
already been realized by the optimizer and prepass scheduler.
However, the postpass hyperblock scheduler may experience reduced
scheduling freedom since all re-entries into the hyperblock
effectively divide the original hyperblock into two smaller hyperblocks.
The advantage of branching back into the original hyperblock
is a large reduction in code size through elimination of unnecessarily
duplicated instructions. However, as will be shown in
the experimental section, code size was generally not a problem.
One code size optimization which was performed merges targets
of partial reverse if-conversion branches if the target blocks are
identical. This resulted in a large code size reduction in codes
where loop unrolling was performed. If a loop in an unrolled
hyperblock needed to be reverse if-converted, it is likely that all
iterations needed to be reverse if-converted. This creates many
identical copies of the loop body subsequent to the loop being reverse
if-converted. Figure 7a shows the original result of repeated
reverse if-conversions on an unrolled loop. Figure 7b shows the
result obtained by combining identical targets. While this simple
method works well in reducing code growth, it does not eliminate
all unnecessary code growth. To remove all unnecessary
code growth, a method which jumps back into the hyperblock at
an opportune location needs to be created.
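The merging of identical targets can be sketched as hashing each ejected block's instruction sequence and retargeting branches to one representative copy. This is a simplified model (real blocks would also need matching live-in/live-out conditions before they could be shared):

```python
def merge_identical_targets(branches, blocks):
    """Retarget reverse if-converting branches whose target blocks
    contain identical instruction sequences to a single shared copy.
    `branches` maps branch id -> target block id; `blocks` maps
    block id -> tuple of instructions (hashable)."""
    canonical = {}       # instruction sequence -> representative block
    retargeted = {}
    for br, blk in branches.items():
        body = blocks[blk]
        rep = canonical.setdefault(body, blk)
        retargeted[br] = rep
    return retargeted

# Three ejected copies of an unrolled loop body, two identical:
blocks = {"b1": ("i1", "i2"), "b2": ("i1", "i2"), "b3": ("i9",)}
branches = {"br1": "b1", "br2": "b2", "br3": "b3"}
merged = merge_identical_targets(branches, blocks)
# br1 and br2 now share b1; b2 becomes unreachable and can be deleted.
```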
4.3 Policy
After creating the predicate flow graph and removing partial
dead code, the identity and characteristics of all paths in a hyperblock
are known. With this information, the compiler can make
decisions on which transformations to perform. The decision process
for partial reverse if-conversion consists of two parts: deciding
which predicates to reverse if-convert and deciding where to
reverse if-convert the selected predicates. To determine the optimal
reverse if-conversion for a given architecture, the compiler
could exhaustively try every possible reverse if-conversion, compute
the optimal cycle count for each possibility, and choose the
one with the best performance. Unfortunately, there are an enormous
number of possible reverse if-conversions for any given hy-
perblock. Consider a hyperblock with p predicates and n instruc-
tions. This hyperblock has 2^p combinations of predicates chosen
for reverse if-conversion. Each of these reverse if-conversions
can then locate its branch in up to n locations in the worst case.
Given that each of these possibilities must be scheduled to measure
its cycle count, this can be prohibitively expensive. Obvi-
ously, a heuristic is needed. While many heuristics may perform
effective reverse if-conversions, only one is studied in this paper.
This heuristic may not be the best solution in all applications,
but for the machine models studied in this work it achieves a desirable
balance between final code performance, implementation
complexity, and compile time.
The process of choosing a heuristic to perform partial reverse
if-conversion is affected greatly by the type of scheduler used.
Since partial reverse if-conversion is integrated into the prepass
scheduler, the type of information provided by the scheduler and
the structure of the code at various points in the scheduling process
must be matched with the decision of what and where to
if-convert. An operation-based scheduler may yield one type of
heuristic while a list scheduler may yield another. The policy determining
how to reverse if-convert presented here was designed
to work within the context of an existing list scheduler. The algorithm
with this policy integrated into the list scheduler is shown
in Figure 8.
The first decision addressed by the proposed heuristic is where
to place a predicate selected for reverse if-conversion. If a location
can be shown to be generally more effective than the rest,
then the number of locations to be considered for each reverse
if-conversion can be reduced from n to 1, an obvious improve-
ment. Such a location exists under the assumption that the reverse
if-converting branch consumes no resources and the code
is scheduled by a perfect scheduler. It can be shown that there
 1  cycle = 0;
 2  ric_queue = empty;
 3  num_unsched = Number of operations;
 4  Compute location for each unscheduled op;
 5  sched_no_ric = Compute dynamic cycles in hyperblock;
    // Each trip through this loop is a new cycle
 6  WHILE num_unsched != 0 DO
    // Handle reverse if-converting branches first
 7    FOREACH ric_op IN ric_queue DO
 8      IF Schedule_Op(ric_op, cycle) THEN
 9        Compute location for each unscheduled op;
10        sched_ric_taken = Compute dynamic cycles in ric taken path;
11        sched_ric_hb = Compute dynamic cycles in ric hyperblock;
12        mispred_ric = Estimated ric mispredictions * miss_penalty;
13        ric_cycles = sched_ric_hb + sched_ric_taken;
14        ric_cycles = ric_cycles + mispred_ric;
15        IF (sched_no_ric > ric_cycles) THEN
16          sched_no_ric = sched_ric_hb;
17          Place all ops in their ric schedule location;
18        ELSE
19          Unschedule_Op(ric_op);
20        Remove ric_op from ric_queue;
    // Then handle regular operations
21    FOREACH regular_op IN ready_priority_queue DO
22      IF Schedule_Op(regular_op, cycle) THEN
23        Remove regular_op from ready_priority_queue;
24        num_unsched = num_unsched - 1;
25        IF regular_op defines a predicate THEN
26          Add reverse if-converting branch to ric_queue;
27    cycle = cycle + 1;

Figure
8: An algorithm incorporating partial reverse if-conversion
into a list scheduler.
is no better placement than the first cycle in which the value of
the predicate to be reverse if-converted is available after its predicate
defining instruction.^2 Since the insertion of the branch has
the same misprediction or taken penalty regardless of its location,
these effects do not favor one location over another. However, the
location of the reverse if-converting branch does determine how
early the paths sharing the same resources are separated and given
the full machine bandwidth. The perfect scheduler will always
do as well or better when the full bandwidth of the machine is divided
among fewer instructions. Given this, the earlier the paths
can be separated, the fewer the number of instructions competing
for the same machine resources. Therefore, a best schedule will
occur when the reverse if-converting branch is placed as early as
possible.
Despite this fact, placing the reverse if-converting branch
as early as possible is a heuristic. This is because the two assumptions
made, a perfect scheduler and no cost for the reverse
if-converting branch, are not valid in general. It seems reason-
able, however, that this heuristic would do very well despite these
imperfections. Another consideration is code size, since instructions
existing on multiple paths must be duplicated when these
paths are separated. The code size can be reduced if the reverse
if-converting branch is delayed. Depending on the characteristics
of the code, this delay may have no cost or a small cost which
may be less than the gain obtained by the reduction in code size.
Despite these considerations, the placement of the partial reverse
if-converting branch as early as possible is a reasonable choice.

^2 There exist machines where the placement of a branch a number of
cycles after the computation of its condition removes all of its mispredictions
[20]. In these machines, there are two locations which should be
considered: immediately after the predicate defining instruction, and the
cycle in which the branch mispredictions are eliminated.
The second decision addressed by the heuristic is what to reverse
if-convert. Without a heuristic, the number of reverse if-
conversions which would need to be considered with the heuristic
described above is 2^p. The only way to optimally determine
which combination of reverse if-conversions yields the best results
is to try them all. A reverse if-conversion of one predicate
can affect the effectiveness of other reverse if-conversions. This
interaction among predicates is caused by changes in code characteristics
after a reverse if-conversion has removed instructions
from the hyperblock.
In the context of a list scheduler, a logical heuristic is to consider
each potential reverse if-conversion in a top-down fashion,
in the order in which the predicate defines are scheduled. This
heuristic is used in the algorithm shown in Figure 8. This has the
desirable effect of making the reverse if-conversion process fit
seamlessly into a list scheduler. It is also desirable because each
reverse if-conversion is considered in the context of the decisions
made earlier in the scheduling process.
In order to make a decision on each reverse if-conversion, a
method to evaluate it must be employed. For each prospective reverse
if-conversion, three schedules must be considered: the code
schedule without the reverse if-conversion, the code schedule of
the hyperblock with the reverse if-converting branch inserted and
paths excluded, and the code schedule of the paths excluded by
reverse if-conversion. Together they yield a total of 3p schedules
for a given hyperblock. Each of these three schedules needs
to be compared to determine if a reverse if-conversion is prof-
itable. This comparison can be written as:

sched_cycles_no_ric > sched_cycles_ric_hb + sched_cycles_ric_taken + (mispred_ric * miss_penalty)

where sched_cycles_no_ric is the number of dynamic
cycles in the schedule without reverse if-conversion applied,
sched_cycles_ric_hb is the number of dynamic cycles in the
schedule of the transformed hyperblock, sched_cycles_ric_taken
is the number of dynamic cycles in the target of the reverse
if-conversion, and mispred_ric is the number of mispredictions
introduced by the reverse if-converting branch. The
mispred_ric value can be obtained through profiling or static estimates;
miss_penalty is the branch misprediction penalty. This comparison
is computed by lines 9 through 15 in Figure 8.
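The profitability test can be written directly as code. The cycle counts below are stand-in numbers; in the compiler they come from the three schedules and from profiling:

```python
def ric_profitable(sched_no_ric, sched_ric_hb, sched_ric_taken,
                   mispred_ric, miss_penalty):
    """Eject a path only if the dynamic cycles of the transformed
    hyperblock, plus the ejected path, plus the new branch's
    misprediction cost, beat the schedule without the ejection."""
    ric_cycles = sched_ric_hb + sched_ric_taken
    ric_cycles += mispred_ric * miss_penalty
    return sched_no_ric > ric_cycles

# Hypothetical counts: 60 + 20 + 2*4 = 88 cycles with the ejection,
# against 100 cycles without it, so the reverse if-conversion wins.
profitable = ric_profitable(100, 60, 20, 2, 4)  # True
```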
While the cost savings due to the heuristic is quite significant,
3p schedules for more complicated machine models can still be
quite costly. To reduce this cost, it is possible to reuse information
gathered during one schedule in a later schedule.
The first source of reuse is derived from the top-down property
of the list scheduler itself. At the point each reverse if-conversion
is considered, all previous instructions have been scheduled in
their final location by lines 8 or 22 in Figure 8. Performing the
scheduling on the reverse if-conversion and the original scenario
only needs to start at this point. The number of schedules is
still 3p, but the number of instructions in each schedule has been
greatly reduced by the removal of instructions already scheduled.
The second source of reuse takes advantage of the fact that,
for the case in which the reverse if-conversion is not done, the
schedule has already been computed. At the time the previous
predicate was considered for reverse if-conversion, the schedule
was computed for each outcome. Since the resulting code schedule
in cycles is already known, no computation is necessary for
the current predicate's sched cyclesno ric . This source of reuse
takes the total schedules computed down to 2p with each
schedule only considering the unscheduled instructions at each
point due to the list scheduling effect. This reuse is implemented
in Figure 8 by lines 5 and 16.
Another way to reduce the total number of instructions scheduled
is to take advantage of the fact that the code purged from
the block is only different in the "then" and "else" blocks but not
in the control equivalent split or join blocks. Once the scheduler
has completely scheduled the "then" and "else" parts, no further
scheduling is necessary since the remaining schedules are likely
to be very similar. The only differences may be dangling latencies
or other small differences in the available resources at the
boundary. To be more accurate, the schedules can continue until
they become identical, which is likely to occur at some point,
though is not guaranteed to occur in all cases. An additional use
for the detection of this point is code size reduction. This point is
a logical location to branch from the ejected block back into the
original hyperblock.
With all of the above schedule reuse and reduction techniques,
it can be shown that the number of times an instruction is scheduled
is usually 1 + d, where d is that instruction's depth in its
hammock. In the predication domain, this depth is the number of
predicates defined in the chain used to compute that instruction's
guarding predicate.
If the cost of scheduling is still high, estimates may be used in-
stead. There are many types of scheduling estimates which have
been proposed and can be found in the literature. While many
may do well for machines with regular structures, others do not.
It is possible to create a hybrid scheduler/estimator which may
balance good estimates with compile time cost. As mentioned
previously, the schedule height of the two paths in the hammock
must be obtained. Instead of purely scheduling both paths, which
may be costly, or just estimating both paths, which may be inac-
curate, a part schedule and part estimate may obtain more accurate
results with lower cost. In the context of a list scheduler, one
solution is the following. The scheduler could schedule an initial
set of operations and estimate the schedule on those remain-
ing. Accurate results will be obtained by the scheduled portion,
in addition, the estimate may be able to benefit from information
obtained from the schedule, as the characteristics of the scheduled
code may be likely to match the characteristics of the code
to be estimated. In the experiments presented in the next section,
actual schedules are used in the decision to reverse if-convert because
the additional compile time was acceptable.
5 Experimental Results
This section presents an experimental evaluation of the partial
reverse if-conversion framework.
5.1 Methodology
The partial reverse if-conversion techniques described in this
paper have been implemented in the second generation instruction
scheduler of the IMPACT compiler. The compiler utilizes
a machine description file to generate code for a parameterized
superscalar processor. To measure the effectiveness of the partial
reverse if-conversion technique, a machine model similar to many
current processors was chosen. The machine modeled is a 4-issue
superscalar processor with in-order execution that contains two
integer ALU's, two memory ports, one floating point ALU, and
one branch unit. The instruction latencies assumed match those
of the HP PA-7100 microprocessor. The instruction set contains
a set of non-trapping versions of all potentially excepting instruc-
tions, with the exception of branch and store instructions, to support
aggressive speculative execution. The instruction set also
contains support for predication similar to that provided in the
PlayDoh architecture [8].
The execution time for each benchmark is derived from the
static code schedule weighted by dynamic execution frequencies
obtained from profiling. Static branch prediction based on profiling
is also utilized. Previous experience with this method of
run time estimation has demonstrated that it accurately estimates
simulations of an equivalent machine with perfect caches.
The benchmarks used in this experiment consist of 14
non-numeric programs: the six SPEC CINT92 benchmarks,
008.espresso, 022.li, 023.eqntott, 026.compress, 072.sc, and
085.cc1; two SPEC CINT95 benchmarks, 132.ijpeg and 134.perl;
and six UNIX utilities cccp, cmp, eqn, grep, wc, and yacc.
5.2 Results
Figures
9 and 10 compare the performance of the traditional
hyperblock compilation framework and the new compilation
framework with partial reverse if-conversion. The hyperblocks
formed in these graphs represent those currently formed
by the IMPACT compiler's hyperblock formation heuristic for
the target machine. These same hyperblocks were also used
as input to the partial reverse if-converter. The results obtained
are therefore conservative since more aggressive hyperblocks
would create the potential for better results. The
bars represent the speedup achieved by these methods relative
to superblock compilation. This is computed as follows:
superblock cycles / technique cycles. Superblock compilation
performance is chosen as the base because it represents the
best possible performance currently obtainable by the IMPACT
compiler without predication [21].
Figure 9 shows the performance of the hyperblock and partial
reverse if-conversion compilation frameworks assuming perfect
branch prediction. Since branch mispredictions are not factored
in, benchmarks exhibiting performance improvement in this
graph show that predication has performed well as a compilation
model. In particular, the compiler has successfully overlapped the
execution of multiple paths of control to increase ILP. Hyperblock
compilation achieves some speedup for half of the benchmarks,
most notably for 023.eqntott, cmp, 072.sc, grep, and wc. For these
programs, the hyperblock techniques successfully overcome the
problem superblock techniques were having in fully utilizing processor
resources. On the other hand, hyperblock compilation results
in a performance loss for half of the benchmarks. This dichotomy
is a common problem experienced with hyperblocks and
indicates that hyperblocks can do well, but often performance is
victim to poor hyperblock selection.
Figure
9: Performance increase over superblock exhibited by the
hyperblock and partial reverse if-conversion frameworks with no
misprediction penalty.

Figure
10: Performance increase over superblock exhibited by
the hyperblock and partial reverse if-conversion frameworks with
a four cycle misprediction penalty.

In all cases, partial reverse if-conversion improved upon or
matched the performance of the hyperblock code. For six of the
benchmarks, partial reverse if-conversion was able to change a
loss in performance by hyperblock compilation into a gain. This
is most evident for 008.espresso where a 28% loss was converted
into a 39% gain. For 072.sc, 134.perl, and cccp, partial reverse
if-conversion was able to significantly magnify relatively small
gains achieved by hyperblock compilation. These results indicate
that the partial reverse if-converter was successful at undoing
many of the poor hyperblock formation decisions while capitalizing
on the effective ones. For the four benchmarks where
hyperblock techniques were highly effective, 023.eqntott, cmp,
grep, and wc, partial reverse if-conversion does not have a large
opportunity to increase performance since the hyperblock formation
heuristics worked well in deciding what to if-convert.
It is useful to examine the performance of two of the benchmarks
more closely. The worst performing benchmark is 085.cc1,
for which both frameworks result in a performance loss with respect
to superblock compilation. Partial reverse if-conversion was
not completely successful in undoing the bad hyperblock formation
decisions. This failure is due to the policy that requires the
list scheduler to decide the location of the reverse if-converting
branch by its placement of the predicate defining instruction. Un-
fortunately, the list scheduler may delay this instruction as it may
not be on the critical path and is often deemed to have a low
scheduling priority. Delaying the reverse if-conversion point can
have a negative effect on code performance. To some extent this
problem occurs in all benchmarks, but is most evident in 085.cc1.
One of the best performing benchmarks was 072.sc. For this
program, hyperblock compilation increased performance by a fair
margin, but the partial reverse if-conversion increased this gain
substantially. Most of 072.sc's performance gain was achieved
by transforming a single function update. This function with
superblock compilation executes in 25.6 million cycles. How-
ever, the schedule is rather sparse due to a large number of data
and control dependences. Hyperblock compilation increases the
available ILP by eliminating a large fraction of the branches and
overlapping the execution of multiple paths of control. This
brings the execution time down to 19.7 million cycles. While
the hyperblock code is much better than the superblock code, it
has excess resource consumption on some paths which penalizes
other paths. The partial reverse if-converter was able to adjust the
amount of if-conversion to match the available resources to efficiently
utilize the processor. As a result, the execution time for
the update function is reduced to 16.8 million cycles with partial
reverse if-conversion, a 52% performance improvement over the
superblock code.
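These reported numbers can be checked against the paper's speedup metric, superblock cycles divided by technique cycles:

```python
def speedup(superblock_cycles, technique_cycles):
    """Speedup relative to superblock compilation, as defined in
    Section 5.2: superblock cycles / technique cycles."""
    return superblock_cycles / technique_cycles

# 072.sc's update function: 25.6M cycles with superblock compilation,
# 19.7M with hyperblock, 16.8M with partial reverse if-conversion.
gain = (speedup(25.6e6, 16.8e6) - 1) * 100  # about 52% over superblock
```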
Figure
10 shows the performance of the benchmarks in the
same manner as Figure 9 except with a branch misprediction
penalty of four cycles. In general, the relative performance of
hyperblock code is increased the most when mispredicts are considered
because it has the fewest mispredictions. The relative
performance of the partial reverse if-conversion code is also increased
because it has fewer mispredictions than the superblock
code. But, partial reverse if-conversion inserts new branches to
accomplish its transformation, so this code contains more mispredictions
than the hyperblock code. For several of the benchmarks,
the number of mispredictions was actually larger for hyperblock
and partial reverse if-conversion than that of superblock. When
applying control flow transformations in the predicated repre-
sentation, such as branch combining, the compiler will actually
create branches with much higher mispredict rates than those re-
moved. Additionally, the branches created by partial reverse if-conversion
may be more unbiased than the combination of
branches in the original superblock they represent.
The static code size exhibited by using the hyperblock and partial
reverse if-conversion compilation frameworks with respect to
the superblock techniques is presented in Figure 11. From the fig-
ure, the use of predicated execution by the compiler has varying
effects on the code size. The reason for this behavior is a tradeoff
between increased code size caused by if-conversion with the decreased
code size due to less tail duplication. With superblocks,
tail duplication is performed extensively to customize individual
execution paths. Whereas with predication, multiple paths are
overlapped via if-conversion, so less tail duplication is required.
The figure also shows that the code produced with the partial reverse
if-conversion framework is consistently larger than hyper-
block. On average, the partial reverse if-conversion code is 14%
larger than the hyperblock code, with the largest growth occurring
for yacc.

Figure
11: Relative static code size exhibited by the hyperblock
and partial reverse if-conversion frameworks compared with superblock.

Benchmark       Reverse If-Conversions   Opportunities
                43                       443
026.compress    11                       56
132.ijpeg       134                      1021
134.perl        42                       401
cccp            77                       1046

Table
1: Application frequency of partial reverse if-conversion.

Common to all the benchmarks which exhibit a
large code growth was a failure of the simple code size reduction
mechanism presented earlier. Inspection of the resulting code
indicates that many instructions are shared in the lower portion
of the tail-duplications created by the partial reverse if-converter.
For this reason, one can expect these benchmarks to respond well
to a more sophisticated code size reduction scheme.
Finally, the frequency of partial reverse if-conversions that
were performed to generate the performance data is presented
in Table 1. The "Reverse If-Conversions" column specifies the
actual number of reverse if-conversions that occurred across the
entire benchmark. The "Opportunities" column specifies the
number of reverse if-conversions that could potentially have oc-
curred. The number of opportunities is equivalent to the number
of unique predicate definitions in the application, since each predicate
define can be reverse if-converted exactly once. All data in
Table 1 are static counts. The table shows that the number of
reverse if-conversions that occur is a relatively small fraction of
the opportunities. This behavior is desirable as the reverse if-
converter should try to minimize the number of branches it inserts
to achieve the desired removal of instructions from a hy-
perblock. In addition, the reverse if-converter should only be
invoked when a performance problem exists. In cases where
the performance of the original hyperblock cannot be improved,
no reverse if-conversions need to be performed. The table also
shows the expected correlation between large numbers of reverse
if-conversions and larger code size increases of partial reverse if-conversion
over hyperblock (Figure 11).
6 Conclusion
In this paper, we have presented an effective framework for
compiling applications for architectures which support predicated
execution. The framework consists of two major parts. First, aggressive
if-conversion is applied early in the compilation process.
This enables the compiler to take full advantage of the predicate
representation to apply aggressive ILP optimizations and control
flow transformations. The second component of the framework is
applying partial reverse if-conversion at schedule time. This delays
the final if-conversion decisions until the point during compilation
when the relevant information about the code content and
the processor resource utilization are known.
A first generation partial reverse if-converter was implemented
and the effectiveness of the framework was measured for
this paper. The framework was able to capitalize on the benefits of
predication without being subject to the sometimes negative side
effects of over-aggressive hyperblock formation. Furthermore,
additional opportunities for performance improvement were exploited
by the framework, such as partial path if-conversion.
These points were demonstrated by the hyperblock performance
losses which were converted into performance gains, and by moderate
gains which were further magnified. We expect continuing
development of the partial reverse if-converter and the surrounding
scheduling infrastructure to further enhance performance. In
addition, the framework provides an important mechanism to
undo the negative effects of overly aggressive transformations at
schedule time. With such a backup mechanism, unique opportunities
are introduced for the aggressive use and transformation of
the predicate representation early in the compilation process.
Acknowledgments
The authors would like to thank John Gyllenhaal, Teresa
Johnson, Brian Deitrich, Daniel Connors, John Sias, Kevin
Crozier and all the members of the IMPACT compiler team for
their support, comments, and suggestions. This research has
been supported by the National Science Foundation (NSF) under
grant CCR-9629948, Intel Corporation, Advanced Micro De-
vices, Hewlett-Packard, SUN Microsystems, and NCR. Additional
support was provided by an Intel Foundation Fellowship.
References
[1] "A study of branch prediction strategies."
[2] "Two-level adaptive training branch prediction."
[3] "Conversion of control dependence to data dependence."
[4] "On predicated execution."
[5] Modulo Scheduling with Isomorphic Control Transformations.
[6] "Highly concurrent scalar processing."
[7] "The Cydra 5 departmental supercomputer."
[8] "HPL PlayDoh architecture specification: Version 1.0."
[9] "Guarded execution and branch prediction in dynamic ILP processors."
[10] "Characterizing the impact of predicated execution on branch prediction."
[11] "The effects of predicated execution on branch prediction."
[12] "Height reduction of control recurrences for ILP processors."
[13] "Overlapped loop support in the Cydra 5."
[14] "Effective compiler support for predicated execution using the hyperblock."
[15] "A comparison of full and partial predicated execution support for ILP processors."
[16] "Reverse if-conversion."
[17] "Analysis techniques for predicated code."
[18] "Global predicate analysis and its application to register allocation."
[19] "Partial dead code elimination."
[20] "Architectural support for compiler-synthesized dynamic branch prediction strategies: Rationale and initial results."
[21] "The Superblock: An effective technique for VLIW and superscalar compilation."
--TR
Highly concurrent scalar processing
The Cydra 5 Departmental Supercomputer
Overlapped loop support in the Cydra 5
Two-level adaptive training branch prediction
Effective compiler support for predicated execution using the hyperblock
Reverse If-Conversion
The superblock
Partial dead code elimination
Guarded execution and branch prediction in dynamic ILP processors
Height reduction of control recurrences for ILP processors
The effects of predicated execution on branch prediction
Characterizing the impact of predicated execution on branch prediction
Modulo scheduling with isomorphic control transformations
A comparison of full and partial predicated execution support for ILP processors
Analysis techniques for predicated code
Global predicate analysis and its application to register allocation
Conversion of control dependence to data dependence
A study of branch prediction strategies
Architectural Support for Compiler-Synthesized Dynamic Branch Prediction Strategies
--CTR
Hyesoon Kim , José A. Joao , Onur Mutlu , Yale N. Patt, Profile-assisted Compiler Support for Dynamic Predication in Diverge-Merge Processors, Proceedings of the International Symposium on Code Generation and Optimization, p.367-378, March 11-14, 2007
Walter Lee , Rajeev Barua , Matthew Frank , Devabhaktuni Srikrishna , Jonathan Babb , Vivek Sarkar , Saman Amarasinghe, Space-time scheduling of instruction-level parallelism on a raw machine, ACM SIGPLAN Notices, v.33 n.11, p.46-57, Nov. 1998
Eduardo Quiñones , Joan-Manuel Parcerisa , Antonio Gonzalez, Selective predicate prediction for out-of-order processors, Proceedings of the 20th annual international conference on Supercomputing, June 28-July 01, 2006, Cairns, Queensland, Australia
Patrick Akl , Andreas Moshovos, BranchTap: improving performance with very few checkpoints through adaptive speculation control, Proceedings of the 20th annual international conference on Supercomputing, June 28-July 01, 2006, Cairns, Queensland, Australia
Hyesoon Kim , Jose A. Joao , Onur Mutlu , Yale N. Patt, Diverge-Merge Processor (DMP): Dynamic Predicated Execution of Complex Control-Flow Graphs Based on Frequently Executed Paths, Proceedings of the 39th Annual IEEE/ACM International Symposium on Microarchitecture, p.53-64, December 09-13, 2006
David I. August , John W. Sias , Jean-Michel Puiatti , Scott A. Mahlke , Daniel A. Connors , Kevin M. Crozier , Wen-mei W. Hwu, The program decision logic approach to predicated execution, ACM SIGARCH Computer Architecture News, v.27 n.2, p.208-219, May 1999
Aaron Smith , Ramadass Nagarajan , Karthikeyan Sankaralingam , Robert McDonald , Doug Burger , Stephen W. Keckler , Kathryn S. McKinley, Dataflow Predication, Proceedings of the 39th Annual IEEE/ACM International Symposium on Microarchitecture, p.89-102, December 09-13, 2006
John W. Sias , Wen-Mei W. Hwu , David I. August, Accurate and efficient predicate analysis with binary decision diagrams, Proceedings of the 33rd annual ACM/IEEE international symposium on Microarchitecture, p.112-123, December 2000, Monterey, California, United States
Mihai Budiu , Girish Venkataramani , Tiberiu Chelcea , Seth Copen Goldstein, Spatial computation, ACM SIGARCH Computer Architecture News, v.32 n.5, December 2004
Yuan Chou , Jason Fung , John Paul Shen, Reducing branch misprediction penalties via dynamic control independence detection, Proceedings of the 13th international conference on Supercomputing, p.109-118, June 20-25, 1999, Rhodes, Greece
Spyridon Triantafyllis , Manish Vachharajani , Neil Vachharajani , David I. August, Compiler optimization-space exploration, Proceedings of the international symposium on Code generation and optimization: feedback-directed and runtime optimization, March 23-26, 2003, San Francisco, California
David I. August , Wen-Mei W. Hwu , Scott A. Mahlke, The Partial Reverse If-Conversion Framework for Balancing Control Flow and Predication, International Journal of Parallel Programming, v.27 n.5, p.381-423, Oct. 1999
Lori Carter , Beth Simon , Brad Calder , Larry Carter , Jeanne Ferrante, Path Analysis and Renaming for Predicated Instruction Scheduling, International Journal of Parallel Programming, v.28 n.6, p.563-588, December 2000 | conditional instructions;compiler;instruction-level parallelism;parallel architecture;schedule time;program control flow;scheduling decisions;optimising compilers;predicated instructions;if-conversion;predicated execution |
266812 | Tuning compiler optimizations for simultaneous multithreading. | Compiler optimizations are often driven by specific assumptions about the underlying architecture and implementation of the target machine. For example, when targeting shared-memory multiprocessors, parallel programs are compiled to minimize sharing, in order to decrease high-cost, inter-processor communication. This paper reexamines several compiler optimizations in the context of simultaneous multithreading (SMT), a processor architecture that issues instructions from multiple threads to the functional units each cycle. Unlike shared-memory multiprocessors, SMT provides and benefits from fine-grained sharing of processor and memory system resources; unlike current multiprocessors, SMT exposes and benefits from inter-thread instruction-level parallelism when hiding latencies. Therefore, optimizations that are appropriate for these conventional machines may be inappropriate for SMT. We revisit three optimizations in this light: loop-iteration scheduling, software speculative execution, and loop tiling. Our results show that all three optimizations should be applied differently in the context of SMT architectures: threads should be parallelized with a cyclic, rather than a blocked algorithm; non-loop programs should not be software speculated and compilers no longer need to be concerned about precisely sizing tiles to match cache sizes. By following these new guidelines compilers can generate code that improves the performance of programs executing on SMT machines. | Introduction
Compiler optimizations are typically driven by
specific assumptions about the underlying architecture
and implementation of the target machine. For example,
compilers schedule long-latency operations early to
minimize critical paths, order instructions based on the
processor's issue slot restrictions to maximize functional
unit utilization, and allocate frequently used variables to
registers to benefit from their fast access times. When
new processing paradigms change these architectural
assumptions, however, we must reevaluate machine-dependent
compiler optimizations in order to maximize
performance on the new machines.
Simultaneous multithreading (SMT) [32][31][21]
[13] is a multithreaded processor design that alters
several architectural assumptions on which compilers
have traditionally relied. On an SMT processor,
instructions from multiple threads can issue to the
functional units each cycle. To take advantage of the
simultaneous thread-issue capability, most processor
resources and all memory subsystem resources are
dynamically shared among the threads. This single
feature is responsible for performance gains of almost 2X
over wide-issue superscalars and roughly 60% over
single-chip, shared memory multiprocessors on both
multi-programmed (SPEC92, SPECint95) and parallel
(SPLASH-2, SPECfp95) workloads; SMT achieves this
improvement while limiting the slowdown of a single
executing thread to under 2% [13].
Simultaneous multithreading presents to the
compiler a different model for hiding operation latencies
and sharing code and data. Operation latencies are hidden
by instructions from all executing threads, not just by
those in the thread with the long-latency operation. In
addition, multi-thread instruction issue increases
instruction-level parallelism (ILP) to levels much higher
than can be sustained with a single thread. Both factors
suggest reconsidering uniprocessor optimizations that
hide latencies and expose ILP at the expense of increased
dynamic instruction counts: on an SMT the latency-hiding
benefits may not be needed, and the extra
instructions may consume resources that could be better
utilized by instructions in concurrent threads.
(Copyright 1997 IEEE. Published in the Proceedings of
Micro-30, December 1-3, 1997 in Research Triangle Park,
North Carolina. Personal use of this material is permitted.
However, permission to reprint/republish this material for
advertising or promotional purposes or for creating new
collective works for resale or redistribution to servers or
lists, or to reuse any copyrighted component of this work
in other works, must be obtained from the IEEE. Contact:
Manager, Copyrights and Permissions / IEEE Service
Center / 445 Hoes Lane / P.O. Box 1331 / Piscataway, NJ)
Because multiple threads reside within a single SMT
processor, they can cheaply share common data and incur
no penalty from false sharing. In fact, they benefit from
cross-thread spatial locality. This calls into question
compiler-driven parallelization techniques, originally
developed for distributed-memory multiprocessors, that
partition data to physically distributed threads to avoid
communication and coherence costs. On an SMT, it may
be beneficial to parallelize programs so that they process
the same or contiguous data.
This paper investigates the extent to which
simultaneous multithreading affects the use of several
compiler optimizations. In particular, we examine one
parallel technique (loop-iteration scheduling for compiler-parallelized
applications) and two optimizations that hide
memory latencies and expose instruction-level
parallelism (software speculative execution and loop
tiling). Our results prescribe a different usage of all three
optimizations when compiling for an SMT processor.
We found that, while blocked loop scheduling may
be useful for distributing data in distributed-memory
multiprocessors, cyclic iteration scheduling is more
appropriate for an SMT architecture, because it reduces
the TLB footprint of parallel applications. Since SMT
threads run on a single processor and share its memory
hierarchy, data can be shared among threads to improve
locality in memory pages.
Software speculative execution may incur additional
instruction overhead. On a conventional wide-issue
superscalar, instruction throughput is usually low enough
that these additional instructions simply consume
resources that would otherwise go unused. However, on
an SMT processor, where simultaneous, multi-thread
instruction issue increases throughput to roughly 6.2 on
an 8-wide processor, software speculative execution can
degrade performance, particularly for non-loop-based
applications.
Simultaneous multithreading also impacts loop tiling
techniques and tile size selection. SMT processors are far
less sensitive to variations in tile size than conventional
processors, which must find an appropriate balance
between large tiles with low instruction overhead and
small tiles with better cache reuse and higher hit rates.
SMT processors eliminate this performance sweet spot
by hiding the extra misses of larger tiles with the
additional thread-level parallelism provided by
multithreading. Tiled loops on an SMT should be
decomposed so that all threads compute on the same tile,
rather than creating a separate tile for each thread, as is
done on multiprocessors. Tiling in this way raises the
performance of SMT processors with moderately-sized
memory subsystems to that of more aggressive designs.
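The shared-tile decomposition suggested above can be sketched as follows. This is a minimal illustrative model, not the paper's code: the function names and the cyclic intra-tile split are our assumptions.

```python
# Sketch: contrast multiprocessor-style tiling, where each thread owns
# whole tiles, with SMT-style tiling, where all threads cooperate on the
# same tile and so touch the same cache lines and pages.
def per_thread_tiles(n, tile, threads):
    """Multiprocessor style: tiles are dealt out round-robin, so each
    thread works on disjoint data (separate cache footprints)."""
    work = {t: [] for t in range(threads)}
    for i, start in enumerate(range(0, n, tile)):
        work[i % threads].extend(range(start, min(start + tile, n)))
    return work

def shared_tile(n, tile, threads):
    """SMT style: every thread takes a cyclic slice of the *same* tile,
    so concurrent accesses stay within one tile at a time."""
    work = {t: [] for t in range(threads)}
    for start in range(0, n, tile):
        for j in range(start, min(start + tile, n)):
            work[j % threads].append(j)
    return work
```

For example, with n=8, tile=4, and 2 threads, shared_tile keeps both threads inside tile [0, 4) simultaneously, while per_thread_tiles sends thread 1 off to [4, 8).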
The remainder of this paper is organized as follows.
Section 2 provides a brief description of an SMT
processor. Section 3 discusses in more detail two
architectural assumptions that are affected by
simultaneous multithreading and their ramifications on
compiler-directed loop distribution, software speculative
execution, and loop tiling. Section 4 presents our
experimental methodology. Sections 5 through 7 examine
each of the compiler optimizations, providing
experimental results and analysis. Section 8 briefly
discusses other compiler issues raised by SMT. Related
work appears in Section 9, and we conclude in Section 10.
2 The microarchitecture of a simultaneous
multithreading processor
Our SMT design is an eight-wide, out-of-order
processor with hardware contexts for eight threads. Every
cycle the instruction fetch unit fetches four instructions
from each of two threads. The fetch unit favors high
throughput threads, fetching from the two threads that
have the fewest instructions waiting to be executed. After
fetching, instructions are decoded, their registers are
renamed, and they are inserted into either the integer or
floating point instruction queues. When their operands
become available, instructions (from any thread) issue to
the functional units for execution. Finally, instructions
retire in per-thread program order.
Little of the microarchitecture needs to be
redesigned to enable or optimize simultaneous
multithreading - most components are an integral part of
any conventional, dynamically-scheduled superscalar.
The major exceptions are the larger register file (32
architectural registers per thread, plus 100 renaming
registers), two additional pipeline stages for accessing the
registers (one each for reading and writing), the
instruction fetch scheme mentioned above, and several
per-thread mechanisms, such as program counters, return
stacks, retirement and trap logic, and identifiers in the
TLB and branch target buffer. Notably missing from this
list is special per-thread hardware for scheduling
instructions onto the functional units. Instruction
scheduling is done as in a conventional, out-of-order
superscalar: instructions are issued after their operands
have been calculated or loaded from memory, without
regard to thread; the renaming hardware eliminates inter-thread
register name conflicts by mapping thread-specific
architectural registers onto the processor's physical
registers (see [31] for more details).
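The renaming mechanism just described can be sketched in a few lines. This is our own toy model (class and method names are invented), showing only the key idea: each (thread, architectural register) pair maps to a distinct physical register, which is why no per-thread scheduling hardware is needed.

```python
# Toy model of SMT register renaming: thread-specific architectural
# registers are mapped onto a single pool of physical registers, so
# inter-thread register name conflicts cannot occur.
class RenameTable:
    def __init__(self, num_physical=100):
        self.free = list(range(num_physical))   # free physical registers
        self.map = {}                           # (thread, arch reg) -> phys reg

    def rename_dest(self, thread, arch_reg):
        """Allocate a fresh physical register for a destination write."""
        phys = self.free.pop(0)
        self.map[(thread, arch_reg)] = phys
        return phys

    def lookup_src(self, thread, arch_reg):
        """Source operands read the current mapping for *this* thread."""
        return self.map[(thread, arch_reg)]
```

Two threads both writing architectural register r1 receive different physical registers, so their instructions can issue to the functional units without regard to thread.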
All large hardware data structures (caches, TLBs,
and branch prediction tables) are shared among all
threads. The additional cross-thread conflicts in the
caches and branch prediction hardware are absorbed by
SMT's enhanced latency-hiding capabilities [21], while
TLB interference can be addressed with a technique
described in Section 5.
Rethinking compiler optimizations
As explained above, simultaneous multithreading
relies on a novel feature for attaining greater processor
performance: the coupling of multithreading and wide-
instruction issue by scheduling instructions from
different threads in the same cycle. The new design
prompts us to revisit compiler optimizations that
automatically parallelize loops for enhanced memory
performance and/or increase ILP. In this section we
discuss two factors affected by SMT's unique design,
data sharing among threads and the availability of
instruction issue slots, in light of three compiler
optimizations they affect.
Inter-thread data sharing
Conventional parallelization techniques target
multiprocessors, in which threads are physically
distributed on different processors. To minimize cache
coherence and inter-processor communication overhead,
data and loop distribution techniques partition and
distribute data to match the physical topology of the
multiprocessor. Parallelizing compilers attempt to
decompose applications to minimize synchronization and
communication between loops. Typically, this is
achieved by allocating a disjoint set of data for each
processor, so that they can work independently
[34][10][7].
In contrast, on an SMT, multiple threads execute on
the same processor, affecting performance in two ways.
First, both real and false inter-thread data sharing entail
local memory accesses and incur no coherence overhead,
because of SMT's shared L1 cache. Consequently,
sharing, and even false sharing, is beneficial. Second, by
sharing data among threads, the memory footprint of a
parallel application can be reduced, resulting in better
cache and TLB behavior. Both factors suggest a loop
distribution policy that clusters, rather than separates,
data for multiple threads.
Latency-hiding capabilities and the availability
of instruction issue slots
On most workloads, wide-issue processors typically
cannot sustain high instruction throughput, because of
low instruction-level parallelism in their single, executing
thread. Compiler optimizations, such as software
speculative execution and loop tiling (or blocking), try to
increase ILP (by hiding or reducing instruction latencies,
respectively), but often with the side effect of increasing
the dynamic instruction count. Despite the additional
instructions, the optimizations are often profitable,
because the instruction overhead can be accommodated
in otherwise idle functional units.
Because it can issue instructions from multiple
threads, an SMT processor has fewer empty issue slots;
in fact, sustained instruction throughput can be rather
high, roughly 2 times greater than on a conventional
superscalar [13]. Furthermore, SMT does a better job of
hiding latencies than single-threaded processors, because
it uses instructions from one thread to mask delays in
another. In such an environment, the aforementioned
optimizations may be less useful, or even detrimental,
because the overhead instructions compete with useful
instructions for hardware resources. SMT, with its
simultaneous multithreading capabilities, naturally
tolerates high latencies without the additional instruction
overhead.
Before examining the compiler optimizations, we
describe the methodology used in the experiments. We
chose applications from the SPEC 92 [12], SPEC 95 [30]
and SPLASH-2 [35] benchmark suites (Table 1). All
programs were compiled with the Multiflow trace
scheduling compiler [22] to generate DEC Alpha object
files. Multiflow was chosen, because it generates high-quality
code, using aggressive static scheduling for wide-
issue, loop unrolling, and other ILP-exposing
optimizations. Implicitly-parallel applications (the SPEC
suites) were first parallelized by the SUIF compiler [15];
SUIF's C output was then fed to Multiflow.
A blocked loop distribution policy commonly used
for multiprocessor execution has been implemented in
SUIF; because we used applications compiled with the
latest version of SUIF [5], but did not have access to its
source, we implemented an alternative algorithm
(described in Section 5) by hand. SUIF also finds tileable
loops, determines appropriate multiprocessor-oriented
tile sizes for particular data sets and caches, and then
generates tiled code; we experimented with other tile
sizes with manual coding. Speculative execution was
enabled/disabled by modifying the Multiflow compiler's
machine description file, which specifies which
instructions can be moved speculatively by the trace
scheduler. We experimented with both statically-
generated and profile-driven traces; for the latter,
profiling information was generated by instrumenting the
applications and then executing them with a training
input data set that differs from the set used during
simulation.
The object files generated by Multiflow were linked
with our versions of the ANL [4] and SUIF runtime
libraries to create executables. Our SMT simulator
processes these unmodified Alpha executables and uses
emulation-based, instruction-level simulation to model in
detail the processor pipelines, hardware support for out-
of-order execution, and the entire memory hierarchy,
including TLB usage. The memory hierarchy in our
processor consists of two levels of cache, with sizes,
latencies, and bandwidth characteristics, as shown in
Table 2.

Application      Data set                                                        Instructions simulated
applu            33x33x33 array, 2 iterations                                    272 M    X  X
mgrid
su2cor           16x16x16x16, vector len. 4K, 2 iterations                       5.4 B    X  X
tomcatv          513x513 array, 5 iterations
fft              64K data points
LU               512x512 matrix                                                  431 M    X
water-nsquared   512 molecules, 3 timesteps                                      870 M    X
water-spatial    512 molecules, 3 timesteps                                      784 M    X
compress         train input set                                                 64 M     X
go               train input set, 2stone9                                        700 M    X
li               train input set                                                 258 M    X
m88ksim          test input set, dhrystone                                       164 M    X
perl             train input set, scrabble                                       56 M     X
mxm              matrix multiply of 256x128 and 128x64 arrays
gmt              500x500 Gaussian elimination                                    354 M    X
adi integration  stencil computation for solving partial differential equations

Table 1: Benchmarks. The last three columns identify the studies (loop distribution, speculative execution, and tiling) in which the applications are used.

                           L1 I-cache   L1 D-cache   L2 cache
Cache size (bytes)         128K / 32K   128K /
Line size (bytes)          64           64           64
Banks                      8            8            1
Transfer time/bank         1 cycle      1 cycle      4 cycles
Cache fill time (cycles)
Latency to next level      10

Table 2: Memory hierarchy parameters. When there is a choice of values, the first (the more aggressive) represents a forecast for an SMT implementation roughly three years in the future and is used in all experiments. The second set is more typical of today's memory subsystems and is used to emulate larger data set sizes [29]; it is used in the tiling studies only.

We model the cache behavior, as well as bank
and bus contention. Two TLB sizes were used for the
loop distribution experiments (48 and 128 entries), to
illustrate how the performance of loop distribution
policies is sensitive to TLB size. The larger TLB
represents a probable configuration for a (future) general-purpose
SMT; the smaller is more appropriate for a less
aggressive design, such as an SMT multimedia co-
processor, where page sizes are typically in the range of
2-8MB. For both TLB sizes, misses require two full
memory accesses, incurring a 160 cycle penalty. For
branch prediction, we use a McFarling-style hybrid
predictor with a 256-entry, 4-way set-associative branch
target buffer, and an 8K entry selector that chooses
between a global history predictor (13 history bits) and a
local predictor (a 2K-entry local history table that
indexes into a 4K-entry, 2-bit local prediction table) [24].
Because of the length of the simulations, we limited
our detailed simulation results to the parallel computation
portion of the applications (the norm for simulating
parallel applications). For the initialization phases of the
applications, we used a fast simulation mode that only
simulates the caches, so that they were warm when the
main computation phases were reached. We then turned
on the detailed simulation model.
5 Loop distribution
To reduce communication and coherence overhead
in distributed-memory multiprocessors, parallelizing
compilers employ a blocked loop parallelization policy to
distribute iterations across processors. A blocked
distribution assigns each thread (processor) continuous
array data and iterations that manipulate them (Figure 1).
Figure 2 presents SMT speedups for applications
parallelized using a blocked distribution with two TLB
sizes. Good speedups are obtained for many applications
(as the number of threads is increased), but in the smaller
TLB the performance of several programs (hydro2d,
swim, and tomcatv) degrades with 6 or 8 threads. The 8-
thread case is particularly important, because most
applications will be parallelized to exploit all 8 hardware
contexts in an SMT. Analysis of the simulation
bottleneck metrics indicated that the slowdown was the
result of thrashing in the data TLB, as indicated by the
TLB miss rates of Table 3.
The TLB thrashing is a direct result of blocked
partitioning, which increases the total working set of an
application because threads work on disjoint data sets. In
the most severe cases, each of the 8 threads requires
many TLB entries, because loops stride through several
large arrays at once. Since the primary data sets are
usually larger than a typical 8KB page size, at least one
TLB entry is required for each array.
The swim benchmark from SPECfp95 illustrates an
extreme example. In one loop, 9 large arrays are accessed
on each iteration of the loop. When the loop is
parallelized using a blocked distribution, the data TLB
footprint is 9 arrays * 8 threads = 72 entries,
excluding the entries required for other data. With any TLB
size less than 72 entries, significant thrashing will occur
and the parallelization is not profitable.
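The swim arithmetic above can be captured in a back-of-the-envelope model (our own sketch; it assumes one TLB entry per array per thread under a blocked distribution and one per array under cyclic, ignoring entries for other data):

```python
# Rough TLB-footprint model for a loop that strides through
# `num_arrays` large arrays, each spanning at least one page.
def tlb_footprint(num_arrays, num_threads, policy):
    if policy == "blocked":
        return num_arrays * num_threads   # each thread touches disjoint pages
    elif policy == "cyclic":
        return num_arrays                 # threads share the same pages
    raise ValueError(policy)
```

For swim's 9 arrays and 8 threads, blocked yields 72 entries (overflowing a 48-entry TLB), while cyclic yields only 9.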
The lesson here is that the TLB is a shared resource
that needs to be managed efficiently in an SMT. At least
three approaches can be considered: (1) using fewer than
8 threads when parallelizing, (2) increasing the data TLB
size, or (3) parallelizing loops differently.
The first alternative unnecessarily limits the use of
the thread hardware contexts, and neither exploits SMT
nor the parallel applications to their fullest potential. The
second choice incurs a cost in access time and hardware,
although with increasing chip densities, future processors
may be able to accommodate one. 1 Even with larger TLBs,
1. We found that 64 entries did not solve the problem. However, a 128-
entry data TLB avoids TLB thrashing, and as Figure 2b indicates,
achieves speedups, at least for the SPECfp95 data sets.
Application   Number of threads →
applu         0.7%   0.9%   1.0%   0.9%   1.0%
hydro2d       0.1%   0.1%   0.1%   0.7%   6.3%
mgrid         0.0%   0.0%   0.0%   0.0%   0.1%
su2cor        0.1%   5.2%   7.7%   6.2%   5.5%
tomcatv       0.1%   0.1%   0.1%   2.0%   10.7%

Table 3: TLB miss rates. Miss rates are shown for a blocked distribution and a 48-entry data TLB. The bold entries correspond to decreased performance (see Figure 2) when the number of threads was increased.
however, it is desirable to reduce the TLB footprint on an
SMT. A true SMT workload would be multiprogrammed:
for example, multiple parallel applications could execute
together, comprising more threads than hardware
contexts. The thread scheduler could schedule all 8
threads for the first parallel application, then context
switch to run the second, and later switch back to the
first. In this type of environment it would be beneficial, performance-wise,
to minimize the data TLB footprint required by each
application. (As an example, the TLB footprint of a
multiprogrammed workload consisting of swim and
hydro2d would be greater than 128 entries.)
The third and most desirable solution relies on the
compiler to reduce the data TLB footprint. Rather than
distributing loop iterations in a blocked organization, it
could use a cyclic distribution to cluster the accesses of
multiple threads onto fewer pages. (With cyclic
partitioning, swim would consume 9 rather than 72 TLB
entries). Cyclic partitioning also requires less instruction
overhead in calculating array partition bounds, a non-
negligible, although much less important factor.
(Compare the blocked and cyclic loop distribution code
and data in Figure 1.)
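The two bound computations being contrasted can be sketched as follows (a minimal sketch in our own notation, not the SUIF-generated code): each of T threads computes its own index set for a loop of n iterations.

```python
# Blocked vs. cyclic iteration scheduling for thread t of T.
def blocked_iters(n, T, t):
    # blocked: a contiguous chunk; computing the partition bounds
    # requires a divide (the long-latency operation noted for mgrid)
    chunk = (n + T - 1) // T
    return list(range(t * chunk, min((t + 1) * chunk, n)))

def cyclic_iters(n, T, t):
    # cyclic: start at t and stride by T; no bounds divide needed
    return list(range(t, n, T))
```

Both policies cover every iteration exactly once; they differ only in which thread gets which indices, and thus in which pages the threads touch concurrently.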
Figure 3 illustrates the speedups attained by a cyclic
distribution over blocked, and Table 4 contains the
corresponding changes in data TLB miss rates. With the
48-entry TLB all applications did better with a cyclic
distribution. In most cases the significant decrease in data
TLB misses, coupled with the long 160 cycle TLB miss
penalty, was the major factor. Cyclic increased TLB
conflicts in tomcatv at 2 and 4 threads, but, because the
number of misses was so low, overall program
performance did not suffer. At 6 and 8 threads, tomcatv's
Figure 1: A blocked and cyclic loop distribution example.
The code for an example loop nest is shown in a). When using a blocked
distribution, the code is structured as in b). The cyclic version is shown in c). On
the right, d) and e) illustrate which portions of the array are accessed by each
thread for the two policies. (For clarity, we assume 4 threads.) Assume that each
row of the array is 2KB (512 double precision elements). With blocked distribution
(d), each thread accesses a different 8KB page in memory. With cyclic (e),
however, the loop is decomposed in a manner that allows all four threads to
access a single 8KB page at the same time, thus reducing the TLB footprint.
Figure 2: Speedups over one thread for blocked parallelization, for (a) a 48-entry and (b) a 128-entry data TLB, across applu, hydro2d, mgrid, su2cor, swim, tomcatv, and their average.
blocked data TLB miss rate jumped to 2% and 11%,
causing a corresponding hike in speedup for cyclic.
Absolute miss rates in the larger data TLB are low
enough (usually under 0.2%, except for applu and su2cor,
which reached 0.9%) that most changes produced little or
no benefit for cyclic. In contrast, su2cor saw degradation,
because cyclic scheduling increased loop unrolling
instruction overhead. This performance degradation was
not seen with the smaller TLB size, because cyclic's
improved TLB hit rate offset the overhead.
Mgrid saw a large performance improvement for
both TLB sizes, because of a reduction in dynamic
instruction count. As Figures 1b and 1c illustrate, cyclic
parallelization requires fewer computations and no long-latency
divide.
In summary, these results suggest using a cyclic loop
distribution for SMT, rather than the traditional blocked
distribution. For parallel applications with large data
footprints, cyclic distribution increased program
speedups. (We saw speedups as high as 4.1, even with
the smallish SPECfp95 reference data sets.) For
applications with smaller data footprints, cyclic broke
even. Only in one application, where there was an odd
interaction with the loop unrolling factor, did cyclic
worsen performance.
In a multiprocessor of SMT processors, a cyclic
distribution would still be appropriate within each node.
              48-entry TLB                     128-entry TLB
Application   (number of threads →)            (number of threads →)
applu         0%    50%   58%   53%   15%      0%    91%   98%   85%   69%
hydro2d       0%    0%    14%   91%   99%      0%    0%    0%    0%    14%
mgrid         0%    0%    0%    0%    50%      0%    0%    0%    0%    0%
su2cor        14%   99%   99%   99%   97%      0%    0%    98%   91%   94%
tomcatv       0%    -60%  -60%  96%   99%      0%    -60%  -60%  -60%  -60%

Table 4: Improvement (decrease) in TLB miss rates of cyclic distribution over blocked.
Figure 3: Speedup attained by cyclic over blocked parallelization, for (a) a 48-entry and (b) a 128-entry data TLB, with bars for 2, 4, 6, and 8 threads (the tallest bars reach 3.5 and 4.1). For each application, the execution time for blocked is normalized to 1.0 for all numbers of threads. Thus, each bar compares the speedup for cyclic over blocked with the same number of threads.
A hybrid parallelization policy might be desirable,
though, with a blocked distribution across processors to
minimize inter-processor communication.
6 Software speculative execution
Today's optimizing compilers rely on aggressive
code scheduling to hide instruction latencies. In global
scheduling techniques, such as trace scheduling [22] or
hyperblock scheduling [23], instructions from a predicted
branch path may be moved above a conditional branch,
so that their execution becomes speculative. If at runtime,
the other branch path is taken, then the speculative
instructions are useless and potentially waste processor
resources.
On in-order superscalars or VLIW machines,
software speculation is necessary, because the hardware
provides no scheduling assistance. On an SMT processor
(whose execution core is an out-of-order superscalar), not
only are instructions dynamically scheduled and
speculatively executed by the hardware, but
multithreading is also used to hide latencies. (As the
number of SMT threads is increased, instruction
throughput also increases.) Therefore, the latency-hiding
benefits of software speculative execution may be needed
less, or even be unnecessary, and the additional
instruction overhead introduced by incorrect speculations
may degrade performance.
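A toy issue-slot model makes the trade-off concrete. This model is entirely our own construction (the proportional-sharing assumption and the 2.5 overhead figure are made up); only the 8-wide width and the roughly 6.2 sustained IPC come from the text above.

```python
ISSUE_WIDTH = 8  # 8-wide processor, as described above

def useful_throughput(useful_ipc, overhead_ipc):
    """If useful plus wrong-path speculative instructions fit within the
    issue width, the overhead is free (it fills otherwise-idle slots).
    Once demand exceeds the width, overhead steals issue slots from
    useful work; we assume slots are shared in proportion to demand."""
    total = useful_ipc + overhead_ipc
    if total <= ISSUE_WIDTH:
        return useful_ipc
    return useful_ipc * ISSUE_WIDTH / total
```

With 2.5 overhead instructions per cycle, a superscalar sustaining 2.0 useful IPC still achieves 2.0, but an SMT sustaining 6.2 drops to about 5.7: the same speculation that was free becomes a real cost.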
Our experiments were designed to evaluate the
appropriateness of software speculative execution for an
SMT processor. The results highlight two factors that
determine its effectiveness for SMT: static branch
prediction accuracy and instruction throughput.
Correctly-speculated instructions have no instruction
overhead; incorrectly-speculated instructions, however,
add to the dynamic instruction count. Therefore,
speculative execution is more beneficial for applications
that have high speculation accuracy, e.g., loop-based
programs with either profile-driven or state-of-the-art
static branch prediction.
Table 5 compares the dynamic instruction counts
between (profile-driven) 2 speculative and non-speculative
versions of our applications. Small increases
in the dynamic instruction count indicate that the
compiler (with the assistance of profiling information)
has been able to accurately predict which paths will be
executed. 3 Consequently, speculation may incur no
penalties. Higher increases in dynamic instruction count,
on the other hand, mean wrong-path speculations, and a
probable loss in SMT performance.
While instruction overhead influences the
effectiveness of speculation, it is not the only factor. The
level of instruction throughput in programs without
speculation is also important, because it determines how
easily speculative overhead can be absorbed. With
sufficient instruction issue bandwidth (low IPC),
incorrect speculations may cause no harm; with higher
2. We used profile-driven speculation to provide a best-case comparison to
SMT. Without profiles, more mispredictions would have occurred and
more overhead instructions would have been generated. Consequently,
software speculation would have worse performance than we report,
making its absence appear even more beneficial for SMT.
3. All the SPECfp95 programs, radix from SPLASH-2, and compress
from SPECint95, are loop-based; all have small increases in dynamic
instruction count with speculation.
SPECfp95  increase    SPECint95  increase    SPLASH-2        increase
applu     2.1%        compress   2.9%        fft             13.7%
hydro2d   1.9%        go         12.6%       LU              12.5%
mgrid     0.5%        li         7.3%        radix           0.0%
su2cor    0.1%        m88ksim    4.0%        water-nsquared  3.0%
tomcatv   1.2%
Table 5: Percentage increase in dynamic instruction count due to profile-driven software speculative execution. Data are shown for 8 threads. (One-thread numbers were identical or very close.) Applications in bold have high speculative instruction overhead and high IPC without speculation; those in italics have only the former.
SPECfp95  spec  no spec    SPECint95  spec  no spec    SPLASH-2        spec  no spec
applu     4.9   4.7        compress   4.1   4.0        fft             6.0   6.4
hydro2d   5.6   5.4        go         2.4   2.3        LU              6.7   6.8
mgrid     7.2   7.1        li         4.5   4.6        radix           5.4   5.4
su2cor    6.1   6.0        m88ksim    4.2   4.1        water-nsquared  6.4   6.1
tomcatv   6.2   5.9                                    water-spatial   (values not recovered)
Table 6: Throughput (instructions per cycle) with and without profile-driven software speculation for 8 threads. Programs in bold have high IPC without speculation, plus high speculation overhead; those in italics have only the former.
per-thread ILP or more threads, software speculation
should be less profitable, because incorrectly-speculated
instructions are more likely to compete with useful
instructions for processor resources (in particular, fetch
bandwidth and functional unit issue). Table 6 contains
the instruction throughput for each of the applications.
For some programs IPC is higher with software
speculation, indicating some degree of absorption of the
speculation overhead. In others, it is lower, because of
additional hardware resource conflicts, most notably L1
cache misses.
Speculative instruction overhead (related to static
branch prediction accuracy) and instruction throughput
together explain the speedups (or lack thereof) illustrated
in Figure 4. When both factors were high (the non-loop-based fft, li, and LU), speedups without software
speculation were greatest, ranging up to 22%. 4 If one
factor was low or only moderate, speedups were minimal
or nonexistent (the SPECfp95 applications, radix and
water-nsquared had only high IPC; go, m88ksim and perl
had only speculation overhead). 5 Without either factor,
software speculation helped performance, and for the
same reasons it benefits other architectures - it hid
latencies and executed the speculative instructions in
4. For these applications (and a few others as well), as more threads are
used, the advantage of turning off speculation generally becomes even
larger. Additional threads provide more parallelism, and therefore, speculative
instructions are more likely to compete with useful instructions for
processor resources.
[Charts omitted.] Figure 4: Speedups of applications executing without software speculation over with speculation (speculative execution cycles / no speculation execution cycles), for a) SPECfp95 (applu, hydro2d, tomcatv, mgrid, su2cor, swim) and b) SPLASH-2 and SPECint95 (LU, go, fft, li, perl, water-nsquared, water-spatial). Bars that are greater than 1.0 indicate that no speculation is better.
otherwise idle functional units.
The bottom line is that, while loop-based
applications should be compiled with software
speculative execution, non-loop applications should be
compiled without it. Doing so either improves SMT
program performance or maintains its current level -
performance is never hurt. 6
7 Loop tiling
In order to improve cache behavior, loops can be
tiled to take advantage of data reuse. In this section, we
examine two tiling issues: tile size selection and the
distribution of tiles to threads.
If the tile size is chosen appropriately, the reduction
in average memory access time more than compensates
for the tiling overhead instructions [20][11][6]. (The code
in Figures 6b and 6c illustrates the source of this
overhead). On an SMT, however, tiling may be less
beneficial. First, SMT's enhanced latency-hiding
capabilities may render tiling unnecessary. Second, the
additional tiling instructions may increase execution
time, given SMT's higher (multithreaded) throughput.
(These are the same factors that influence whether to
software speculate.)
To address these issues, we examined tileable loop
nests with different memory access characteristics,
executing on an SMT processor. The benefits of tiling
vary when the size of the cache is changed. Smaller
caches require smaller tiles, which naturally introduce
more instruction overhead. On the other hand, smaller
tiles also produce lower average memory latencies - i.e.,
fewer conflict misses - so the latency reducing benefit of
tiling is better. We therefore varied tile sizes to measure
the performance impact of a range of tiling overhead. We
also simulated two memory hierarchies to gauge the
interaction between cache size, memory latency and tile
size. The larger memory configuration represents a
probable SMT memory subsystem for machines in
production approximately 3 years in the future (see
Section 4). The other configuration is smaller, modeling
today's memory hierarchies, and is designed to provide a
more appropriate ratio between data set and cache size,
modeling loops with larger, i.e., more realistic, data sets
than those in our benchmarks. For these experiments,
each thread was given a separate tile (the tiling norm).
Figure 5 presents the performance (total execution
cycles, average memory access time, and dynamic
instruction count for a range of tile sizes and the larger
memory configuration) of an 8-thread SMT execution of
each application and compares it to a single-thread run
5. Even though it has few floating point computations, water-spatial had a
high IPC without speculation (6.5). Therefore the speculative instructions
bottlenecked the integer units, and execution without speculation was
more profitable.
(approximating execution on a superscalar [13]). The
results indicate that tiling is profitable on an SMT, just as
it is on conventional processors. Mxm may seem to be an
exception, since tiling brings no improvement, but it is an
exception that shows there is no harm in applying the
optimization. Programs executing on an SMT appear to
be insensitive to tile size; at almost all tile sizes
examined, SMT was able to hide memory latencies (as
indicated by the flat AMAT curves), while still absorbing
tiling overhead. Therefore SMT is less dependent on
static algorithms to determine optimal tile sizes for
particular caches and working sets. In contrast,
conventional processors are more likely to have a tile size
sweet spot. Even with out-of-order execution, modern
processors, as well as alternative single-die processor
architectures, lack sufficient latency-hiding abilities;
consequently, they require more exact tile size
calculations from the compiler.
Tile size is also not a performance determinant with
the less aggressive memory subsystem (results not
shown), indicating that tiling on SMT is robust across
6. Keep in mind that had we speculated without run-time support (the profiling), the relative benefit of no speculation (versus speculation) would
have been higher. For example, at 8 threads water-nsquared breaks even
with profile-driven speculation; however, relying only on Multiflow's
static branch prediction gives no speculation a slight edge, with a
speedup of 1.1. Nevertheless, the general conclusions still hold: both
good branch prediction and low multi-thread IPC are needed for software
speculation to benefit applications executing on an SMT.
Figure 5: Tiling results with the larger memory subsystem and separate tiles/thread. All the horizontal axes are tile size. A tile size of 0 means no tiling; sizes greater than 0 are one dimension of the tile, measured in array elements. The vertical axes are metrics for evaluating tiling: dynamic instruction count, total execution cycles and AMAT. [Charts for mxm, adi, and gmt at 8 threads omitted.]
memory hierarchies (or, alternatively, a range of data set
sizes). Execution time is, of course, higher, because
performance is more dependent on AMAT parameters,
rather than tiling overhead. Only adi became slightly less
tolerant of tile size changes. At the largest tile size
measured (32x32), its AMAT increased sharply, because
of inter-thread interference in the small cache. For this
loop nest, either tiles should be sized so that all fit in the
cache, or an alternative tiling technique (described
below) should be used.
The second loop tiling issue is the distribution of
tiles to threads. When parallelizing loops for
multiprocessors, a different tile is allocated to each
processor (thread) to maximize reuse and reduce inter-processor
communication. On an SMT, however, tiling in
this manner could be detrimental. Private, per-thread tiles
discourage inter-thread tile sharing and increase the total-thread tile footprint on the single-processor SMT (the
same factors that make blocked loop iteration scheduling
inappropriate for SMT).
Rather than giving each thread its own tile (called
blocked tiling), a single tile can be shared by all threads,
and loop iterations can be distributed cyclically across
the threads (cyclic tiling). (See Figure 6 for a code
explanation of blocked and cyclic tiling, and Figure 7 for
the effect on the per-thread data layout).
Because the tile is shared, cyclic tiling can be
optimized by increasing the tile size to reduce overhead (Figure 7c). With larger tiles, cyclic tiling can drop
execution times of applications executing on small
memory SMTs closer to that of SMTs with more
aggressive memory hierarchies. (Or, put another way, the
performance of programs with large data sets can
[Code listing omitted.] Figure 6: Code for blocked and cyclic versions of a tiled loop nest: a) original loop, b) blocked tiling, c) cyclic tiling.
approach those with smaller.) For example, Figure 8c
illustrates that with larger tile sizes (greater than 8 array
elements per dimension) cyclic tiling reduced mxm's
AMAT enough to decrease average execution time on the
smaller cache hierarchy by 51% (compare to blocked in
Figure 8b) and to within 35% of blocked tiling on a
memory subsystem several times the size (Figure 8a).
Only at the very smallest tile size did an increase in tiling
overhead overwhelm SMT's ability to hide memory
latency.
Cyclic tiling is still appropriate for a multiprocessor
of SMTs. A hierarchical [8] or hybrid tiling approach
might be most effective. Cyclic tiling could be used to
maximize locality in each processor, while blocked tiling
could distribute tiles across processors to minimize inter-processor communication.
[Diagram omitted.] Figure 7: A comparison of blocked and cyclic tiling
techniques for multiple threads. The blocked tiling is
shown in a). Each tile is a 4x4 array of elements. The numbers
represent the order in which tiles are accessed by each thread. For
cyclic tiling, each tile is still a 4x4 array, but now the tile is shared by
all threads. In this example, each thread gets one row of the tile, as
shown in b). With cyclic tiling, each thread works on a smaller chunk
of data at a time, so the tiling overhead is greater. In c), the tile size
is increased to 8x8 to reduce the overhead. Within each tile, each
thread is responsible for 16 of the elements, as in the original
blocked example.
[Charts omitted; the vertical axes are total execution time in millions of cycles, average memory access time in cycles (AMAT), and dynamic instruction count in millions.] Figure 8: Tiling performance of 8-thread mxm.
Tile sizes are along the x-axis. Results are shown for a)
blocked tiling and the larger memory subsystem, b)
blocked tiling with the smaller memory subsystem, and
c) cyclic tiling, also with the smaller memory subsystem.
8 Other compiler optimizations
In addition to the optimizations studied in this paper,
compiler-directed prefetching, predicated execution and
software pipelining should also be re-evaluated in the
context of an SMT processor.
On a conventional processor, compiler-directed
prefetching [26] can be useful for tolerating memory
latencies, as long as prefetch overhead (due to prefetch
instructions, additional memory bandwidth, and/or cache
interference) is minimal. On an SMT, this overhead is
more detrimental: it interferes not only with the thread
doing the prefetching, but also competes with other
threads.
Predicated execution [23][16][28] is an architectural
model in which instruction execution can be guarded by
boolean predicates that determine whether an instruction
should be executed or nullified. Compilers can then use
if-conversion [2] to transform control dependences into
data dependences, thereby exposing more ILP. Like
software speculative execution, aggressive predication
can incur additional instruction overhead by executing
instructions that are either nullified or produce results
that are never used.
Software pipelining [9][27][18][1] improves
instruction scheduling by overlapping the execution of
multiple loop iterations. Rather than pipelining loops,
SMT can execute them in parallel in separate hardware
contexts. Doing so alleviates the increased register
pressure normally associated with software pipelining.
Multithreading could also be combined with software
pipelining if necessary.
Most of the optimizations discussed in this paper
were originally designed to increase single-thread ILP.
While intra-thread parallelism is still important on an
SMT processor, simultaneous multithreading relies on
multiple threads to provide useful parallelism, and
throughput often becomes a more important performance
metric. SMT raises the issue of compiling for throughput
or for a single-thread. For example, from the perspective
of a single running thread, these optimizations, as
traditionally applied, may be desirable to reduce the
thread's running time. But from a global perspective,
greater throughput (and therefore more useful work) can
be achieved by limiting the amount of speculative work.
9 Related work
The three compiler optimizations discussed in this
paper have been widely investigated in non-SMT
architectures. Loop iteration scheduling for shared-memory
multiprocessors has been evaluated by Wolf and
Lam [34], Carr, McKinley, and Tseng [7], Anderson,
Amarasinghe, and Lam [3], and Cierniak and Li [10],
among others. These studies focus on scheduling to
minimize communication and synchronization overhead;
all restructured loops and data layout to improve access
locality for each processor. In particular, Anderson et al.,
discuss the blocked and cyclic mapping schemes, and
present a heuristic for choosing between them.
Global scheduling optimizations, like trace
scheduling [22], superblocks [25] and hyperblocks [23],
allow code motion (including speculative motion) across
basic blocks, thereby exposing more ILP for statically-scheduled
VLIWs and wide-issue superscalars. In their
study on ILP limits, Lam and Wilson [19] found that
speculation provides greater speedups on loop-based
numeric applications than on non-numeric codes, but
their study did not include the effects of wrong-path
instructions.
Previous work in code transformation for improved
locality has proposed various frameworks and algorithms
for selecting and applying a range of loop
transformations [14][33][6][17][34][7]. These studies
illustrate the effectiveness of tiling and also propose
other loop transformations for enabling better tiling.
Lam, Rothberg, and Wolf [20], Coleman and McKinley
[11], and Carr et al., [6] show that application
performance is sensitive to the tile size, and present
techniques for selecting tile sizes based on problem-size
and cache parameters, rather than targeting a fixed-size
or fixed-cache occupancy.
10 Conclusions
This paper has examined compiler optimizations in
the context of a simultaneous multithreading architecture.
An SMT architecture differs from previous parallel
architectures in several significant ways. First, SMT
threads share processor and memory system resources of
a single processor on a fine-grained basis, even within a
single cycle. Optimizations for an SMT should therefore
seek to benefit from this fine-grained sharing, rather than
avoiding it, as is done on conventional shared-memory
multiprocessors. Second, SMT hides intra-thread
latencies by using instructions from other active threads;
optimizations that expose ILP may not be needed. Third,
instruction throughput on an SMT is high; therefore
optimizations that increase instruction count may degrade
performance.
An effective compilation strategy for simultaneous
multithreading processors must recognize these unique
characteristics. Our results show specific cases where an
SMT processor can benefit from changing the compiler
optimization strategy. In particular, we showed that (1)
cyclic iteration scheduling (as opposed to blocked
scheduling) is more appropriate for an SMT, because of
its ability to reduce the TLB footprint; (2) software
speculative execution can be bad for an SMT, because it
decreases useful instruction throughput; (3) loop tiling
algorithms can be less concerned with determining the
exact tile size, because SMT performance is less sensitive
to tile size; and (4) loop tiling to increase, rather than
reduce, inter-thread tile sharing, is more appropriate for
an SMT, because it increases the benefit of sharing
memory system resources.
Acknowledgments
We would like to thank John O'Donnell of Equator
Technologies, Inc. and Tryggve Fossum of Digital
Equipment Corp. for the source to the Alpha AXP
version of the Multiflow compiler; and Jennifer
Anderson of the DEC Western Research Laboratory for
providing us with SUIF-parallelized copies of the
benchmarks. We also would like to thank
Jeffrey Dean of DEC WRL and the referees, whose
comments helped improve this paper. This research was
supported by the Washington Technology Center, NSF
grants MIP-9632977, CCR-9200832, and CCR-9632769,
DARPA grant F30602-97-2-0226, ONR grants N00014-
92-J-1395 and N00014-94-1-1136, DEC WRL, and a
fellowship from Intel Corporation.
--R
Optimal loop parallelization.
Conversion of control dependence to data dependence.
Data and computation transformations for multiprocessors.
Portable Programs for Parallel Processors.
Compiler blockability of numerical algorithms.
Compiler optimizations for improving data locality.
Hierarchical tiling for improved superscalar performance.
An approach to scientific array processing: The architectural design of the AP-120B/FPS-164 family
Unifying data and control transformations for distributed shared-memory machines
Tile size selection using cache organization and data layout.
New CPU benchmark suites from SPEC.
Simultaneous multithreading: A platform for next-generation processors
Strategies for cache and local memory management by global program transformation.
Maximizing multiprocessor performance with the SUIF compiler.
Highly concurrent scalar processing.
Maximizing loop parallelism and improving data locality via loop fusion and distribution.
Software pipelining: An effective scheduling technique for VLIW machines.
Limits of control flow on parallelism.
The cache performance and optimizations of blocked algorithms.
Converting thread-level parallelism to instruction-level parallelism via simultaneous multithreading
The Multiflow trace scheduling compiler.
Effective compiler support for predicated execution using the hyperblock.
Combining branch predictors.
The superblock: An effective technique for VLIW and superscalar compilation.
Design and evaluation of a compiler algorithm for prefetching.
Some scheduling techniques and an easily schedulable horizontal architecture for high performance scientific computing.
The Cydra 5 departmental supercomputer.
Scaling parallel programs for multiprocessors: Methodology and examples.
Exploiting choice: Instruction fetch and issue on an implementable simultaneous multithreading processor.
Simultaneous multithreading: Maximizing on-chip parallelism.
A data locality optimizing algorithm.
A loop transformation theory and an algorithm to maximize parallelism.
The SPLASH-2 programs: Characterization and methodological considerations
| simultaneous multithreading;compiler optimizations;processor architecture;software speculative execution;performance;loop-iteration scheduling;parallel architecture;cache size;inter-processor communication;memory system resources;latency hiding;parallel programs;optimising compilers;shared-memory multiprocessors;loop tiling;fine-grained sharing;instructions;cyclic algorithm;inter-thread instruction-level parallelism
266814 | Trace processors. | Traces are dynamic instruction sequences constructed and cached by hardware. A microarchitecture organized around traces is presented as a means for efficiently executing many instructions per cycle. Trace processors exploit both control flow and data flow hierarchy to overcome complexity and architectural limitations of conventional superscalar processors by (1) distributing execution resources based on trace boundaries and (2) applying control and data prediction at the trace level rather than individual branches or instructions. Three sets of experiments using the SPECInt95 benchmarks are presented. (i) A detailed evaluation of trace processor configurations: the results affirm that significant instruction-level parallelism can be exploited in integer programs (2 to 6 instructions per cycle). We also isolate the impact of distributed resources, and quantify the value of successively doubling the number of distributed elements. (ii) A trace processor with data prediction applied to inter-trace dependences: potential performance improvement with perfect prediction is around 45% for all benchmarks. With realistic prediction, gcc achieves an actual improvement of 10%. (iii) Evaluation of aggressive control flow: some benchmarks benefit from control independence by as much as 10%. | Introduction
Improvements in processor performance come about
in two ways - advances in semiconductor technology and
advances in processor microarchitecture. To sustain the
historic rate of increase in computing power, it is important
for both kinds of advances to continue. It is almost
certain that clock frequencies will continue to increase.
The microarchitectural challenge is to issue many instructions
per cycle and to do so efficiently. We argue that a
conventional superscalar microarchitecture cannot meet
this challenge due to its complexity - its inefficient
approach to multiple instruction issue - and due to its
architectural limitations on ILP - its inability to extract
sufficient parallelism from sequential programs.
In going from today's modest issue rates to 12- or 16-
way issue, superscalar processors face complexity at all
phases of instruction processing. Instruction fetch bandwidth
is limited by frequent branches. Instruction dispatch, register renaming in particular, requires
increasingly complex dependence checking among all
instructions being dispatched. It is not clear that wide
instruction issue from a large pool of instruction buffers or
full result bypassing among functional units is feasible
with a very fast clock.
Even if a wide superscalar processor could efficiently
exploit ILP, it still has fundamental limitations in finding
the parallelism. These architectural limitations are due to
the handling of control, data, and memory dependences.
The purpose of this paper is to advocate a next generation
microarchitecture that addresses both complexity
and architectural limitations. The development of this
microarchitecture brings together concepts from a significant
body of research targeting these issues and fills in
some gaps to give a more complete and cohesive picture.
Our primary contribution is evaluating the performance
potential that this microarchitecture offers.
1.1 Trace processor microarchitecture
The proposed microarchitecture (Figure 1) is organized
around traces. In this context, a trace is a dynamic
sequence of instructions captured and stored by hardware. The primary constraint on a trace is a hardware-
determined maximum length, but there may be a number
of other implementation-dependent constraints. Traces are
built as the program executes, and are stored in a trace
cache [1][2]. Using traces leads to interesting possibilities
that are revealed by the following trace properties:
. A trace can contain any number and type of control
transfer instructions, that is, any number of implicit
control predictions.
This property suggests the unit of control prediction
should be a trace, not individual control transfer
instructions. A next-trace predictor [3] can make predictions
at the trace level, effectively ignoring the
embedded control flow in a trace.
. A trace uses and produces register values that are
either live-on-entry, entirely local, or live-on-exit
[4][5]. These are referred to as live-ins, locals, and
live-outs, respectively.
This property suggests a hierarchical register file
implementation: a local register file per trace for holding
values produced and consumed solely within a
trace, and a global register file for holding values that
are live between traces. The distinction between local
dependences within a trace and global dependences
between traces also suggests implementing a distributed instruction window based on trace boundaries.
The result is a processor composed of processing elements
(PE), each having the organization of a small-scale
superscalar processor. Each PE has (1) enough instruction
buffer space to hold an entire trace, (2) multiple dedicated
functional units, (3) a dedicated local register file for holding
local values, and (4) a copy of the global register file.
Figure 1. A trace processor.
1.1.1 Hierarchy: overcoming complexity
An organization based on traces reduces complexity
by taking advantage of hierarchy. There is a control flow
hierarchy - the processor sequences through the program
at the level of traces, and contained within traces is a finer
granularity of control flow. There is also a value hierarchy
- global and local values - that enables the processor to
efficiently distribute execution resources. With hierarchy
we overcome complexity at all phases of processing:
. Instruction fetch: By predicting traces, multiple
branches are implicitly predicted - a simpler alternative
to brute-force extensions of single-branch predictors.
Together the trace cache and trace predictor offer a
solution to instruction fetch complexity.
. Instruction dispatch: Because a trace is given a local
register file that is not affected by other traces, local
registers can be pre-renamed in the trace cache [4][5].
Pre-renaming by definition eliminates the need for
dependence checking among instructions being dis-
patched, because locals represent all dependences
entirely contained within a trace. Only live-ins and
live-outs go through global renaming at trace dispatch,
thereby reducing bandwidth pressure to register maps
and the free-list.
. Instruction issue: By distributing the instruction window
among smaller trace-sized windows, the instruction
issue logic is no longer centralized. Furthermore,
each PE has fewer internal result buses, and thus a
given instruction monitors fewer result tag buses.
. Result bypassing: Full bypassing of local values
among functional units within a PE is now feasible,
despite a possibly longer latency for bypassing global
values between PEs.
. Register file: The size and bandwidth requirements of
the global register file are reduced because it does not
hold local values. Sufficient read port bandwidth is
achieved by having copies in each PE. Write ports
cannot be handled this way because live-outs must be
broadcast to all copies of the global file; however, write
bandwidth is reduced by eliminating local value traffic.
. Instruction retirement: Retirement is the "dual" of dispatch
in that physical registers are returned to the free-
list. Free-list update bandwidth is reduced because
only live-outs are mapped to physical registers.
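The value hierarchy above (live-ins, locals, live-outs) can be illustrated with a short sketch. The trace encoding here - a list of (destinations, sources) register-name pairs - is hypothetical and used only for illustration; without liveness information, the last write of each register is conservatively treated as live-out, as hardware must do.

```python
def classify_trace_values(trace):
    """Classify a trace's register values as live-ins, locals, or live-outs.

    `trace` is a list of (dests, srcs) tuples of architectural register
    names -- a hypothetical encoding for this sketch. A value produced
    inside the trace is *local* if the same register is written again
    before the trace ends (no later trace can see it); the last write of
    each register is conservatively *live-out*.
    """
    live_ins = set()
    written = set()
    last_writer = {}                 # register -> index of its last write
    for i, (dests, srcs) in enumerate(trace):
        for r in srcs:
            if r not in written:
                live_ins.add(r)      # read before any internal write
        for r in dests:
            last_writer[r] = i
            written.add(r)
    values = []                      # one entry per produced value
    for i, (dests, _) in enumerate(trace):
        for r in dests:
            kind = "live-out" if last_writer[r] == i else "local"
            values.append((i, r, kind))
    return live_ins, values

# e.g. r1 flows in, and the first version of r2 is purely local:
ins, vals = classify_trace_values([({"r2"}, {"r1"}),     # r2 <- f(r1)
                                   ({"r2"}, {"r2"}),     # r2 <- g(r2)
                                   ({"r3"}, {"r2"})])    # r3 <- h(r2)
```

Only the live-ins and live-outs in such a classification ever touch the global register file; the local version of r2 stays entirely within one PE.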
1.1.2 Speculation: exposing ILP
To alleviate the limitations imposed by control, data,
and memory dependences, the processor employs aggressive
speculation.
Control flow prediction at the granularity of traces can
yield as good or better overall branch prediction accuracy
than many aggressive single-branch predictors [3].
Value prediction [6][7] is used to relax the data dependence
constraints among instructions. Rather than predict
source or destination values of all instructions, we limit
value predictions to live-ins of traces. Limiting predictions
to a critical subset of values imposes structure on
value prediction; predicting live-ins is particularly appealing
because it enables traces to execute independently.
Memory speculation is performed in two ways. First,
all load and store addresses are predicted at dispatch time.
Second, we employ memory dependence speculation -
loads issue as if there are no prior stores, and disambiguation
occurs after the fact via a distributed mechanism.
1.1.3 Handling misspeculation: selective reissuing
Because of the pervasiveness of speculation, handling
of misspeculation must fundamentally change. Misspeculation
is traditionally viewed as an uncommon event and is
treated accordingly: a misprediction represents a barrier
for subsequent computation. However, data misspeculation
in particular should be viewed as a normal aspect of
computation.
Data misspeculation may be caused by a mispredicted
source register value, a mispredicted address, or a memory
dependence violation. If an instruction detects a mispre-
diction, it will reissue with new values for its operands. A
new value is produced and propagated to dependent
instructions, which will in turn reissue, and so on. Only
instructions along the dependence chain reissue. The
mechanism for selective reissuing is simple because it is in
fact the existing issue mechanism.
Selective reissuing due to control misprediction,
while more involved, is also discussed and the performance
improvement is evaluated for trace processors.
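The chain-following behavior of selective reissue can be sketched as a worklist walk over the dataflow graph. The dictionaries below (consumer lists and produced-value tags) are hypothetical bookkeeping for illustration only; in the processor itself no separate structure is needed, because the existing wakeup/issue logic performs exactly this propagation.

```python
def selective_reissue(consumers, produced_tag_of, mispredicted_tag):
    """Reissue only the dependence chain rooted at a mispredicted value.

    consumers: value tag -> instructions that use it (hypothetical names).
    produced_tag_of: instruction -> tag of the value it produces.
    """
    worklist = [mispredicted_tag]
    reissued = set()
    while worklist:
        tag = worklist.pop()
        for inst in consumers.get(tag, []):
            if inst in reissued:
                continue
            reissued.add(inst)                      # inst executes again...
            worklist.append(produced_tag_of[inst])  # ...producing a new value
    return reissued

# Chain A -> B -> C reissues; independent instruction D is untouched:
chain = selective_reissue({"t0": ["A"], "tA": ["B"], "tB": ["C"], "tX": ["D"]},
                          {"A": "tA", "B": "tB", "C": "tC", "D": "tD"}, "t0")
```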
1.2 Prior work
This paper draws from significant bodies of work that
either efficiently exploit ILP via distribution and hierarchy,
expose ILP via aggressive speculation, or do both. For the
most part, this body of research focuses on hardware-
intensive approaches to ILP.
Work in the area of multiscalar processors [8][9] first
recognized the complexity of implementing wide instruction
issue in the context of centralized resources. The
result is an interesting combination of compiler and hard-
ware. The compiler divides a sequential program into
tasks, each task containing arbitrary control flow. Tasks,
like traces, imply a hierarchy for both control flow and val-
ues. Execution resources are distributed among multiple
processing elements and allocated at task granularity. At
run-time tasks are predicted and scheduled onto the PEs,
and both control and data dependences are enforced by the
hardware (with aid from the compiler in the case of register
dependences).
Multiscalar processors have several characteristics in
common with trace processors. Distributing the instruction
window and register file solves instruction issue and
register file complexity. Mechanisms for multiple flows of
control not only avoid instruction fetch and dispatch com-
plexity, but also exploit control independence. Because
tasks are neither scheduled by the compiler nor guaranteed
to be parallel, these processors demonstrate aggressive
control speculation [10] and memory dependence speculation
[8][11].
More recently, other microarchitectures have been
proposed that address the complexity of superscalar pro-
cessors. The trace window organization proposed in [4] is
the basis for the microarchitecture presented here. Con-
ceivably, other register file and memory organizations
could be superimposed on this organization; e.g. the original
multiscalar distributed register file [12], or the distributed
speculative-versioning cache [13].
So far we have discussed microarchitectures that distribute
the instruction window based on task or trace
boundaries. Dependence-based clustering is an interesting
alternative [14][15]. Similar to trace processors, the window
and execution resources are distributed among multiple
smaller clusters. However, instructions are dispatched
to clusters based on dependences, not based on proximity
in the dynamic instruction stream as is the case with
traces. Instructions are steered to clusters so as to localize
dependences within a cluster, and minimize dependences
between clusters.
Early work [16] proposed the fill-unit for constructing
and reusing larger units of execution other than individual
instructions, a concept very relevant to next generation
processors. This and subsequent research [17][18] emphasize
atomicity, which allows for unconstrained instruction
preprocessing and code scheduling.
Recent work in value prediction and instruction collapsing
[6][7] address the limits of true data dependences
on ILP. These works propose exposing more ILP by predicting
addresses and register values, as well as collapsing
instructions for execution in combined functional units.
1.3 Paper overview
In Section 2 we describe the microarchitecture in
detail, including the frontend, the value predictor, the processing
element, and the mechanisms for handling mis-
speculation. Section 3 describes the performance
evaluation method. Primary performance results, including
a comparison with superscalar, are presented in
Section 4, followed by results with value prediction in
Section 5 and a study of control flow in Section 6.
2. Microarchitecture of the trace processor
2.1 Instruction supply
A trace is uniquely identified by the addresses of all
its instructions. Of course this sequence of addresses can
be encoded in a more compact form, for example, starting
addresses of all basic blocks, or trace starting address plus
branch directions. Regardless of how a trace is identified,
trace ids and derivatives of these trace ids are used to
sequence through the program.
The shaded region in Figure 2 shows the fast-path of
instruction fetch: the next-trace predictor [3], the trace
cache, and sequencing logic to coordinate the datapath.
The trace predictor outputs a primary trace id and one
alternate trace id prediction in case the primary one turns
out to be incorrect (one could use more alternates, but with
diminishing returns). The sequencer applies some hash
function on the bits of the predicted trace id to form an
index into the trace cache. The trace cache supplies the
trace id (equivalent of a cache tag) of the trace cached at
that location, which is compared against the full predicted
trace id to determine if there is a hit. In the best case, the
predicted trace is both cached and correct.
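A direct-mapped version of this probe can be sketched as follows. The hash, index width, and entry fields are invented for illustration; the key point is that the full trace id serves as the tag, so a hit guarantees the cached trace is exactly the predicted one.

```python
def trace_cache_lookup(trace_cache, predicted_trace_id, index_bits=11):
    """Direct-mapped trace cache probe (sketch; fields are invented).

    A hash of the predicted trace id selects a set; the stored trace id
    (the equivalent of a cache tag) must match the full prediction.
    """
    index = hash(predicted_trace_id) & ((1 << index_bits) - 1)
    entry = trace_cache.get(index)    # {'trace_id': ..., 'insts': [...]}
    if entry is not None and entry['trace_id'] == predicted_trace_id:
        return entry['insts']         # hit: dispatch the whole trace
    return None                       # miss: build trace via the slow path
```

On a miss the returned None would steer fetch to the slow-path sequencer, which constructs the trace from the instruction cache using the predicted trace id.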
If the predicted trace misses in the cache, a trace is
constructed by the slow-path sequencer (non-shaded path
in Figure 2). The predicted trace id encodes the instructions
to be fetched from the instruction cache, so the
sequencer uses the trace id directly instead of the conventional
branch predictor.
The execution engine returns actual branch outcomes.
If the predicted trace is partially or completely incorrect,
an alternate trace id that is consistent with the known
branch outcomes can be used to try a different trace (trace
cache hit) or build the trace remainder (trace cache miss).
If alternate ids prove insufficient, the slow-path sequencer
forms the trace using the conventional branch predictor
and actual branch outcomes.
Figure 2. Frontend of the trace processor.
2.1.1 Trace selection
An interesting aspect of trace construction is the algorithm
used to delineate traces, or trace selection. The
obvious trace selection decisions involve either stopping at
or embedding various types of control instructions: call
directs, call indirects, jump indirects, and returns. Other
heuristics may stop at loop branches, ensure that traces
end on basic block boundaries, embed leaf functions,
embed unique call sites, or enhance control independence.
Trace selection decisions affect instruction fetch band-
width, PE utilization, load balance between PEs, trace
cache hit rate, and trace prediction accuracy - all of which
strongly influence overall performance. Often, targeting
trace selection for one factor negatively impacts another
factor. We have not studied this issue extensively. Unless
otherwise stated, the trace selection we use is: (1) stop at a
maximum of 16 instructions, or (2) stop at any call indi-
rect, jump indirect, or return instruction.
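The default selection rules stated above can be sketched directly. The (pc, type) stream encoding is hypothetical; only the two stopping rules come from the text.

```python
MAX_TRACE = 16
STOP_TYPES = {"call_indirect", "jump_indirect", "return"}

def select_trace(dynamic_stream):
    """Delineate one trace with the default selection rules: stop at a
    maximum of 16 instructions, or just after any call indirect, jump
    indirect, or return. `dynamic_stream` yields (pc, type) pairs --
    a hypothetical encoding for this sketch."""
    trace = []
    for pc, itype in dynamic_stream:
        trace.append((pc, itype))
        if len(trace) == MAX_TRACE or itype in STOP_TYPES:
            break
    return trace
```

Tightening STOP_TYPES (e.g. adding direct calls and loop branches, as in the 'S' selection) shortens traces but, as Figure 3 shows, can substantially improve trace cache hit rates.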
2.1.2 Trace preprocessing
Traces can be preprocessed prior to being stored in the
trace cache. Our processor model requires pre-renaming
information in the trace cache. Register operands are
marked as local or global, and locals are pre-renamed to
the local register file [4]. Although not done here, preprocessing
might also include instruction scheduling [17],
storing information along with the trace to set up the reorder
buffer quickly at dispatch time, or collapsing dependent
instructions across basic block boundaries [7].
2.1.3 Trace cache performance
In this section we present miss rates for different trace
cache configurations. The miss rates are measured by running
through the dynamic instruction stream, dividing it
into traces based on the trace selection algorithm, and
looking up the successive trace ids in the cache. We only
include graphs for go and gcc. Compress fits entirely
within a 16K direct mapped trace cache; jpeg and xlisp
show under 4% miss rates for a 32K direct mapped cache.
There are two sets of curves, for two different trace
selection algorithms. Each set shows miss rates for 1-way
through 8-way associativity, with total size in kilobytes
(instruction storage only) along the x-axis. The top four
curves are for the default trace selection (Section 2.1.1).
The bottom four curves, labeled with 'S' in the key, add
two more stopping constraints: stop at call directs and stop
at loop branches. Default trace selection gives average
trace lengths of 14.8 for go and 13.9 for gcc. The more
constraining trace selection gives smaller average trace
lengths - 11.8 for go and 10.9 for gcc - but the advantage is
much lower miss rates for both benchmarks. For go in
particular, the miss rate is 14% with constrained selection
and a 128kB trace cache, down from 34%.
Figure 3. Trace cache miss rates.
2.1.4 Trace predictor
The core of the trace predictor is a correlated predictor
that uses the history of previous traces. The previous
few trace ids are hashed down to fewer bits and placed in a
shift register, forming a path history. The path history is
used to form an index into a prediction table with 2^16
entries. Each table entry consists of the predicted trace id,
an alternate trace id, and a 2-bit saturating counter for
guiding replacement. The accuracy of the correlated predictor
is aided by having a return history stack. For each
call within a trace the path history register is copied and
pushed onto a hardware stack. When a trace ends in a
return, a path history value is popped from the stack and
used to replace all but the newest trace in the path history
register.
To reduce the impact of cold-starts and aliasing, the
correlated predictor is augmented with a second smaller
predictor that uses only the previous trace id, not the
whole path history. Each table entry in the correlated predictor
is tagged with the last trace to use the entry. If the
tag matches then the correlated predictor is used, otherwise
the simpler predictor is used. If the counter of the
simpler predictor is saturated its prediction is automatically
used, regardless of the tag. A more detailed treatment
of the trace predictor can be found in [3].
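The hybrid selection logic of this predictor can be sketched as below. The hash function, table sizes, and training policy are placeholders (the real predictor is detailed in [3]); what the sketch captures is the path-history index, the tag check on the correlated table, and the saturated-counter override by the simple predictor.

```python
class TracePredictor:
    """Sketch of the hybrid next-trace predictor described above."""
    DEPTH = 4            # trace ids kept in the path history
    BITS = 16            # index width into the correlated table (placeholder)

    def __init__(self):
        self.history = [0] * self.DEPTH
        self.correlated = {}   # index -> (pred_id, alt_id, ctr, tag)
        self.simple = {}       # last trace id -> (pred_id, ctr 0..3)

    def _index(self):
        h = 0
        for tid in self.history:           # fold hashed ids into one index
            h = ((h << 5) ^ hash(tid)) & ((1 << self.BITS) - 1)
        return h

    def predict(self, last_trace_id):
        corr = self.correlated.get(self._index())
        simp = self.simple.get(last_trace_id)
        if simp and simp[1] == 3:          # saturated simple counter wins,
            return simp[0]                 # regardless of the tag
        if corr and corr[3] == last_trace_id:
            return corr[0]                 # tag match: use correlated pred.
        return simp[0] if simp else None

    def train(self, last_trace_id, actual_id):
        """Naive update policy, for illustration only."""
        self.correlated[self._index()] = (actual_id, None, 0, last_trace_id)
        pid, ctr = self.simple.get(last_trace_id, (actual_id, 0))
        ctr = min(ctr + 1, 3) if pid == actual_id else 0
        self.simple[last_trace_id] = (actual_id, ctr)
        self.history = self.history[1:] + [actual_id]
```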
2.1.5 Trace characteristics
Important trace characteristics are shown in Table 1.
Average trace length affects instruction supply bandwidth
and instruction buffer utilization - the larger the better. We
want a significant fraction of values to be locals, to reduce
global communication. Note that the ratio of locals to
live-outs tends to be higher for longer traces, as observed
in [4].
2.2 Value predictor
The value predictor is context-based and organized as
a two-level table. Context-based predictors learn values
that follow a particular sequence of previous values [19].
The first-level table is indexed by a unique prediction id,
derived from the trace id. A given trace has multiple prediction
ids, one per live-in or address in the trace. An
entry in the first-level table contains a pattern that is a
hashed version of the previous 4 data values of the item
being predicted. The pattern from the first-level table is
used to look up a 32-bit data prediction in the second-level
table. Replacement is guided by a 3-bit saturating counter
associated with each entry in the second-level table.
The predictor also assigns a confidence level to predictions
[20][6]. Instructions issue with predicted values
only if the predictions have a high level of confidence.
The confidence mechanism is a 2-bit saturating counter
stored with each pattern in the first-level table.
The table sizes used in this study are very large in
order to explore the potential of such an approach: 2^16
entries in the first-level table and 2^20 entries in the second-level table.
Accuracy of context-based value prediction is affected by
timing of updates, which we accurately model. A detailed
treatment of the value predictor can be found in [19].
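The two-level lookup and confidence mechanism can be sketched as below. The value hash, table sizes, and thresholds are placeholders (the real predictor is detailed in [19]), and per-id value histories are kept inside the class for simplicity; hardware would instead update patterns in place.

```python
class ContextValuePredictor:
    """Sketch of the two-level, context-based value predictor above."""

    def __init__(self):
        self.history = {}   # prediction id -> values seen so far
        self.level1 = {}    # prediction id -> [pattern, confidence 0..3]
        self.level2 = {}    # pattern -> predicted value

    @staticmethod
    def _pattern(values):
        h = 0
        for v in values:    # fold the last 4 data values into one pattern
            h = (h * 31 + v) & 0xFFFF
        return h

    def predict(self, pid):
        entry = self.level1.get(pid)
        if not entry or entry[1] < 3:      # only high-confidence predictions
            return None                    # are actually used at dispatch
        return self.level2.get(entry[0])

    def update(self, pid, actual):
        hist = self.history.setdefault(pid, [])
        entry = self.level1.setdefault(pid, [self._pattern(hist[-4:]), 0])
        predicted = self.level2.get(entry[0])
        entry[1] = (min(entry[1] + 1, 3) if predicted == actual
                    else max(entry[1] - 1, 0))
        self.level2[entry[0]] = actual     # learn: this context -> actual
        hist.append(actual)
        entry[0] = self._pattern(hist[-4:])  # context for next prediction
```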
2.3 Distributed instruction window
2.3.1 Trace dispatch
The dispatch stage performs decode, renaming, and
value predictions. Live-in registers of the trace are
renamed by looking up physical registers in the global register
rename map. Independently, live-out registers
receive new names from the free-list of physical registers,
and the global register rename map is updated to reflect
these new names. The dispatch stage looks up value predictions
for all live-in registers and all load/store addresses
in the trace.
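The register-renaming part of dispatch can be sketched as follows; the function and structure names are hypothetical. Note that only live-ins and live-outs appear here - locals were pre-renamed in the trace cache and never consume map or free-list bandwidth.

```python
def dispatch_rename(live_ins, live_outs, rename_map, free_list):
    """Rename a trace's global registers at dispatch (sketch).

    Live-ins read their current physical names from the global map;
    live-outs receive fresh physical registers from the free-list, and
    the map is updated so later traces see the new names.
    """
    live_in_phys = {r: rename_map[r] for r in live_ins}
    live_out_phys = {}
    for r in live_outs:
        p = free_list.pop()           # allocate a fresh physical register
        live_out_phys[r] = p
        rename_map[r] = p             # later traces see the new name
    return live_in_phys, live_out_phys
```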
The dispatch stage also performs functions related to
precise exceptions, similar to the mechanisms used in conventional
processors. First, a segment of the reorder buffer
(ROB) is reserved by the trace. Enough information is
placed in the segment to allow backing up rename map
state instruction by instruction. Second, a snapshot of the
register rename map is saved at trace boundaries, to allow
backing up state to the point of an exception quickly. The
processor first backs up to the snapshot corresponding to
the excepting trace, and then information in that trace's
ROB segment is used to back up to the excepting instruc-
tion. The ROB is also used to free physical registers.
2.3.2 Freeing and allocating PEs
For precise interrupts, traces must be retired in-order,
requiring the ROB to maintain state for all outstanding
traces. The number of outstanding traces is therefore limited
by the number of ROB segments (assuming there are
enough physical registers to match).
Because ROB state handles trace retirement, a PE can
be freed as soon as its trace has completed execution.
Unfortunately, knowing when a trace is "completed" is not
simple, due to our misspeculation model (a mechanism is
needed to determine when an instruction has issued for the
last time). Consequently, a PE is freed when its trace is
retired, because retirement guarantees instructions are
done. This is a lower performance solution because it
effectively arranges the PEs in a circular queue, just like
segments of the ROB. PEs are therefore allocated and
freed in a fifo fashion, even though they might in fact complete
out-of-order.
Table 1. Trace characteristics.
statistic comp gcc go jpeg xlisp
trace length (inst) 14.5 13.9 14.8 15.8 12.4
live-ins 5.2 4.3 5.0 6.8 4.1
live-outs 6.2 5.6 5.8 6.4 5.1
locals 5.6 3.8 5.9 7.1 2.6
loads 2.6 3.6 3.1 2.9 3.7
stores 0.9 1.9 1.0 1.2 2.2
cond. branches 2.1 2.1 1.8 1.0 1.9
control inst 2.9 2.8 2.2 1.3 2.9
trace misp. rate 17.1% 8.1% 15.7% 6.6% 6.9%
2.3.3 Processing element detail
The datapath for a processing element is shown in
Figure 4. There are enough instruction buffers to hold the
largest trace. For loads and stores, the address generation
part is treated as an instruction in these buffers. The memory
access part of loads and stores, along with address pre-
dictions, are placed into load/store buffers. Included with
the load/store buffers is validation hardware for validating
predicted addresses against the result of address computa-
tions. A set of registers is provided to hold live-in predic-
tions, along with hardware for validating the predictions
against values received from other traces.
Figure 4. Processing element detail.
Instructions are ready to issue when all operands
become available. Live-in values may already be available
in the global register file. If not, live-ins may have been
predicted and the values are buffered with the instruction.
In any case, instructions continually monitor result buses
for the arrival of new values for its operands; memory
access operations continually monitor the arrival of new
computed addresses.
Associated with each functional unit is a queue for
holding completed results, so that instruction issue is not
blocked if results are held up waiting for a result bus. The
result may be a local value only, a live-out value only, or
both; in any case, local and global result buses are arbitrated
separately. Global result buses correspond directly
with write ports to the global register file, and are characterized
by two numbers: the total number of buses and the
number of buses for which each PE can arbitrate in a
cycle. The memory buses correspond directly with cache
ports, and are characterized similarly.
2.4 Misspeculation
In Section 1.1.3 we introduced a model for handling
misspeculation. Instructions reissue when they detect
mispredictions; selectively reissuing dependent instructions
follows naturally by the receipt of new values. This
section describes the mechanisms for detecting various
kinds of mispredictions.
2.4.1 Mispredicted live-ins
Live-in predictions are validated when the computed
values are seen on the global result buses. Instruction
buffers and store buffers monitor comparator outputs corresponding
to live-in predictions they used. If the predicted
and computed values match, instructions that used
the predicted live-in are not reissued. Otherwise they do
reissue, in which case the validation latency appears as a
misprediction penalty, because in the absence of speculation
the instructions may have issued sooner [6].
2.4.2 Memory dependence and address misspeculation
The memory system (Figure 5) is composed of a data
cache and a structure for buffering speculative store data,
distributed load/store buffers in the PEs, and memory
buses connecting them.
When a trace is dispatched, all of its loads and stores
are assigned sequence numbers. Sequence numbers indicate
the program order of all memory operations in the
window. The store buffer may be organized like a cache
[21], or integrated as part of the data cache itself [13]. The
important thing is that some mechanism must exist for
buffering speculative memory state and maintaining multiple
versions of memory locations [13].
Figure 5. Abstraction of the memory system.
Handling stores:
. When a store first issues to memory, it supplies its
address, sequence number, and data on one of the
memory buses. The store buffer creates a new version
for that memory address and buffers the data. Multiple
versions are ordered via store sequence numbers.
. If a store must reissue because it has received a new
computed address, it must first "undo" its state at the
old address, and then perform the store to the new
address. Both transactions are initiated by the store
sending its old address, new address, sequence number,
and data on one of the memory buses.
. If a store must reissue because it has received new data,
it simply performs again to the same address.
Handling loads:
. A load sends its address and sequence number to the
memory system. If multiple versions of the location
exist, the memory system knows which version to
return by comparing sequence numbers. The load is
supplied both the data and the sequence number of the
store which created the version. Thus, loads maintain
two sequence numbers: its own and that of the data.
. If a load must reissue because it has received a new
computed address, it simply reissues to the memory
system as before with the new address.
. Loads snoop all store traffic (store address and
sequence number). A load must reissue if (1) the store
address matches the load address, (2) the store
sequence number is less than that of the load, and (3)
the store sequence number is greater than that of the
load data. This is a true memory dependence violation.
The load must also reissue if the store sequence number
simply matches the sequence number of the load
data. This takes care of the store changing its address
(a false dependence had existed between the store and
load) or sending out new data.
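The load-side snoop rule above is compact enough to state as a predicate. The function and parameter names are hypothetical; the logic is exactly the three-condition true-violation test plus the matching-data-version case.

```python
def load_must_reissue(load_seq, load_data_seq, load_addr,
                      store_seq, store_addr):
    """Should a load reissue after snooping a store? (sketch)

    A true memory dependence violation: the store writes the load's
    address, is older than the load, but is younger than the store
    that supplied the load's current data.
    """
    true_violation = (store_addr == load_addr and
                      store_seq < load_seq and
                      store_seq > load_data_seq)
    # The snooped store is the very one that supplied the load's data:
    # it has changed its address or sent new data, so the version the
    # load consumed is stale either way.
    stale_version = (store_seq == load_data_seq)
    return true_violation or stale_version
```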
2.4.3 Concerning control misprediction
In a conventional processor, a branch misprediction
causes all subsequent instructions to be squashed. How-
ever, only those instructions that are control-dependent on
the misprediction need to be squashed [22]. At least three
things must be done to exploit control independence in the
trace processor. First, only those instructions fetched from
the wrong path must be replaced. Second, although not all
instructions are necessarily replaced, those that remain
may still have to reissue because of changes in register
dependences. Third, stores on the wrong path must
"undo" their speculative state in the memory system.
Trace re-predict sequences are used for selective control
squashes. After detecting a control misprediction
within a trace, traces in subsequent PEs are not automatically
squashed. Instead, the frontend re-predicts and re-
dispatches traces. The resident trace id is checked against
the re-predicted trace id; if there is a partial (i.e. common
prefix) or total match, only instructions beyond the match
need to be replaced. For those not replaced, register
dependences may have changed. So the global register
names of each instruction in the resident trace are checked
against those in the new trace; instructions that differ pick
up the new names. Reissuing will follow from the existing
issue mechanism. This approach treats instructions just
like data values in that they are individually "validated".
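The common-prefix match used by trace re-predict sequences can be sketched as below. The trace representation - a list of (pc, global-register-names) pairs - is hypothetical; the sketch shows only the selective squash and the rename check on surviving instructions.

```python
def selective_control_squash(resident, repredicted):
    """Compare a resident trace against the re-predicted one (sketch).

    Instructions in the common prefix survive; the remainder is
    replaced. Surviving instructions whose global register names
    changed must pick up the new names and reissue via the normal
    issue mechanism.
    """
    keep = 0
    for old, new in zip(resident, repredicted):
        if old[0] != new[0]:            # instruction streams diverge here
            break
        keep += 1
    kept, replaced = resident[:keep], repredicted[keep:]
    renamed = [i for i in range(keep)
               if resident[i][1] != repredicted[i][1]]
    return kept, replaced, renamed
```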
If a store is removed from the window and it has
already performed, it must first issue to memory again, but
only an undo transaction is performed as described in the
previous section. Loads that were false-dependent on the
store will snoop the store and thus reissue. Removing or
adding loads/stores to the window does not cause
sequence number problems if sequence numbering is
based on {PE #, buffer #}.
3. Simulation environment
Detailed simulation is used to evaluate the performance
of trace processors. For comparison, superscalar
processors are also simulated. The simulator was developed
using the simplescalar simulation platform [23].
This platform uses a MIPS-like instruction set (no delayed
branches) and comes with a gcc-based compiler to create
binaries.
Table 2. Fixed parameters and benchmarks.

  frontend: latency 2 cycles (fetch)
  trace predictor: see Section 2.1.4
  value predictor: see Section 2.2
  trace cache: total traces = 2048; trace line size = 16 instructions
  branch pred.: 64k 2-bit sat counters, no tags, 1-bit hyst.
  instr. cache: 2-way interleaved
  global phys regs: unlimited
  functional units: n symmetric, fully-pipelined FUs (for n-way issue)
  memory: unlimited speculative store buffering; D$ line size = 64 bytes;
          unlimited outstanding misses
  exec. latencies: address generation, memory access, integer ALU
          operations, validation latency

  benchmark   input dataset    instr count
  compress*   400000 e 2231    104 million
  gcc         -O3 genrecog.i   117 million
  go          9 9              133 million
  ijpeg       vigo.ppm         166 million
  xlisp       queens 7         202 million
  *Compress was modified to make only a single pass.

Our primary simulator uses a hybrid trace-driven and
execution-driven approach. The control flow of the simulator
is trace-driven. A functional simulator generates the
true dynamic instruction stream, and this stream feeds the
processor simulator. The processor does not explicitly
fetch instructions down the wrong path due to control misspeculation.
The data flow of the simulator is completely
execution-driven. This is essential for accurately portraying
the data misspeculation model. For example, instructions
reissue due to receiving new values, loads may
pollute the data cache (or prefetch) with wrong addresses,
extra bandwidth demand is observed on result buses, etc.
As stated above, the default control sequencing model
is that control mispredictions cause no new traces to be
brought into the processor until resolved. A more aggressive
control flow model is investigated in Section 6. To
accurately measure selective control squashing, a fully
execution-driven simulator was developed - it is considerably
slower than the hybrid approach and so is only
applied in Section 6.
The simulator faithfully models the frontend, PE, and
memory system depicted in Figures 2, 4, and 5, respec-
tively. Model parameters that are invariant for simulations
are shown in Table 2. The table also lists the five SPEC95
integer benchmarks used, along with input datasets and
dynamic instruction counts for the full runs.
4. Primary performance results
In this section, performance for both trace processors
and conventional superscalar processors is presented,
without data prediction. The only difference between the
superscalar simulator and the trace processor simulator is
that superscalar has a centralized execution engine. All
other hardware such as the frontend and memory system
are identical. Thus, superscalar has the benefit of the trace
predictor, trace cache, reduced rename complexity, and
selective reissuing due to memory dependence violations.
The experiments (Table 3) focus on three parameters:
window size, issue width, and global result bypass latency.
Trace processors with 4, 8, and 16 PEs are simulated.
Each PE can hold a trace of 16 instructions. Conventional
superscalar processors with window sizes ranging from 16
to 256 instructions are simulated. Curves are labeled with
the model name - T for trace and SS for superscalar - followed
by the total window size. Points on the same curve
represent varying issue widths; in the case of trace proces-
sors, this is the aggregate issue width. Trace processor
curves come in pairs - one assumes no extra latency (0) for
bypassing values between processing elements, and the
other assumes one extra cycle (1). Superscalar is not
penalized - all results are bypassed as if they are locals.
Fetch bandwidth, local and global result buses, and cache
buses are chosen to be commensurate with the configura-
tion's issue width and window size. Note that the window
size refers to all in-flight instructions, including those that
have completed but not yet retired. The retire width equals
the issue width for superscalar; an entire trace can be
retired in the trace processor.
From the graphs in Figure 6, the first encouraging
result is that all benchmarks show ILP that increases
nicely with window size and issue bandwidth, for both
processor models. Except for compress and go, which
exhibit poor control prediction accuracy, absolute IPC is
also encouraging. For example, large trace processors
average 3.0 to 3.7 instructions per cycle for gcc.
The extra cycle for transferring global values has a
noticeable performance impact, on the order of 5% to
10%. Also notice crossover points in the trace processor
curves. For example, "T-64 2-way per PE" performs better
than "T-128 1-way per PE". At low issue widths, it is
better to augment issue capability than add more PEs.
Superscalar versus Trace Processors
One way to compare the two processors is to fix total
window size and total issue width. That is, if we have a
centralized instruction window, what happens when we
divide the window into equal partitions and dedicate an
equal slice of the issue bandwidth to each partition? This
question focuses on the effect of load balance. Because of
load balance, IPC for the trace processor can only
approach that of the superscalar processor. For example,
consider two points from the gcc, jpeg, and xlisp graphs:
"T-128 (0) 2-way per PE" and "SS-128 16-way". The IPC
performance differs by 16% to 19% - the effect of load
balance. (Also, in the trace processor, instruction buffers
are underutilized due to small traces and are freed in
discrete chunks.)
The above comparison is rather arbitrary because it
suggests an equivalence based on total issue width. In
reality, total issue width lacks meaning in the context of
trace processors. What we really need is a comparison
method based on equivalent complexity, i.e. equal clock
cycle. One measure of complexity is issue complexity,
which goes as the product of window size and issue width
[15]. With this equivalence measure, comparing the two
previous datapoints is invalid because the superscalar processor
is much more complex (128x16 versus 16x2).
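This equivalence measure can be checked directly. A back-of-the-envelope sketch of the window-size times issue-width product for the two datapoints (the 16-entry per-PE figure follows from the 16x2 term quoted above; the helper name is ours):

```python
def issue_complexity(window_size, issue_width):
    # First-order model cited from [15]: issue complexity grows as
    # the product of window size and issue width.
    return window_size * issue_width

# "SS-128 16-way": one centralized 128-entry window, 16-wide issue.
superscalar = issue_complexity(128, 16)

# "T-128 2-way per PE": each PE's issue logic only sees its own
# 16-entry instruction buffer and a 2-wide issue slice.
trace_pe = issue_complexity(16, 2)

# Per this measure, the superscalar issue logic is far more complex,
# so equal total issue width does not imply equal clock cycle.
ratio = superscalar // trace_pe
```

By this crude measure the centralized design is 64 times more complex than one PE's issue logic, which is why the comparison at equal total issue width is called invalid.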
Unfortunately, there is no single measure of processor
complexity. Instead, we take an approach that demonstrates
the philosophy of next generation processors:
1. Take a small-scale superscalar processor and maximize
its performance.
2. Use this highly-optimized processor and replicate it,
taking advantage of a hierarchical organization.
In other words, the goal is to increase IPC while keeping
clock cycle optimal and constant. The last graph in Figure 6
interprets data for gcc with this philosophy. Suppose
we start with a superscalar processor with an
instruction window and 1, 2, or 4-way issue as a basic
building block, and successively add more copies to form
a trace processor. Assume that the only penalty for having
more than one PE is the extra cycle to bypass values
between PEs; this might account for global result bus
arbitration, global bypass latency, and extra wakeup logic
for snooping global result tag buses. One might then
roughly argue that complexity, i.e. cycle time, remains relatively
constant with successively more PEs. For gcc with 4-way
issue per PE, IPC progressively improves by 58% (1
to 4 PEs), 19% (4 to 8 PEs), and 12% (8 to 16 PEs).

Figure 6. Trace processor and superscalar processor IPC. Note that the bottom-right graph is derived
from the adjacent graph, as indicated by the arrow; it interprets the same data in a different way.

Table 3. Experiments. (Configuration parameters: PE window size / fetch-dispatch bandwidth, number
of PEs, issue bandwidth per PE, total issue bandwidth, local result buses, global result buses, global
buses usable by a PE, and cache buses usable by a PE.)
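The per-step gcc improvements quoted above compound multiplicatively; a quick arithmetic check of the cumulative gain from 1 to 16 PEs:

```python
# Per-step IPC improvements for gcc, 4-way issue per PE (from the text).
steps = [0.58, 0.19, 0.12]  # 1->4 PEs, 4->8 PEs, 8->16 PEs

cumulative = 1.0
for s in steps:
    cumulative *= 1.0 + s

# Roughly a 2.1x overall IPC gain, under the assumption that the
# clock cycle stays constant as PEs are added.
```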
5. Adding structured value prediction
This section presents actual and potential performance
results for a trace processor configuration using
data prediction. We chose a configuration with 8 PEs,
each having 4-way issue, and 1 extra cycle for bypassing
values over the global result buses.
The experiments explore real and perfect value prediction,
confidence, and timing of value predictor updates.
There are 7 bars for each benchmark in Figure 7. The first
four bars are for real value prediction, and are labeled
R/*/*, the first R denoting real prediction. The second qualifier
denotes the confidence model: R (real confidence)
says we use predictions that are marked confident by the
predictor, O (oracle confidence) says we only use a value
if it is correctly predicted. The third qualifier denotes slow
(S) or immediate (I) updates of the predictor. The last
three bars in the graph are for perfect value/no address prediction
(PV), perfect address/no value prediction (PA), and
perfect value and address prediction (P).
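The seven bar labels can be enumerated mechanically from the scheme just described (a small illustrative sketch, not code from the study):

```python
# Real-prediction bars: R/<confidence>/<update timing>.
real_bars = ["R/%s/%s" % (conf, upd)
             for conf in ("R", "O")   # Real vs. Oracle confidence
             for upd in ("S", "I")]   # Slow vs. Immediate predictor updates

# Perfect-prediction bars: value only, address only, or both.
perfect_bars = ["PV", "PA", "P"]

all_bars = real_bars + perfect_bars
```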
Figure 7. Performance with data prediction.
From the rightmost bar (perfect prediction), the potential
performance improvement for data prediction is significant,
around 45% for all of the benchmarks. Three of the
benchmarks benefit twice as much from address prediction
as from value prediction, as shown by the PA/PV bars.
Despite data prediction's potential, only two of the
benchmarks - gcc and xlisp - show noticeable actual
improvement, about 10%. However, keep in mind that
data value prediction is at a stage where significant engineering
remains to be done. There is still much to be
explored in the predictor design space.
Although gcc and xlisp show good improvement, it is
less than a quarter of the potential improvement. For gcc,
the confidence mechanism is not at fault; oracle confidence
only makes up for about 7% of the difference. Xlisp
on the other hand shows that with oracle confidence, over
half the potential improvement is achievable. Unfortunately,
xlisp performs poorly in terms of letting incorrect
predictions pass as confident.
The first graph in Figure 8 shows the number of
instruction squashes as a fraction of dynamic instruction
count. The first two bars are without value prediction, the
last two bars are with value prediction (denoted by V).
The first two bars show the number of loads squashed by
stores (dependence misspeculation) and the total number
of squashes that result due to a cascade of reissued instructions.
Live-in and address misspeculation add to these
totals in the last two bars. Xlisp's 30% reissue rate
explains why it shows less performance improvement than
gcc despite higher accuracy. The second graph shows the
distribution of the number of times an instruction issues
while it is in the window.
Figure 8. Statistics for selective reissuing.
6. Aggressive control flow
This section evaluates the performance of a trace processor
capable of exploiting control independence. Only
instructions that are control dependent on a branch misprediction
are squashed, and instructions whose register
dependences change are selectively reissued as described
in Section 2.4.3. Accurate measurement of this control
flow model requires fetching instructions down wrong
paths, primarily to capture data dependences that may
exist on such paths. For this reason we use a fully execution-driven
simulator in this section.
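Selective reissuing, as used here, propagates forward along data dependences from a misspeculated instruction instead of squashing all younger instructions. A minimal sketch of that traversal (the data structures and function name are our illustrative assumptions, not the paper's hardware):

```python
def instructions_to_reissue(misspeculated, consumers):
    """Return the set of instructions that must reissue when
    'misspeculated' produced a wrong value.

    consumers: dict mapping an instruction id to the ids of
    instructions that consume its result.
    """
    work, to_reissue = [misspeculated], set()
    while work:
        inst = work.pop()
        # Every consumer of a wrong value is itself wrong, and its
        # consumers transitively, so walk the dependence graph forward.
        for c in consumers.get(inst, ()):
            if c not in to_reissue:
                to_reissue.add(c)
                work.append(c)
    return to_reissue

# Instruction 1 feeds 2 and 3; 2 feeds 4. A misspeculation on 1
# reissues only {2, 3, 4}; unrelated instructions keep their results.
to_redo = instructions_to_reissue(1, {1: [2, 3], 2: [4]})
```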
For a trace processor with 16 PEs, 4-way issue per
PE, two of the benchmarks show a significant improvement
in IPC: compress (13%) and jpeg (9%). These
benchmarks frequently traverse small loops containing
simple, reconvergent control flow. Also important are
small loops with a small, fixed number of iterations,
allowing the processor to capture traces beyond the loop.
7. Conclusion
Trace processors exploit the characteristics of traces
to efficiently issue many instructions per cycle. Trace data
characteristics - local versus global values - suggest distributing
execution resources at the trace level as a way to
overcome complexity limitations. They also suggest an
interesting application of value prediction, namely prediction
of inter-trace dependences. Further, treating traces as
the unit of control prediction results in an efficient, high
accuracy control prediction model.
An initial evaluation of trace processors without value
prediction shows encouraging absolute IPC values - e.g.
gcc between 3 and 4 - reaffirming that ILP can be
exploited in large programs with complex control flow.
We have isolated the performance impact of distributing
execution resources based on trace boundaries, and demonstrated
the overall performance value of replicating fast,
small-scale ILP processors in a hierarchy.
Trace processors with structured value prediction
show promise. Although only two of the benchmarks
show noticeable performance improvement, the potential
improvement is substantial for all benchmarks, and we
feel good engineering of value prediction and confidence
mechanisms will increase the gains.
With the pervasiveness of speculation in next generation
processors, misspeculation handling becomes an
important issue. Rather than treating mispredictions as an
afterthought of speculation, we discussed how data misspeculation
can be incorporated into the existing issue
mechanism. We also discussed mechanisms for exploiting
control independence, and showed that sequential programs
may benefit.
Acknowledgments
This work was supported in part by NSF Grant MIP-
9505853 and by the U.S. Army Intelligence Center and
Fort Huachuca under Contract DABT63-95-C-0127 and
ARPA order no. D346. The views and conclusions contained
herein are those of the authors and should not be
interpreted as necessarily representing the official policies
or endorsements, either express or implied, of the U.S.
Army Intelligence Center and Fort Huachuca, or the U.S.
Government. This work is also supported by a Graduate
Fellowship from IBM.
--R
Trace cache: A low latency approach to high bandwidth instruction fetching.
Critical issues regarding the trace cache fetch mechanism.
Improving superscalar instruction dispatch and issue by exploiting dynamic code sequences.
Facilitating superscalar processing via a combined static/dynamic register renaming scheme.
Value Locality and Speculative Execution.
The performance potential of data dependence speculation and collapsing.
The Multiscalar Architecture.
Multiscalar processors.
Control flow speculation in multiscalar processors.
Dynamic speculation and synchronization of data dependences.
The anatomy of the register file in a multiscalar processor.
Data memory alternatives for multiscalar processors.
The 21264: A superscalar alpha processor with out-of-order execution.
Hardware support for large atomic units in dynamically scheduled machines.
Exploiting instruction level parallelism in processors by caching scheduled groups.
Increasing the instruction fetch rate via block-structured instruction set architectures.
The predictability of data values.
Assigning confidence to conditional branch predictions.
ARB: A hardware mechanism for dynamic reordering of memory references.
Limits of control flow on parallelism.
Evaluating future microprocessors: The SimpleScalar toolset.
--TR
Hardware support for large atomic units in dynamically scheduled machines
Limits of control flow on parallelism
The anatomy of the register file in a multiscalar processor
The multiscalar architecture
Facilitating superscalar processing via a combined static/dynamic register renaming scheme
Multiscalar processors
Trace cache
Assigning confidence to conditional branch predictions
Increasing the instruction fetch rate via block-structured instruction set architectures
The performance potential of data dependence speculation & collapsing
Improving superscalar instruction dispatch and issue by exploiting dynamic code sequences
Exploiting instruction level parallelism in processors by caching scheduled groups
Dynamic speculation and synchronization of data dependences
Complexity-effective superscalar processors
Path-based next trace prediction
The predictability of data values
Value locality and speculative execution
Control Flow Speculation in Multiscalar Processors
--CTR
Lieven Eeckhout , Tom Vander Aa , Bart Goeman , Hans Vandierendonck , Rudy Lauwereins , Koen De Bosschere, Application domains for fixed-length block structured architectures, Australian Computer Science Communications, v.23 n.4, p.35-44, January 2001
Peter G. Sassone , D. Scott Wills, Scaling Up the Atlas Chip-Multiprocessor, IEEE Transactions on Computers, v.54 n.1, p.82-87, January 2005
Bing Luo , Chris Jesshope, Performance of a micro-threaded pipeline, Australian Computer Science Communications, v.24 n.3, p.83-90, January-February 2002
independence in trace processors, Proceedings of the 32nd annual ACM/IEEE international symposium on Microarchitecture, p.4-15, November 16-18, 1999, Haifa, Israel
Yiannakis Sazeides , James E. Smith, Limits of Data Value Predictability, International Journal of Parallel Programming, v.27 n.4, p.229-256, Aug. 1999
Haitham Akkary , Michael A. Driscoll, A dynamic multithreading processor, Proceedings of the 31st annual ACM/IEEE international symposium on Microarchitecture, p.226-236, November 1998, Dallas, Texas, United States
Avinash Sodani , Gurindar S. Sohi, Understanding the differences between value prediction and instruction reuse, Proceedings of the 31st annual ACM/IEEE international symposium on Microarchitecture, p.205-215, November 1998, Dallas, Texas, United States
Venkata Krishnan , Josep Torrellas, The Need for Fast Communication in Hardware-Based Speculative Chip Multiprocessors, International Journal of Parallel Programming, v.29 n.1, p.3-33, February 2001
Amirali Baniasadi, Balancing clustering-induced stalls to improve performance in clustered processors, Proceedings of the 2nd conference on Computing frontiers, May 04-06, 2005, Ischia, Italy
Sriram Vajapeyam , P. J. Joseph , Tulika Mitra, Dynamic vectorization: a mechanism for exploiting far-flung ILP in ordinary programs, ACM SIGARCH Computer Architecture News, v.27 n.2, p.16-27, May 1999
S. Subramanya Sastry , Subbarao Palacharla , James E. Smith, Exploiting idle floating-point resources for integer execution, ACM SIGPLAN Notices, v.33 n.5, p.118-129, May 1998
Venkata Krishnan , Josep Torrellas, Hardware and software support for speculative execution of sequential binaries on a chip-multiprocessor, Proceedings of the 12th international conference on Supercomputing, p.85-92, July 1998, Melbourne, Australia
Steven K. Reinhardt , Shubhendu S. Mukherjee, Transient fault detection via simultaneous multithreading, ACM SIGARCH Computer Architecture News, v.28 n.2, p.25-36, May 2000
Quinn Jacobson , James E. Smith, Trace preconstruction, ACM SIGARCH Computer Architecture News, v.28 n.2, p.37-46, May 2000
Pedro Marcuello , Jordi Tubella , Antonio González, Value prediction for speculative multithreaded architectures, Proceedings of the 32nd annual ACM/IEEE international symposium on Microarchitecture, p.230-236, November 16-18, 1999, Haifa, Israel
Yiannakis Sazeides , James E. Smith, Modeling program predictability, ACM SIGARCH Computer Architecture News, v.26 n.3, p.73-84, June 1998
Sanjay Jeram Patel , Marius Evers , Yale N. Patt, Improving trace cache effectiveness with branch promotion and trace packing, ACM SIGARCH Computer Architecture News, v.26 n.3, p.262-271, June 1998
Brian Fields , Shai Rubin , Rastislav Bodík, Focusing processor policies via critical-path prediction, ACM SIGARCH Computer Architecture News, v.29 n.2, p.74-85, May 2001
Amirali Baniasadi , Andreas Moshovos, Instruction distribution heuristics for quad-cluster, dynamically-scheduled, superscalar processors, Proceedings of the 33rd annual ACM/IEEE international symposium on Microarchitecture, p.337-347, December 2000, Monterey, California, United States
Narayan Ranganathan , Manoj Franklin, An empirical study of decentralized ILP execution models, ACM SIGPLAN Notices, v.33 n.11, p.272-281, Nov. 1998
Ramon Canal , Antonio González, A low-complexity issue logic, Proceedings of the 14th international conference on Supercomputing, p.327-335, May 08-11, 2000, Santa Fe, New Mexico, United States
T. N. Vijaykumar , Sridhar Gopal , James E. Smith , Gurindar Sohi, Speculative Versioning Cache, IEEE Transactions on Parallel and Distributed Systems, v.12 n.12, p.1305-1317, December 2001
Dana S. Henry , Bradley C. Kuszmaul , Gabriel H. Loh , Rahul Sami, Circuits for wide-window superscalar processors, ACM SIGARCH Computer Architecture News, v.28 n.2, p.236-247, May 2000
Artur Klauser , Abhijit Paithankar , Dirk Grunwald, Selective eager execution on the PolyPath architecture, ACM SIGARCH Computer Architecture News, v.26 n.3, p.250-259, June 1998
Ryan Rakvic , Bryan Black , John Paul Shen, Completion time multiple branch prediction for enhancing trace cache performance, ACM SIGARCH Computer Architecture News, v.28 n.2, p.47-58, May 2000
Amir Roth , Gurindar S. Sohi, Register integration: a simple and efficient implementation of squash reuse, Proceedings of the 33rd annual ACM/IEEE international symposium on Microarchitecture, p.223-234, December 2000, Monterey, California, United States
Bryan Black , Bohuslav Rychlik , John Paul Shen, The block-based trace cache, ACM SIGARCH Computer Architecture News, v.27 n.2, p.196-207, May 1999
Iván Martel , Daniel Ortega , Eduard Ayguadé , Mateo Valero, Increasing effective IPC by exploiting distant parallelism, Proceedings of the 13th international conference on Supercomputing, p.348-355, June 20-25, 1999, Rhodes, Greece
Young , Michael D. Smith, Better global scheduling using path profiles, Proceedings of the 31st annual ACM/IEEE international symposium on Microarchitecture, p.115-123, November 1998, Dallas, Texas, United States
Ramon Canal , Joan-Manuel Parcerisa , Antonio González, Dynamic Code Partitioning for Clustered Architectures, International Journal of Parallel Programming, v.29 n.1, p.59-79, February 2001
Efe Yardimci , Michael Franz, Dynamic parallelization and mapping of binary executables on hierarchical platforms, Proceedings of the 3rd conference on Computing frontiers, May 03-05, 2006, Ischia, Italy
Pedro Marcuello , Antonio González , Jordi Tubella, Speculative multithreaded processors, Proceedings of the 12th international conference on Supercomputing, p.77-84, July 1998, Melbourne, Australia
Joan-Manuel Parcerisa , Antonio González, Reducing wire delay penalty through value prediction, Proceedings of the 33rd annual ACM/IEEE international symposium on Microarchitecture, p.317-326, December 2000, Monterey, California, United States
Craig Zilles , Gurindar Sohi, Master/slave speculative parallelization, Proceedings of the 35th annual ACM/IEEE international symposium on Microarchitecture, November 18-22, 2002, Istanbul, Turkey
Ramon Canal , Antonio González, Reducing the complexity of the issue logic, Proceedings of the 15th international conference on Supercomputing, p.312-320, June 2001, Sorrento, Italy
Aneesh Aggarwal , Manoj Franklin, Scalability Aspects of Instruction Distribution Algorithms for Clustered Processors, IEEE Transactions on Parallel and Distributed Systems, v.16 n.10, p.944-955, October 2005
Steven E. Raasch , Nathan L. Binkert , Steven K. Reinhardt, A scalable instruction queue design using dependence chains, ACM SIGARCH Computer Architecture News, v.30 n.2, May 2002
Gabriel Loh, A time-stamping algorithm for efficient performance estimation of superscalar processors, ACM SIGMETRICS Performance Evaluation Review, v.29 n.1, p.72-81, June 2001
Yuan Chou , Jason Fung , John Paul Shen, Reducing branch misprediction penalties via dynamic control independence detection, Proceedings of the 13th international conference on Supercomputing, p.109-118, June 20-25, 1999, Rhodes, Greece
Freddy Gabbay , Avi Mendelson, The effect of instruction fetch bandwidth on value prediction, ACM SIGARCH Computer Architecture News, v.26 n.3, p.272-281, June 1998
Pedro Marcuello , Antonio González , Jordi Tubella, Thread Partitioning and Value Prediction for Exploiting Speculative Thread-Level Parallelism, IEEE Transactions on Computers, v.53 n.2, p.114-125, February 2004
Aneesh Aggarwal , Manoj Franklin, Instruction Replication for Reducing Delays Due to Inter-PE Communication Latency, IEEE Transactions on Computers, v.54 n.12, p.1496-1507, December 2005
Vikas Agarwal , M. S. Hrishikesh , Stephen W. Keckler , Doug Burger, Clock rate versus IPC: the end of the road for conventional microarchitectures, ACM SIGARCH Computer Architecture News, v.28 n.2, p.248-259, May 2000
Ramadass Nagarajan , Karthikeyan Sankaralingam , Doug Burger , Stephen W. Keckler, A design space evaluation of grid processor architectures, Proceedings of the 34th annual ACM/IEEE international symposium on Microarchitecture, December 01-05, 2001, Austin, Texas
Antonia Zhai , Christopher B. Colohan , J. Gregory Steffan , Todd C. Mowry, Compiler Optimization of Memory-Resident Value Communication Between Speculative Threads, Proceedings of the international symposium on Code generation and optimization: feedback-directed and runtime optimization, p.39, March 20-24, 2004, Palo Alto, California
Jeffrey Oplinger , Monica S. Lam, Enhancing software reliability with speculative threads, ACM SIGPLAN Notices, v.37 n.10, October 2002
Sangyeun Cho , Pen-Chung Yew , Gyungho Lee, Access region locality for high-bandwidth processor memory system design, Proceedings of the 32nd annual ACM/IEEE international symposium on Microarchitecture, p.136-146, November 16-18, 1999, Haifa, Israel
Pedro Marcuello , Antonio González, Clustered speculative multithreaded processors, Proceedings of the 13th international conference on Supercomputing, p.365-372, June 20-25, 1999, Rhodes, Greece
Sanjay Jeram Patel , Daniel Holmes Friendly , Yale N. Patt, Evaluation of Design Options for the Trace Cache Fetch Mechanism, IEEE Transactions on Computers, v.48 n.2, p.193-204, February 1999
Brian Fahs , Satarupa Bose , Matthew Crum , Brian Slechta , Francesco Spadini , Tony Tung , Sanjay J. Patel , Steven S. Lumetta, Performance characterization of a hardware mechanism for dynamic optimization, Proceedings of the 34th annual ACM/IEEE international symposium on Microarchitecture, December 01-05, 2001, Austin, Texas
Sangyeun Cho , Pen-Chung Yew , Gyungho Lee, Decoupling local variable accesses in a wide-issue superscalar processor, ACM SIGARCH Computer Architecture News, v.27 n.2, p.100-110, May 1999
Alvin R. Lebeck , Jinson Koppanalil , Tong Li , Jaidev Patwardhan , Eric Rotenberg, A large, fast instruction window for tolerating cache misses, ACM SIGARCH Computer Architecture News, v.30 n.2, May 2002
Ho-Seop Kim , James E. Smith, An instruction set and microarchitecture for instruction level distributed processing, ACM SIGARCH Computer Architecture News, v.30 n.2, May 2002
Mladen Berekovic , Tim Niggemeier, A distributed, simultaneously multi-threaded (SMT) processor with clustered scheduling windows for scalable DSP performance, Journal of Signal Processing Systems, v.50 n.2, p.201-229, February 2008
Venkata Krishnan , Josep Torrellas, A Chip-Multiprocessor Architecture with Speculative Multithreading, IEEE Transactions on Computers, v.48 n.9, p.866-880, September 1999
Joan-Manuel Parcerisa , Antonio Gonzalez, Improving Latency Tolerance of Multithreading through Decoupling, IEEE Transactions on Computers, v.50 n.10, p.1084-1094, October 2001
Michael Gschwind , Kemal Ebciolu , Erik Altman , Sumedh Sathaye, Binary translation and architecture convergence issues for IBM system/390, Proceedings of the 14th international conference on Supercomputing, p.336-347, May 08-11, 2000, Santa Fe, New Mexico, United States
Balasubramonian , Sandhya Dwarkadas , David H. Albonesi, Dynamically allocating processor resources between nearby and distant ILP, ACM SIGARCH Computer Architecture News, v.29 n.2, p.26-37, May 2001
Rajagopalan Desikan , Simha Sethumadhavan , Doug Burger , Stephen W. Keckler, Scalable selective re-execution for EDGE architectures, ACM SIGPLAN Notices, v.39 n.11, November 2004
Sang-Jeong Lee , Pen-Chung Yew, On Augmenting Trace Cache for High-Bandwidth Value Prediction, IEEE Transactions on Computers, v.51 n.9, p.1074-1088, September 2002
Trace Cache Microarchitecture and Evaluation, IEEE Transactions on Computers, v.48 n.2, p.111-120, February 1999
Andreas Moshovos , Gurindar S. Sohi, Reducing Memory Latency via Read-after-Read Memory Dependence Prediction, IEEE Transactions on Computers, v.51 n.3, p.313-326, March 2002
Lucian Codrescu , Steve Nugent , James Meindl , D. Scott Wills, Modeling technology impact on cluster microprocessor performance, IEEE Transactions on Very Large Scale Integration (VLSI) Systems, v.11 n.5, p.909-920, October
James R. Larus, Whole program paths, ACM SIGPLAN Notices, v.34 n.5, p.259-269, May 1999
Smruti R. Sarangi , Wei Liu, Josep Torrellas , Yuanyuan Zhou, ReSlice: Selective Re-Execution of Long-Retired Misspeculated Instructions Using Forward Slicing, Proceedings of the 38th annual IEEE/ACM International Symposium on Microarchitecture, p.257-270, November 12-16, 2005, Barcelona, Spain
R. González , A. Cristal , M. Pericas , M. Valero , A. Veidenbaum, An asymmetric clustered processor based on value content, Proceedings of the 19th annual international conference on Supercomputing, June 20-22, 2005, Cambridge, Massachusetts
Sangyeun Cho , Pen-Chung Yew , Gyungho Lee, A High-Bandwidth Memory Pipeline for Wide Issue Processors, IEEE Transactions on Computers, v.50 n.7, p.709-723, July 2001
Troy A. Johnson , Rudolf Eigenmann , T. N. Vijaykumar, Speculative thread decomposition through empirical optimization, Proceedings of the 12th ACM SIGPLAN symposium on Principles and practice of parallel programming, March 14-17, 2007, San Jose, California, USA
Lucian Codrescu , D. Scott Wills , James Meindl, Architecture of the Atlas Chip-Multiprocessor: Dynamically Parallelizing Irregular Applications, IEEE Transactions on Computers, v.50 n.1, p.67-82, January 2001
Joan-Manuel Parcerisa , Julio Sahuquillo , Antonio Gonzalez , Jose Duato, On-Chip Interconnects and Instruction Steering Schemes for Clustered Microarchitectures, IEEE Transactions on Parallel and Distributed Systems, v.16 n.2, p.130-144, February 2005
Michele Co , Dee A. B. Weikle , Kevin Skadron, Evaluating trace cache energy efficiency, ACM Transactions on Architecture and Code Optimization (TACO), v.3 n.4, p.450-476, December 2006
Jung Ho Ahn , Mattan Erez , William J. Dally, Tradeoff between data-, instruction-, and thread-level parallelism in stream processors, Proceedings of the 21st annual international conference on Supercomputing, June 17-21, 2007, Seattle, Washington
Roni Rosner , Micha Moffie , Yiannakis Sazeides , Ronny Ronen, Selecting long atomic traces for high coverage, Proceedings of the 17th annual international conference on Supercomputing, June 23-26, 2003, San Francisco, CA, USA
Balasubramonian , Sandhya Dwarkadas , David H. Albonesi, Dynamically managing the communication-parallelism trade-off in future clustered processors, ACM SIGARCH Computer Architecture News, v.31 n.2, May
J. Gregory Steffan , Christopher Colohan , Antonia Zhai , Todd C. Mowry, The STAMPede approach to thread-level speculation, ACM Transactions on Computer Systems (TOCS), v.23 n.3, p.253-300, August 2005
A. Mahjur , A. H. Jahangir , A. H. Gholamipour, On the performance of trace locality of reference, Performance Evaluation, v.60 n.1-4, p.51-72, May 2005
Engin Ipek , Meyrem Kirman , Nevin Kirman , Jose F. Martinez, Core fusion: accommodating software diversity in chip multiprocessors, ACM SIGARCH Computer Architecture News, v.35 n.2, May 2007
Jun Yan , Wei Zhang, Hybrid multi-core architecture for boosting single-threaded performance, ACM SIGARCH Computer Architecture News, v.35 n.1, p.141-148, March 2007
Michael D. Smith, Overcoming the challenges to feedback-directed optimization (Keynote Talk), ACM SIGPLAN Notices, v.35 n.7, p.1-11, July 2000
Mladen Berekovic , Sören Moch , Peter Pirsch, A scalable, clustered SMT processor for digital signal processing, ACM SIGARCH Computer Architecture News, v.32 n.3, p.62-69, June 2004
Kevin Skadron , Pritpal S. Ahuja , Margaret Martonosi , Douglas W. Clark, Branch Prediction, Instruction-Window Size, and Cache Size: Performance Trade-Offs and Simulation Techniques, IEEE Transactions on Computers, v.48 n.11, p.1260-1281, November 1999
Theo Ungerer , Borut Robi , Jurij ilc, A survey of processors with explicit multithreading, ACM Computing Surveys (CSUR), v.35 n.1, p.29-63, March | trace cache;selective reissuing;context-based value prediction;next trace prediction;trace processors;multiscalar processors |
266816 | Out-of-order vector architectures. | Register renaming and out-of-order instruction issue are now commonly used in superscalar processors. These techniques can also be used to significant advantage in vector processors, as this paper shows. Performance is improved and available memory bandwidth is used more effectively. Using a trace driven simulation we compare a conventional vector implementation, based on the Convex C3400, with an out-of-order, register renaming, vector implementation. When the number of physical registers is above 12, out-of-order execution coupled with register renaming provides a speedup of 1.24--1.72 for realistic memory latencies. Out-of-order techniques also tolerate main memory latencies of 100 cycles with a performance degradation less than 6%. The mechanisms used for register renaming and out-of-order issue can be used to support precise interrupts -- generally a difficult problem in vector machines. When precise interrupts are implemented, there is typically less than a 10% degradation in performance. A new technique based on register renaming is targeted at dynamically eliminating spill code; this technique is shown to provide an extra speedup ranging between 1.10 and 1.20 while reducing total memory traffic by an average of 15--20%. | Introduction
Vector architectures have been used for many years
for high performance numerical applications - an area
where they still excel. The first vector machines were
supercomputers using memory-to-memory operation,
but vector machines only became commercially successful
with the addition of vector registers in the Cray-1
[12]. Following the Cray-1, a number of vector
machines have been designed and sold, from supercomputers
with very high vector bandwidths [8]
to more modest mini-supercomputers. More recently,
* This work was supported by the Ministry of Education of
Spain under contract 0429/95, by CIRIT grant BEAI96/II/124
and by the CEPBA.
† This work was supported in part by NSF Grant MIP-9505853.
the value of vector architectures for desktop applications
is being recognized. In particular, many DSP
and multimedia applications - graphics, compression,
encryption - are very well suited for vector implementation
[1]. Also, research focusing on new processor-memory
organizations, such as IRAM [10], would also
benefit from vector technology.
Studies in recent years [13, 5, 11], however, have
shown that the performance achieved by vector architectures
on real programs falls short of what the available
hardware resources should allow. Functional
unit hazards and conflicts in the vector register
file can make vector processors stall for long periods of
time and result in latency problems similar to those in
scalar processors. Each time a vector processor stalls
and the memory port becomes idle, memory bandwidth
goes unused. Furthermore, latency tolerance
properties of vectors are lost: the first load instruction
at the idle memory port exposes the full memory
latency.
These results suggest a need to improve the memory
performance in vector architectures. Unfortunately,
typical hardware techniques used in scalar processors
to improve memory usage and reduce memory latency
have not always been useful in vector architectures.
For example, data caches have been studied [9, 6];
however, the results are mixed, with performance gain
or loss depending on working set sizes and the fraction
of non-unit stride memory access. Data caches have
not been put into widespread use in vector processors
(except to cache scalar data).
Dynamic instruction issue is the preferred solution
in scalar processors to attack the memory latency
problem by allowing memory reference instructions to
proceed when other instructions are waiting for memory
data. That is, memory reference instructions are
allowed to slip ahead of execution instructions. Vector
processors have not generally used dynamic instruction
issue (except in one recent design, the NEC
SX-4 [14]). The reasons are unclear. Perhaps it has
been thought that the inherent latency hiding advantages
of vectors are sufficient. Or, it is possibly because
the first successful vector machine, the Cray-1,
issued instructions in order, and additional innovations
in vector instruction issue were simply not pursued.
Besides in-order vector instruction issue, traditional
vector machines have had a relatively small number of
vector registers (8 is typical). The limited number of
vector registers was initially the result of hardware
costs when vector register instruction sets were originally
being developed; today the small number of registers
is generally recognized as a shortcoming. Register
renaming, useful for out-of-order issue, can come
to the rescue here as well. With register renaming
more physical registers are made available, and vector
register conflicts are reduced.
Another feature of traditional vector machines is
that they have not supported virtual memory - at
least not in the fully flexible manner of most modern
scalar processors. The primary reason is the difficulty
of implementing precise interrupts for page faults - a
difficulty that arises from the very high level of concurrency
in vector machines. Once again, features for
implementing dynamic instruction issue for scalars can
be easily adapted to vectors. Register renaming and
reorder buffers allow relatively easy recovery of state
information after a fault condition has occurred.
In this paper, we show that by using out-of-order issue
and register renaming techniques in a vector processor,
performance can be greatly improved. Dynamic
instruction scheduling allows memory latencies
to be overlapped more completely - and uses the valuable
memory resource more efficiently in the process.
Moreover, once renaming has been introduced into the
architecture, it enables straightforward implementations
of precise exceptions, which in turn provide an
easy way of introducing virtual memory, without much
extra hardware and without incurring a great performance
penalty. We also present a new technique
aimed at dynamically eliminating redundant loads.
Using this technique, memory traffic can be significantly
reduced and performance is further increased.
2 Vector Architectures and Implementation
This study is based on a traditional vector processor
and numerical applications, primarily because of the
maturity of compilers and the availability of benchmarks
and simulation tools. We feel that the general
conclusions will extend to other vector applications,
however. The renaming, out-of-order vector architecture
we propose is modeled after a Convex C3400. In
this section we describe the base C3400 architecture
and implementation (henceforth, the reference archi-
tecture), and the dynamic out-of-order vector architecture
(referred to as OOOVA).
2.1 The C3400 Reference Architecture
The Convex C3400 consists of a scalar unit and an
independent vector unit. The scalar unit executes all
instructions that involve scalar registers (A and S
registers), and issues a maximum of one instruction per
cycle. The vector unit consists of two computation
units (FU1 and FU2) and one memory accessing unit
[Figure 1 diagram: Fetch and Decode&Rename stages feeding the instruction queues, the address unit, the Reorder Buffer (tracking released registers), and the S, A, V and mask register files]
Figure 1: The Out-of-order and renaming version of the reference vector architecture.
(MEM). The FU2 unit is a general purpose arithmetic
unit capable of executing all vector instructions. The
FU1 unit is a restricted functional unit that executes all
vector instructions except multiplication, division and
square root. Both functional units are fully pipelined.
The vector unit has 8 vector registers which hold up
to 128 elements of 64 bits each. The eight vector registers
are connected to the functional units through a restricted
crossbar. Pairs of vector registers are grouped
in a register bank and share two read ports and one
write port that links them to the functional units. The
compiler is responsible for scheduling vector instructions
and allocating vector registers so that no port
conflicts arise. The reference machine implements vector
chaining from functional units to other functional
units and to the store unit. It does not chain memory
loads to functional units, however.
2.2 The Dynamic Out-of-Order Vector
Architecture (OOOVA)
The out-of-order and renaming version of the reference
architecture, OOOVA, is shown in figure 1. It is
derived from the reference architecture by applying a
renaming technique very similar to that found in the
R10000 [16]. Instructions flow in-order through the
Fetch and Decode/Rename stages and then go to one
of the four queues present in the architecture based
on instruction type. At the rename stage, a mapping
table translates each virtual register into a physical
register. There are 4 independent mapping tables, one
for each type of register: A, S, V and mask registers.
Each mapping table has its own associated list of free
registers. When instructions are accepted into the decode
stage, a slot in the reorder buffer is also allocated.
Instructions enter and exit the reorder buffer in strict
program order. When an instruction defines a new
logical register, a physical register is taken from the
[Figure 2 diagram: Fetch and Rename stages feeding four pipelines: A-regs (Issue, RF, ALU, Wb); S-regs (Issue, RF); V-regs (Issue, RF); MEM (Issue/RF, address range calculation, dependency calculation)]
Figure 2: The Out-of-order and renaming main instruction pipelines.
list, the mapping table entry for the logical register
is updated with the new physical register number
and the old mapping is stored in the reorder buffer
slot allocated to the instruction. When the instruction
commits, the old physical register is returned to
its free list. Note that the reorder buffer only holds a
few bits to identify instructions and register names; it
never holds register values.
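The rename-and-commit bookkeeping described above can be sketched in software. This is an illustrative model only, not the OOOVA hardware: the table sizes, the stack-based free list, and all names are our assumptions.

```c
#include <assert.h>

#define NUM_LOGICAL  8    /* architected vector registers */
#define NUM_PHYSICAL 16   /* physical vector registers (illustrative) */

typedef struct {
    int map[NUM_LOGICAL];          /* logical -> physical mapping table */
    int free_list[NUM_PHYSICAL];   /* stack of free physical registers */
    int free_top;
} rename_table;

static void rt_init(rename_table *rt) {
    rt->free_top = 0;
    for (int l = 0; l < NUM_LOGICAL; l++)
        rt->map[l] = l;                       /* identity initial mapping */
    for (int p = NUM_PHYSICAL - 1; p >= NUM_LOGICAL; p--)
        rt->free_list[rt->free_top++] = p;    /* remaining regs start free */
}

/* Rename the destination logical register of a new instruction.
 * Returns the OLD physical register, which is recorded in the
 * reorder-buffer slot allocated to the instruction. */
static int rename_dest(rename_table *rt, int logical) {
    assert(rt->free_top > 0);                 /* else: stall until a commit */
    int old_phys = rt->map[logical];
    rt->map[logical] = rt->free_list[--rt->free_top];
    return old_phys;
}

/* When the instruction commits, the old mapping is returned to the
 * free list and that physical register can be reused. */
static void commit_release(rename_table *rt, int old_phys) {
    rt->free_list[rt->free_top++] = old_phys;
}
```

Note that, as in the text, the reorder buffer needs to carry only the old physical register number, never register values.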
Main Pipelines
There are four main pipelines in the OOOVA architecture
(see fig. 2), one for each type of instruction. After
decoding and renaming, instructions wait in the four
queues shown in fig. 1. The A, S and V queues monitor
the ready status of all instructions held in the queue
slots and as soon as an instruction is ready, it is sent
to the appropriate functional unit for execution. Processing
of instructions in the M queue proceeds in two
phases. First, instructions proceed in-order through
a 3-stage pipeline comprising the Issue/RF stage, the
range stage and the dependence stage. After they have
completed these three steps, memory instructions can
proceed out of order based on dependence information
computed and operand availability (for stores).
At the Range stage, the range of all addresses potentially
modified by a memory instruction is computed. This range
is used in the following stage for run-time memory
disambiguation. The range is defined as all bytes falling
between the base address (called Range Start) and the
address Range Start + (VL - 1) * VS (called Range End),
where VL is the vector length register and VS is the
vector stride register. Note that the multiplier can be
simplified because VL - 1 is short (never more than
7 bits), and the product (VL - 1) * VS can be kept
in a non-architected register and implicitly updated
when either VL or VS is modified. In the Dependence
stage, using the Range Start/Range End addresses,
the memory instruction is compared against all previous
instructions found in the queue. Once a memory
instruction is free of any dependences, it can proceed
to issue memory requests.
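As an illustration, the Range-stage computation and the overlap test used for disambiguation can be sketched as follows. This is a minimal model assuming byte addresses and positive strides; the structure and function names are ours, not the hardware's.

```c
#include <stdint.h>

/* Conservative byte range touched by a strided vector memory access.
 * Range Start is the base address; Range End = Range Start + (VL-1)*VS.
 * Positive strides are assumed for simplicity. */
typedef struct {
    uint64_t start;   /* Range Start */
    uint64_t end;     /* Range End   */
} mem_range;

static mem_range compute_range(uint64_t base, uint32_t vl, uint64_t vs_bytes) {
    /* (VL-1)*VS is kept in a non-architected register in the real design
     * and updated implicitly whenever VL or VS changes. */
    mem_range r = { base, base + (uint64_t)(vl - 1) * vs_bytes };
    return r;
}

/* Dependence check: two memory instructions may conflict only if their
 * address ranges overlap; disjoint ranges can be reordered freely. */
static int ranges_overlap(mem_range a, mem_range b) {
    return a.start <= b.end && b.start <= a.end;
}
```

A memory instruction whose range overlaps no earlier queued instruction is free of dependences and may issue its requests out of order.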
Machine Parameters

Parameters        Scalar      Vector
                  (int/fp)    (int/fp)
read x-bar         -           2
vector startup     -           (*)
add                1/2         1/2
mul                5/2         5/2
logic/shift        1/2         1/2
div                34/9        34/9
sqrt               34/9        34/9

Table 1: Functional unit latencies (in cycles) for the two architectures. ((*) 0 in OOOVA, 1 in REF)

Table 1 presents the latencies of the various functional
units present in the architecture. Memory latency is
not shown in the table because it will be varied. The
memory system is modeled as follows. There is a single
address bus shared by all types of memory transactions
(scalar/vector and load/store), and physically
separate data busses for sending and receiving data
to/from main memory. Vector load instructions (and
gather instructions) pay an initial latency and then receive
one datum from memory per cycle. Vector store
instructions do not result in observed latency. We use a
value of 50 cycles as the default memory latency. Section
4.3 will present results on the effects of varying
this value.
The V register read/write ports have been modified
from the original C3400 scheme. In the OOOVA, each
vector register has 1 dedicated read port and 1 dedicated
write port. The original banking scheme of the
register file cannot be kept because renaming shuffles
all the compiler-scheduled read/write ports and,
therefore, would induce a lot of port conflicts.
All instruction queues are set at 16 slots. The
reorder buffer can hold 64 instructions. The machine
has a 64 entry BTB, where each entry has a
2-bit saturating counter for predicting the outcome of
branches. Also, an 8-deep return stack is used to predict
call/return sequences. Both scalar register files
(A and S) have 64 physical registers each. The mask
register file has 8 physical registers. The fetch stage,
the decode stage and all four queues only process a
maximum of 1 instruction per cycle. Committing instructions
proceeds at a faster rate, and up to 4 instructions
may commit per cycle.
Commit Strategy
For V registers we start with an aggressive implementation
where physical registers are released at the time
the vector instruction begins execution. Consider the
vector instruction: add v0,v1->v3. At the rename
stage, v3 will be re-mapped to, say, physical register
9 (ph9), and the old mapping of v3, which was, say,
physical register 12 (ph12), will be stored in the
reorder buffer slot associated with the add instruction.
When the add instruction begins execution, we mark
the associated reorder buffer slot as ready to be com-
mitted. When the slot reaches the head of the buffer,
ph12 is released. Due to the semantics of a vector
register, when ph12 is released, it is guaranteed that
all instructions needing ph12 have begun execution at
least one cycle before. Thus, the first element of ph12
is already flowing through the register file read cross-
bar. Even if ph12 is immediately reassigned to a new
logical register and some other instruction starts writing
into ph12, the instructions reading ph12 are at the very
least one cycle ahead and will always read the correct
values. This type of releasing does not allow for precise
exceptions, though. Section 5 will change the release
algorithm to allow for precise exceptions.

                     #insns           #ops      %      avg.
Program    Suite     S        V       V         Vect   VL
hydro2d    Spec      41.5     39.2    3973.8    99.0   101
arc2d      Perf.     63.3     42.9    4086.5    98.5    95
flo52      Perf.     37.7     22.8    1242.0    97.1    54
su2cor     Spec     152.6     26.8    3356.8    95.7   125
bdna       Perf.    239.0     19.6    1589.9    86.9    81
trfd       Perf.    352.2     49.5    1095.3    75.7    22
dyfesm     Perf.    236.1     33.0     696.2    74.7    21

Table 2: Basic operation counts for the Perfect Club and Specfp92 programs (Columns 3-5 are in millions).
To assess the performance benefits of out-of-order
issue and renaming in vector architectures we have
taken a trace driven approach. A subset of the Perfect
Club and Specfp92 programs is used as the benchmark
set. These programs are compiled on a Convex C3480
machine and the tool Dixie [3] is used to modify the executable
for tracing. Once the executables have been
processed by Dixie, the modified executables are run
on the Convex machine. These runs produce the desired
set of traces that accurately represent the execution of
the programs. These traces are then fed to two simulators
for the reference and OOOVA architectures.
3.1 The benchmark programs
Because we are interested in the benefits of out-of-
order issue for vector instructions, we selected benchmark
programs that are highly vectorizable. From all
programs in the Perfect and Specfp92 benchmarks we
chose the 10 programs that achieve at least 70% vec-
torization. Table 2 presents some statistics for the
selected Perfect Club and Specfp92 programs. Column
number 2 indicates to what suite each program
belongs. The next two columns present the total number
of instructions issued by the decode unit, broken
down into scalar and vector instructions. Column five
presents the number of operations performed by vector
instructions. The sixth column is the percentage
of vectorization of each program (i.e., column five divided
by the sum of columns three and five). Finally,
column seven presents the average vector length used
by vector instructions (the ratio of columns five and
four, respectively).
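The two derived columns can be checked directly from the raw counts; the hydro2d row (counts in millions), for instance, reproduces the 99.0% vectorization and the average vector length of 101. A small sketch:

```c
/* Derived columns of Table 2. Column 6 (percent vectorization) is
 * vector operations over scalar instructions plus vector operations;
 * column 7 (average vector length) is vector operations per vector
 * instruction. Inputs are in millions, as in the table. */
static double pct_vectorization(double scalar_insns_M, double vector_ops_M) {
    return 100.0 * vector_ops_M / (scalar_insns_M + vector_ops_M);
}

static double avg_vector_length(double vector_ops_M, double vector_insns_M) {
    return vector_ops_M / vector_insns_M;
}
```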
[Figure 3 bar charts: execution cycles for hydro2d and dyfesm, broken down by machine state, at several memory latencies]
Figure 3: Functional unit usage for the reference architecture. Each bar represents the total execution time of a program for a given latency. Values on the x-axis represent memory latencies in cycles.
4 Performance Results
4.1 Bottlenecks in the Reference Architecture
First we present an analysis of the execution of the
ten benchmark programs when run through the reference
architecture simulator.
Consider the three vector functional units of the
reference architecture (FU2, FU1 and MEM). The machine
state can be represented with a 3-tuple that
captures the individual state of each of the three units
at a given point in time. For example, the 3-tuple
⟨FU2, FU1, MEM⟩ represents a state where all units
are working, while ⟨ , , ⟩ represents a state where all
vector units are idle.
Figure 3 presents the execution time for two of the
ten benchmark programs (see [4] for the other 8 pro-
grams). Space limitations prevent us from providing
them all, but these two, hydro2d and dyfesm, are rep-
resentative. During an execution, a program is in one of
eight possible states. We have plotted the time
spent in each state for memory latencies of 1, 20, 70,
and 100 cycles. From this figure we can see that the
number of cycles where the programs proceed at peak
floating point speed (states ⟨FU2, FU1, MEM⟩ and
⟨FU2, FU1, ⟩) is low. The number of cycles
in these states changes relatively little as the memory
latency increases, so the fraction of fully used cycles
decreases. Memory latency has a high impact on total
execution time for programs dyfesm (shown in Figure
3), and trfd and flo52 (not shown), which have
relatively small vector lengths. The effect of memory
latency can be seen by noting the increase in cycles
spent in state ⟨ , , ⟩.
[Figure 4 bar chart: percentage of idle memory-port cycles for swm256, hydro2d, arc2d, flo52, nasa7, su2cor, tomcatv, bdna, trfd and dyfesm]
Figure 4: Percentage of cycles where the memory port was idle, for 4 different memory latencies.

The sum of cycles corresponding to states where the
MEM unit is idle is quite high in all programs. These
four states correspond to cycles where the memory
port could potentially be used to fetch data from
memory for future vector computations. Figure 4
presents the percentage of these cycles over total execution
time. At latency 70, the port idle time ranges
between 30% and 65% of total execution time. All
benchmark programs are memory bound when run
on a single port vector machine with two functional
units. Therefore, these unused memory cycles are not
the result of a lack of load/store work to be done.
4.2 Performance of the OOOVA
In this section we present the performance of the
OOOVA and compare it with the reference archi-
tecture. We consider both overall performance, in terms
of speedup, and memory port occupation.
The effects of adding out-of-order execution and renaming
to the reference architecture can be seen in
figure 5. For each program we plot the speedup over
the reference architecture when the number of physical
vector registers is varied from 9 to 64 (memory latency
is set at 50 cycles). In each graph, we show the
speedup for two OOOVA implementations: "OOOVA-
16" has length 16 instruction queues, and "OOOVA-
128" has length 128 queues. We also show the maximum
ideal speedup that can theoretically be achieved
("IDEAL", along the top of each graph). To compute
the IDEAL speedup for a program we use the total
number of cycles consumed by the most heavily used
vector unit (FU1, FU2, or MEM). Thus, in IDEAL we
essentially eliminate all data and memory dependences
from the program, and consider performance limited
only by the most saturated resource across the entire
execution.
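This bound can be stated as a trivial sketch of the definition above:

```c
/* IDEAL execution time: the busy-cycle count of the most heavily used
 * vector unit (FU1, FU2 or MEM), i.e., performance limited only by the
 * most saturated resource across the entire execution. */
static long ideal_cycles(long fu1_busy, long fu2_busy, long mem_busy) {
    long m = fu1_busy > fu2_busy ? fu1_busy : fu2_busy;
    return m > mem_busy ? m : mem_busy;
}
```

The IDEAL speedup for a program is then its reference execution time divided by this bound.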
As can be seen from figure 5, the OOOVA significantly
increases performance over the reference ma-
chine. With 16 physical registers, the lowest speedup
is 1.24 (for tomcatv). The highest speedups are for
trfd and dyfesm (1.72 and 1.70 resp.); the remaining
programs give speedups of 1.3-1.45. For numbers of
physical registers greater than 16, additional speedups
are generally small. The largest speedup from further
increasing the number of physical registers is for bdna,
where the additional
improvement is 8.3%. The improvement in bdna is
due to an extremely large main loop, which generates
a sequence of basic blocks with more than 800 vector
instructions. More physical registers allow it to better
match the large available ILP in these basic blocks.
On the other hand, if the number of physical vector
registers is a major concern, we observe that 12 physical
registers still give speedups of 1.63 and 1.70 for
trfd and dyfesm and that the other programs are in
the range of 1.23 to 1.38. These results suggest that
a physical vector register file with as few as 12 registers
is sufficient in most cases. A file with 16 registers is
enough to sustain high performance in every case.
When we increase the depth of the instruction
queues to 128, the performance improvement is quite
small (curve "OOOVA-128"). Analysis of the programs
shows that two factors combine to prevent further
improvements when increasing the number of issue
queue slots. First, the spill code present in large
basic blocks induces a lot of memory conflicts in the
memory queue. Second, the lack of scalar registers
sometimes prevents the dynamic unrolling of enough
iterations of a vector loop to make full usage of the
memory port.
Memory
The out-of-order issue feature allows memory access
instructions to slip ahead of computation instructions,
resulting in a compaction of memory access opera-
tions. The presence of fewer wasted memory cycles
is shown in figure 6. This figure contains the number
of cycles where the address port is idle divided by the
total number of execution cycles. Bars for the reference
machine, REF, and for the out-of-order machine,
OOOVA are shown. The OOOVA machine has 16
physical vector registers and a memory latency of 50
cycles. With OOOVA, the fraction of idle memory cycles
is more than cut in half in most cases. For all but
two of the benchmarks, the memory port is idle less
than 20% of the time.
Resource Usage
We now consider resource usage for the OOOVA machine
and compare it with the reference machine. This
is illustrated in figure 7. The same notation as in figure
3 is used for representing the execution state. As
in the previous subsections, the OOOVA machine has 16
physical vector registers and memory latency is set
at 50 cycles. Figure 7 shows that the major improvement
is in state ⟨ , , ⟩, which has almost disappeared.
Also, the fully-utilized state, ⟨FU2, FU1, MEM⟩, is
relatively more frequent due to the benefits of out-of-
order execution. As we have already seen, the availability
of more than one memory instruction ready to
be launched in the memory queues allows for much
higher usage of the memory port.
[Figure 5 plots: speedup vs. number of vector physical registers (9 to 64) for each program, showing the IDEAL, OOOVA-16 and OOOVA-128 curves]
Figure 5: Speedup of the OOOVA over the REF architecture for different numbers of vector physical registers.

[Figure 6 bar chart: idle memory-port cycles for REF and OOOVA across the ten benchmarks]
Figure 6: Percentage of idle cycles in the memory port for the Reference architecture and the OOOVA architecture. Memory latency is 50 cycles and the vector register file holds 16 physical vector registers.

4.3 Tolerance of Memory Latencies
One way of looking at the advantage of out-of-order
execution and register renaming is that it allows long
memory latencies to be hidden. In previous subsections
we showed the benefits of the OOOVA with a
fixed memory latency of 50 cycles. In this subsection
we consider the ability of the OOOVA machine to tolerate
main memory latencies.
Figure 8 shows the total execution time for the ten
programs when executed on the reference machine and
on the OOOVA machine for memory latencies of 1,
50, and 100 cycles. All results are for 16 physical vector
registers. As shown in the figure, the reference
machine is very sensitive to memory latency. Even
though it is a vector machine, memory latency influences
execution time considerably. On the other hand,
the OOOVA machine is much more tolerant of the increase
in memory latency. For most benchmarks the performance
is flat for the entire range of memory latencies, from 1
to 100 cycles.

[Figure 7 bar charts: execution-cycle breakdown by machine state for hydro2d and dyfesm]
Figure 7: Breakdown of the execution cycles for the REF (left bar) and OOOVA (right bar) machines. The OOOVA machine has 16 physical vector registers. For both architectures, memory latency was set at 50 cycles.
Another important point is that even at a memory
latency of 1 cycle the OOOVA machine typically
obtains speedups over the reference machine in the
range of 1.15-1.25 (and goes as high as 1.5 in the case
of dyfesm). This speedup indicates that the effects of
looking ahead in the instruction stream are good even
in the absence of long latency memory operations.
At the other end of the scale, we see that long
memory latencies can be easily tolerated using out-
of-order techniques. This indicates that the individual
memory modules in the memory system can be slowed
down (changing very expensive SRAM parts for much
cheaper DRAM parts) without significantly degrading
total throughput. This type of technology change could
have a major impact on the total cost of the machine,
which is typically dominated by the cost of the memory
subsystem.

[Figure 8 plots: execution cycles vs. main memory latency for REF, OOOVA and IDEAL]
Figure 8: Effects of varying main memory latency for three memory models and for the 16 physical vector register machines.
5 Implementing Precise Traps
An important side effect of introducing register renaming
into a vector architecture is that it enables a
straightforward implementation of precise exceptions.
In turn, the availability of precise exceptions allows
the introduction of virtual memory. Virtual memory
has been implemented in vector machines [15], but
is not used in many current high performance parallel
vector processors [7]. Or, it is used in a very
restricted form, for example by locking pages containing
vector data in memory while a vector program
executes [7, 14].
The primary problem with implementing precise
page faults in a high performance vector machine is
the high number of overlapped "in-flight" operations
- in some machines there may be several hundred.
Vector register renaming provides a convenient means
for saving the large amount of machine state required
for rollback to a precise state following a page fault or
other exception. If the contents of old logical vector
registers are kept until an instruction overwriting the
logical register is known to be free of exceptions, then
the architected state can be restored if needed.
In order to implement precise traps, we introduce
two changes to the OOOVA design: first, an instruction
is allowed to commit only after it has fully completed
(as opposed to the "early" commit scheme we
have been using). Second, stores are only allowed to
execute and update memory when they are at the head
of the reorder buffer; that is, when they are the oldest
uncommitted instructions.
Figure 9 presents a comparison of the speedups over
the reference architecture achieved by the OOOVA
with early commit (labeled "early"), and by the
OOOVA with late commit and execution of stores
only at the head of the reorder buffer (labeled "late").
Again, all simulations are performed with a memory
latency of 50 cycles.
We can make two important observations about the
graphs in Figure 9. First, the performance degradation
due to the introduction of the late commit model
is small for eight out of the ten programs. Programs
hydro2d, arc2d, su2cor, tomcatv and bdna all degrade
less than 5% with 16 physical registers; programs flo52
and nasa7 degrade by 7% and 10.3%, respectively.
Nevertheless, performance of the other two programs,
trfd and dyfesm, is hurt rather severely when going to
the late commit model (a 41% and 47% degradation,
respectively). This behavior is explained by load-store
dependences. The main loop in trfd has a memory dependence
between the last vector store of iteration i
and the first vector load of iteration i+1 (both are
to the same address). In the early commit model, the
store is done as soon as its input data is ready (with
chaining between the producer and the store). In the
late commit model, the store must wait until all
intervening instructions between the producer and the
store have committed. This delays the dispatching of
the first load of the following iteration and explains
the high slowdown. A similar situation explains the
degradation in dyfesm.
[Figure 9 plots: speedup vs. number of vector physical registers for each program under the early and late commit schemes, with IDEAL]
Figure 9: Speedups of the OOOVA over the reference architecture for different numbers of vector physical registers under the early and late commit schemes.

Second, in the late commit model, 12 registers are
clearly not enough. The performance difference between
12 and 16 registers is much larger than in the
early commit model. Thus, from a cost/complexity
point of view, the introduction of late commit has a
clear impact on the implementation of the vector
register file.
6 Dynamic Load Elimination
Register renaming with many physical registers
solves instruction issue bottlenecks caused by a limited
number of logical registers. However, there is another
problem caused by limited logical registers: register
spilling. The original compiled code still contains register
spills caused by the limited number of architected
registers, and to be functionally correct these spills
must be executed. Furthermore, besides the obvious
store-load spills, limited registers also cause repeated
loads from the same memory location.
Limited registers are common in vector architec-
tures, and the spill problem is aggravated because storing
and re-loading a single vector register involves the
movement of many words of data to and from memory.
To illustrate the importance of spill code for vector ar-
chitectures, table 3 shows the number of memory spill
operations (number of words moved) in the ten benchmark
programs. In some of the benchmarks relatively
few of the loads and stores are due to spills, but in
several there is a large amount of spill traffic. For ex-
ample, over 69% of the memory traffic in bdna is due
to spills.
In this section we propose and study a method that
uses register renaming to eliminate much of the memory
load traffic due to spills. The method we propose
also has significant performance advantages because a
load for spilled data is executed in nearly zero time.

           Vector load ops        Vector store ops     Total
Program    load    spill    %     store   spill    %     %
hydro2d    1297      21    1.6     431      21     5    2.4
arc2d      1244     122    9       479      87    15    11
nasa7      1048      21    2.0     632      20     3    2.4
su2cor      786     201   20       404     103    20    20
bdna        142     266

Table 3: Vector memory spill operations. Columns 2, 3, 5 and 6 are in millions of operations.
We do not eliminate spill stores, however, because of
the need to maintain strict binary compatibility. That
is, the memory image should reflect functionally correct
state. Relaxing compatibility could lead to removing
some spill stores, but we have not yet pursued
this approach.
6.1 Renaming under Dynamic Load Elim-
ination
To eliminate redundant load instructions we propose
the following technique. A tag is associated with
each physical register (A, S and V). This tag indicates
the memory locations currently being held by
the register. For vector registers, the tag is a 6-tuple:
define a consecutive region of bytes in memory and
vl, vs, and sz are the vector length, vector stride and
access granularity used when the tag was created; v is
a validity bit. For scalar registers, the tag is a 4-tuple
- vl and vs are not needed. Although the problem of
spilling scalar registers is somewhat tangential
to our study, they are important in the Convex architecture
because of its limited number of registers.
Each time a memory operation is performed, its
range of addresses is computed (this is done in the second
stage of the memory pipeline). If the operation is
a load, the tag associated with the destination physical
register is filled with the appropriate address informa-
tion. If the operation is a store, then the physical register
being stored to memory has its tag updated with
the corresponding address information. Thus, each
time a memory operation is performed, we "alias" the
register contents with the memory addresses used for
loading or storing the physical register: the tag indicates
an area in memory that matches the register
data.
To keep tag contents consistent with memory, when
a store instruction is executed its tag has to be compared
against all tags already present in the register
files. If any conflict is found, that is, if the memory
range defined by the store tag overlaps any of the existing
tags, these existing tags must be invalidated (to
simplify the conflict checking hardware, this invalidation
may be done conservatively).
By using the register tags, some vector load operations
can be eliminated in the following manner.
When a vector load enters the third stage of the memory
pipeline, its tag is checked against all tags found
in the vector register file. If an exact match is found
(an exact match requires all tag fields to be identical),
the destination register of the vector load is renamed
to the physical register it matches. At this point the
load has effectively been completed - in the time it
takes to do the rename. Furthermore, matching is not
restricted to live registers, it can also occur with a
physical register that is on the free list. As long as
the validity bit is set, any register (in the free list or
in use) is eligible for matching. If a load matches a
register in the free list, the register is taken from the
list and added to the register map table.
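The tag bookkeeping can be sketched in software as follows. This is an illustrative model, not the hardware: the fields follow the 6-tuple above, but the structure and function names are our assumptions.

```c
#include <stdint.h>

#define NUM_PHYS 16   /* illustrative physical vector register count */

/* Memory tag attached to each physical vector register. */
typedef struct {
    uint64_t start, end;   /* consecutive byte region aliased by the register */
    uint32_t vl;           /* vector length when the tag was created */
    int64_t  vs;           /* vector stride when the tag was created */
    uint32_t sz;           /* access granularity */
    int      v;            /* validity bit */
} vreg_tag;

static vreg_tag tags[NUM_PHYS];

/* A store must invalidate every tag whose range overlaps its own
 * (hardware may do this conservatively); then the stored register's
 * tag is set, since its contents now alias that memory region. */
static void store_update(int src_reg, vreg_tag st) {
    for (int p = 0; p < NUM_PHYS; p++)
        if (tags[p].v && st.start <= tags[p].end && tags[p].start <= st.end)
            tags[p].v = 0;
    st.v = 1;
    tags[src_reg] = st;
}

/* A vector load is eliminated on an exact tag match (all fields equal):
 * the load's destination is renamed to the matching physical register.
 * Returns the matching register, or -1 if the load must go to memory. */
static int load_match(vreg_tag ld) {
    for (int p = 0; p < NUM_PHYS; p++)
        if (tags[p].v && tags[p].start == ld.start && tags[p].end == ld.end &&
            tags[p].vl == ld.vl && tags[p].vs == ld.vs && tags[p].sz == ld.sz)
            return p;
    return -1;
}
```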
For scalar registers, eliminating loads is simpler.
When a match involving two scalar registers is de-
tected, the register value is copied from one register
to the other. The scalar rename table is not affected.
Note, however, that scalar store addresses still need
to be compared against vector register tags and vector
stores need to be compared against scalar tags to
ensure full consistency.
A similar memory tagging technique for scalar registers
is described in [2]. There, tagging is used to
store memory variables in registers in the face of potential
aliasing problems. That approach, though, is
complicated because data is automatically copied from
register to register when a tag match is found. There-
fore, compiler techniques are required to adapt to this
implied data movement. In our application, a tag operation
either (a) alters only the rename table or (b)
invalidates a tag without changing any register value.
[Figure 10 diagram: modified pipelines in which scalar (A, S) registers are renamed at decode, while vector instructions pass through the address range and dependency calculation stages and are renamed at V-RENAME before the register-file crossbar and execution stages]
Figure 10: The modified instruction pipelines for the Dynamic Load Elimination OOOVA.
6.2 Pipeline modifications
With the scheme just described, when a vector load
is eliminated at the disambiguation stage of the memory
pipeline, the vector register renaming table is up-
dated. Renaming is considerably complicated if vector
registers are renamed in two different pipeline stages
(at the decode and disambiguation stages). Therefore,
the pipeline structure is modified to rename all vector
registers in one and only one stage.
Figure
shows the modified pipeline. At the decode
stage, all scalar registers are renamed but all
vector registers are left untouched. Then, all instructions
using a vector register pass in-order through the
3 stages of the memory pipeline. When they arrive
at the disambiguation stage, renaming of vector registers
is done. This ensures that all vector instructions
see the same renaming table and that modifications
introduced by the load elimination scheme are available
to all following vector instructions. Moreover,
this ensures that store tags are compared against all
previous tags in order.
6.3 Performance of dynamic load elimina-
tion
In this section we present the performance of the
OOOVA machine enhanced with dynamic load elimi-
nation. As a baseline we use the late commit OOOVA
described above, without dynamic load elimination.
We also study the OOOVA with load elimination for
scalar data only (SLE) and OOOVA with load elimination
for both scalars and vectors (SLE+VLE).
Figures
11 and 12 present the speedup of SLE
and SLE+VLE over the baseline OOOVA for different
numbers of physical vector registers (16, 32, 64).
For SLE+VLE with 16 vector registers (figure 12),
speedups over the base OOOVA are from 1.04 to 1.16
for most programs and are as high as 1.78 and 2.13 for
dyfesm and trfd. At 32 registers, the
available storage space for keeping vector data doubles
and allows more tag matchings. The speedups increase
significantly and their range for most programs
is between 1.10 and 1.20. For dyfesm and trfd, the
speedups remain very high, but do not appreciably
improve when going from 16 to 32 registers.
Doubling the number of vector registers again, to
64, does not yield much additional speedup.

Figure 11: Speedup of SLE over the OOOVA machine for 3 different physical vector register file sizes.

Figure 12: Speedup of SLE+VLE over the OOOVA machine for 3 different physical vector register file sizes.

For most programs, the improvement is below 5%, and only
tomcatv and trfd seem to be able to take advantage
of the extra registers (tomcatv goes from 1.19 up to
1.40). The results show that most of the data movement
to be eliminated is captured with 32 vector registers.
The remarkably different performance behavior of
dyfesm and trfd requires explanation. This can be
done by looking at SLE (figure 11). Under SLE, all other programs have very low speedups and, yet, trfd and dyfesm achieve considerably higher ones (dyfesm reaches 1.36, for the configuration with 32 vector registers). Our analysis of these two programs
shows that the ability to bypass scalar data allows
these programs to "see" more iterations of a certain
loop at once. In particular, the ability to bypass
data between loads and stores allows them to unroll
the two most critical loops, whereas without SLE, the
unrolling was not possible.
6.4 Traffic Reduction
A very important effect of dynamic load elimination
is that it reduces the total amount of traffic seen by
the memory system. This is a very important feature
Figure 13: Traffic reduction under dynamic load elimination with 32 physical vector registers.
in multiprocessing environments, where less load on
the memory modules usually translates into an overall
system performance improvement.
We have computed the traffic reduction of each of
the programs for the two dynamic load elimination
configurations considered. We define the traffic reduction
as the ratio between the total number of requests (loads and stores) sent over the address bus by the baseline OOOVA and the total number of requests made by either the SLE or the SLE+VLE configuration. Figure 13 presents this ratio for 32 physical vector registers. As an example, figure 13 shows that
the SLE configuration for dyfesm performs 11% fewer
memory requests than the OOOVA configuration.
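Restating the definition as a one-line helper (the request counts below are invented for illustration):

```python
def traffic_reduction(baseline_requests, reduced_requests):
    # Ratio from the text: loads+stores sent over the address bus by the
    # baseline OOOVA, divided by requests under SLE or SLE+VLE.
    return baseline_requests / reduced_requests

# e.g. a configuration issuing 11% fewer requests than a 1000-request baseline
r = traffic_reduction(1000, 890)
```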
As can be seen, for SLE+VLE, the typical traffic
reduction is between 15 and 20%. Programs dyfesm
and trfd, due to their special behavior already men-
tioned, have much larger reductions, as much as 40%.
7 Summary
In this paper we have considered the usefulness of
out-of-order execution and register renaming for vector
architectures. We have seen through simulation
that the traditional in-order vector execution model
is not enough to fully use the bandwidth of a single
memory port and to cover up for main memory latency
(even considering that the programs were memory
bound). We have shown that when out-of-order
issue and register renaming are introduced, vector performance
is increased. This performance advantage
can be realized even when adding only a few extra
physical registers to be used for renaming. Out-of-
order execution is as useful in a vector processor as it
is widely recognized to be in current superscalar microprocessors.
Using only 12 physical vector registers and an aggressive
commit model, we have shown significant
speedups over the reference machine. At a modest
cost of 16 vector registers, the range of speedups was
1.24-1.72. Increasing the number of vector registers
up to 64 does not lead to significant extra improve-
ments, however.
Moreover, we have shown that large memory latencies
of up to 100 cycles can be easily tolerated. The
dynamic reordering of vector instructions and the disambiguation
mechanisms introduced allow the memory
unit to send a continuous flow of requests to the
memory system. This flow is overlapped with the arrival
of data and covers up main memory latency.
The introduction of register renaming gives a powerful
tool for implementing precise exceptions. By
changing the aggressive commit model into a conservative
model where an instruction only commits when
it (and all its predecessors) are known to be free of
exceptions, we can recover all the architectural state
at any point in time. This allows the easy introduction
of virtual memory. Our simulations have shown
that the implementation of precise exceptions costs
around 10% in application performance, though some
programs may be much more sensitive than others.
One problem not solved by register renaming is register
spilling. The addition of extra physical registers,
per se, does not reduce the amount of spilled data.
We have introduced a new technique, dynamic load
elimination, that uses the renaming mechanism to reduce
the amount of load spill traffic. By tagging all
our registers with memory information we can detect
when a certain load is redundant and its required data
is already in some other physical register. Under such
conditions, the load can be performed through a simple
rename table change. Our simulations have shown
that this technique can further improve performance
typically by factors of 1.07-1.16 (and as high as 1.78).
The dynamic load elimination technique can benefit
from more physical registers, since it can cache more
data inside the vector register file. Simulations with
32 physical vector registers show that load elimination
yields improvements typically in the range 1.10-1.20.
Moreover, at 32 registers, load elimination can reduce
the total traffic to the memory system by factors ranging
between 15-20% and, in some cases, up to 40%.
Finally, we feel that our results should be of use to
the growing community of processor architects implementing
some kind of multimedia extensions. As
graphics coprocessors and DSP functions are incorporated
into general purpose microprocessors, the advantages
of vector instruction sets will become more
evident. In order to sustain high throughput to and
from special purpose devices such as frame buffers,
long memory latencies will have to be tolerated. These
types of applications generally require high bandwidths
between the chip and the memory system
not available in current microprocessors. For both
bandwidth and latency problems, out-of-order vector
implementations can help achieve improved performance.
References
The T0 Vector Microprocessor.
A new kind of memory for referencing arrays and pointers.
Dixie: a trace generation system for the C3480.
Decoupled vector architectures.
Quantitative analysis of vector code.
The performance impact of vector processor caches.
The parallel processing feature of the NEC SX-3 supercomputer system
Cache performance in vector supercomput- ers
A Case for Intelligent DRAM: IRAM.
Relationship between average and real memory behavior.
The CRAY-1 computer system
Explaining the gap between theoretical peak performance and real performance for super-computer architectures
HNSX Supercomputers Inc.
Architecture of the VPP500 Parallel Supercomputer.
The Mips R10000 Superscalar Microprocessor.
Improving code density using compression techniques

Abstract: We propose a method for compressing programs in embedded processors where instruction memory size dominates cost. A post-compilation analyzer examines a program and replaces common sequences of instructions with a single instruction codeword. A microprocessor executes the compressed instruction sequences by fetching code words from the instruction memory, expanding them back to the original sequence of instructions in the decode stage, and issuing them to the execution stages. We apply our technique to the PowerPC, ARM, and i386 instruction sets and achieve an average size reduction of 39%, 34%, and 26%, respectively, for SPEC CINT95 programs.
According to a recent prediction by In-Stat Inc., the merchant processor market is set to
exceed $60 billion by 1999, and nearly half of that will be for embedded processors. However, by
unit count, embedded processors will exceed the number of general purpose microprocessors by a
factor of 20. Compared to general purpose microprocessors, processors for embedded applications
have been much less studied. The figures above suggest that they deserve more attention.
Embedded processors are more highly constrained by cost, power, and size than general purpose
microprocessors. For control oriented embedded applications, the most common type, a significant
portion of the final circuitry is used for instruction memory. Since the cost of an integrated
circuit is strongly related to die size, and memory size is proportional to die size, developers want
their program to fit in the smallest memory possible. An additional pressure on program memory
is the relatively recent adoption of high-level languages for embedded systems because of the
need to control development costs. As typical code sizes have grown, these costs have ballooned
at rates comparable to those seen in the desktop world. Thus, the ability to compress instruction
code is important, even at the cost of execution speed.
High performance systems are also impacted by program size due to the delays incurred by
instruction cache misses. A study at Digital [Perl96] showed that an SQL server on a DEC 21064
Alpha, is bandwidth limited by a factor of two on instruction cache misses alone. This problem
will only increase as the gap between processor performance and memory performance grows.
Reducing program size is one way to reduce instruction cache misses and achieve higher performance
[Chen97b].
This paper focuses on compression for embedded applications, where execution speed can be
traded for compression. We borrow concepts from the field of text compression and apply them to
the compression of instruction sequences. We propose modifications at the microarchitecture level
to support compressed programs. A post-compilation analyzer examines a program and replaces
common sequences of instructions with a single instruction codeword. A microprocessor executes
the compressed instruction sequences by fetching codewords from the instruction memory,
expanding them back to the original sequence of instructions in the decode stage, and issuing
them to the execution stages. We demonstrate our technique by applying it to the PowerPC
instruction set.
1.1 Code generation
Compilers generate code using a Syntax Directed Translation Scheme (SDTS) [Aho86]. Syntactic
source code patterns are mapped onto templates of instructions which implement the appropriate
semantics. Consider a simple schema to translate a subset of integer arithmetic:

expr : expr '+' expr   { emit(add $1, $1, $3); }
expr : expr '*' expr   { emit(mullw $1, $1, $3); }
These patterns show syntactic fragments on the right hand side of the two productions which
are replaced (or reduced) by a simpler syntactic structure. Two expressions which are added (or
multiplied) together result in a single, new expression. The register numbers holding the operand
expressions ($1 and $3) are encoded into the add (multiplication) operation and emitted into the
generated object code. The result register ($1) is passed up the parse tree for use in the parent
operation. These two patterns are reused for all arithmetic operations throughout program compilation
More complex actions (such as translation of control structures) generate more instructions,
albeit still driven by the template structure of the SDTS.
In general, the only difference in instruction sequences for given source code fragments at different
points in the object module are the register numbers in arithmetic instructions and operand
offsets for load and store instructions. As a consequence, object modules are generated with many
common sub-sequences of instructions. There is a high degree of redundancy in the encoding of
the instructions in a program. In the programs we examined, only a small number of instructions
had bit pattern encodings that were not repeated elsewhere in the same program. Indeed, we found
that a small number of instruction encodings are highly reused in most programs.
To illustrate the redundancy of instruction encodings, we profiled the SPEC CINT95 benchmarks
[SPEC95]. The benchmarks were compiled for PowerPC with GCC 2.7.2 using -O2 opti-
mization. Figure 1 shows that compiled programs consist of many instructions that have identical
encodings. On average, less than 20% of the instructions in the benchmarks have bit pattern
encodings which are used exactly once in the program. In the go benchmark, for example, 1% of
the most frequent instruction words account for 30% of the program size, and 10% of the most
frequent instruction words account for 66% of the program size. It is clear that the redundancy of
instruction encodings provides a great opportunity for reducing program size through compression
techniques.
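The redundancy measured here amounts to counting repeated 32-bit encodings. A sketch with an invented toy "program" of instruction words:

```python
from collections import Counter

def singleton_fraction(words):
    # Fraction of all instructions whose encoding occurs exactly once.
    counts = Counter(words)
    return sum(1 for w in words if counts[w] == 1) / len(words)

# toy program: two encodings repeat three times each, two occur once
program = [0x7D2A4B78, 0x38000001, 0x7D2A4B78, 0x38000001,
           0x7D2A4B78, 0x4E800020, 0x60000000, 0x38000001]
frac = singleton_fraction(program)   # 2 of 8 instructions are singletons
```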
1.2 Overview of compression method
Our compression method finds sequences of instruction bytes that are frequently repeated
throughout a single program and replaces the entire sequence with a single codeword. All rewritten
(or encoded) sequences of instructions are kept in a dictionary which, in turn, is used at program
execution time to expand the singleton codewords in the instruction stream back into the
original sequence of instructions. All codewords assigned by the compression algorithm are
merely indices into the instruction dictionary.
The final compressed program consists of codewords interspersed with uncompressed instructions
Figure
2 illustrates the relationship between the uncompressed code, the compressed code,
and the dictionary. A complete description of our compression method is presented in Section 3.
Figure 1: Distinct instruction encodings as a percentage of entire program (bars distinguish encodings used only once from encodings used multiple times).
Uncompressed Code
clrlwi r11,r9,24
addi r0,r11,1
cmplwi cr1,r0,8
ble cr1,000401c8
cmplwi cr1,r11,7
bgt cr1,00041d34
stb r18,0(r28)
clrlwi r11,r9,24
addi r0,r11,1
cmplwi cr1,r0,8
bgt cr1,00041c98
Compressed Code
CODEWORD #1
ble cr1,000401c8
cmplwi cr1,r11,7
bgt cr1,00041d34
CODEWORD #2
CODEWORD #1
bgt cr1,00041c98
Dictionary
clrlwi r11,r9,24
addi r0,r11,1
cmplwi cr1,r0,8
stb r18,0(r28)
Figure 2: Example of compression
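The decode-stage expansion implied by Figure 2 is a table lookup. The sketch below replays the figure's example; representing codewords as integer dictionary indices mixed with instruction strings is our own encoding for illustration:

```python
# Dictionary entries from Figure 2: each codeword index expands to the
# original instruction sequence it replaced.
dictionary = {
    1: ["clrlwi r11,r9,24", "addi r0,r11,1", "cmplwi cr1,r0,8"],
    2: ["stb r18,0(r28)"],
}

def expand(compressed):
    # Replace each integer codeword with its dictionary sequence;
    # uncompressed instructions pass through unchanged.
    out = []
    for item in compressed:
        if isinstance(item, int):          # a codeword: dictionary index
            out.extend(dictionary[item])
        else:                              # an ordinary instruction
            out.append(item)
    return out

compressed = [1, "ble cr1,000401c8", "cmplwi cr1,r11,7",
              "bgt cr1,00041d34", 2, 1, "bgt cr1,00041c98"]
original = expand(compressed)   # recovers the 11-instruction sequence
```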
2 Background and Related Work
In this section we will discuss strategies for text compression, and methods currently
employed by microprocessor manufacturers to reduce the impact of RISC instruction sets on program
size.
2.1 Text compression
Text compression methods fall into two general categories: statistical and dictionary.
Statistical compression uses the frequency of singleton characters to choose the size of the
codewords that will replace them. Frequent characters are encoded using shorter codewords so
that the overall length of the compressed text is minimized. Huffman encoding of text is a well-known
example.
Dictionary compression selects entire phrases of common characters and replaces them with a
single codeword. The codeword is used as an index into the dictionary entry which contains the
original characters. Compression is achieved because the codewords use fewer bits than the characters
they replace.
There are several criteria used to select between using dictionary and statistical compression
techniques. Two very important factors are the decode efficiency and the overall compression
ratio. The decode efficiency is a measure of the work required to re-expand a compressed text.
The compression ratio is defined by the formula:

compression ratio = compressed size / original size. (Eq. 1)
Dictionary decompression uses a codeword as an index into the dictionary table, then inserts
the dictionary entry into the decompressed text stream. If codewords are aligned with machine
words, the dictionary lookup is a constant time operation. Statistical compression, on the other
hand, uses codewords that have different bit sizes, so they do not align to machine word bound-
aries. Since codewords are not aligned, the statistical decompression stage must first establish the
range of bits comprising a codeword before text expansion can proceed.
It can be shown that for every dictionary method there is an equivalent statistical method
which achieves equal compression and can be improved upon to give better compression [Bell90].
Thus statistical methods can always achieve better compression than dictionary methods albeit at
the expense of additional computation requirements for decompression. It should be noted, how-
ever, that dictionary compression yields good results in systems with memory and time constraints
because one entry expands to several characters. In general, dictionary compression
provides for faster (and simpler) decoding, while statistical compression yields a better compression
ratio.
2.2 Compression for RISC instruction sets
Although a RISC instruction set is easy to decode, its fixed-length instruction formats are
wasteful of program memory. Thumb [ARM95][MPR95] and MIPS16 [Kissell97] are two
recently proposed instruction set modifications which define reduced instruction word sizes in an
effort to reduce the overall size of compiled programs.
Thumb is a subset of the ARM architecture consisting of 36 ARM 32-bit wide instructions
which have been re-encoded to require only 16 bits. The instructions included in Thumb either do
not require a full 32-bits, are frequently used, or are important to the compiler for generating
small object code. Programs compiled for Thumb achieve 30% smaller code in comparison to the
standard ARM instruction set [ARM95].
MIPS16 defines a 16-bit fixed-length instruction set architecture (ISA) that is a subset of
MIPS-III. The instructions used in MIPS16 were chosen by statistically analyzing a wide range of
application programs for the instructions most frequently generated by compilers. Code written
for 32-bit MIPS-III is typically reduced 40% in size when compiled for MIPS16 [Kissell97].
Both Thumb and MIPS16 act as preprocessors for their underlying architectures. In each case,
a 16-bit instruction is fetched from the instruction memory, expanded into a 32-bit wide instruc-
tion, and passed to the base processor core for execution.
Both the Thumb and MIPS16 shrink their instruction widths at the expense of reducing the
number of bits used to represent register designators and immediate value fields. This confines
Thumb and MIPS16 programs to 8 registers of the base architecture and significantly reduces the
range of immediate values.
As subsets of their base architectures, Thumb and MIPS16 are neither capable of generating
complete programs, nor operating the underlying machine. Thumb relies on 32-bit instructions for
memory management and exception handling while MIPS16 relies on 32-bit instructions for
floating-point operations. Moreover, Thumb cannot exploit the conditional execution and zero-
latency shifts and rotates of the underlying ARM architecture. Both Thumb and MIPS16 require
special branch instructions to toggle between 32-bit and 16-bit modes.
The fixed set of instructions which comprise Thumb and MIPS16 were chosen after an assessment
of the instructions used by a range of applications. Neither architecture can access all regis-
ters, instructions, or modes of the underlying 32-bit core architecture.
In contrast, we derive our codewords and dictionary from the specific characteristics of the
program under execution. Because of this, a compressed program can access all the resources
available on the machine, yet can still exploit the compressibility of each individual program.
2.3 CCRP
The Compressed Code RISC Processor (CCRP) described in [Wolfe92][Wolfe94] has an
instruction cache that is modified to run compressed programs. At compile-time the cache line
bytes are Huffman encoded. At run-time cache lines are fetched from main memory, uncom-
pressed, and put in the instruction cache. Instructions fetched from the cache have the same
addresses as in the uncompressed program. Therefore, the core of the processor does not need
modification to support compression. However, cache misses are problematic because missed
instructions in the cache do not reside at the same address in main memory. CCRP uses a Line
Address Table (LAT) to map missed instruction cache addresses to main memory addresses where
the compressed code is located. The LAT limits compressed programs to only execute on processors
that have the same line size for which they were compiled.
One shortcoming of CCRP is that it compresses on the granularity of bytes rather than full
instructions. This means that CCRP requires more overhead to encode an instruction than our
scheme which encodes groups of instructions. Moreover, our scheme requires less effort to
decode a program since a single codeword can encode an entire group of instructions. In addition,
our compression method does not need a LAT mechanism since we patch all branches to use the
new instruction addresses in the compressed program.
2.4 Liao et al.
A purely software method of supporting compressed code is proposed in [Liao96]. The
author finds mini-subroutines which are common sequences of instructions in the program. Each
instance of a mini-subroutine is removed from the program and replaced with a call instruction.
The mini-subroutine is placed once in the text of the program and ends with a return instruction.
Mini-subroutines are not constrained to basic blocks and may contain branch instructions under
restricted conditions. The prime advantage of this compression method is that it requires no hardware
support. However, the subroutine call overhead will slow program execution.
[Liao96] suggests a hardware modification to support code compression consisting primarily
of a call-dictionary instruction. This instruction takes two arguments: location and length. Common
instruction sequences in the program are saved in a dictionary, and the sequence is replaced
in the program with the call-dictionary instruction. During execution, the processor jumps to the
point in the dictionary indicated by location and executes length instructions before implicitly
returning. [Liao96] limits the dictionary to use sequences of instructions within basic blocks only.
[Liao96] does not explore the trade-off of the field widths for the location and length arguments
in the call-dictionary instruction. Only codewords that are 1 or 2 instruction words in size
are considered. This requires the dictionary to contain sequences with at least 2 or 3 instructions,
respectively, since shorter sequences would be no bigger than the call-dictionary instruction and
no compression would result.
Since single instructions are the most frequently occurring patterns, it is important to use a
scheme that can compress them. In this paper we vary the parameters of dictionary size (the number
of entries in the dictionary) and the dictionary entry length (the number of instructions at each
dictionary entry) thus allowing us to examine the efficacy of compressing instruction sequences of
any length.
3 Compression Method
3.1 Algorithm
Our compression method is based on the technique introduced in [Bird96][Chen97a]. A dictionary
compression algorithm is applied after the compiler has generated the program. We take
advantage of SDTS and find common sequences of instructions to place in the dictionary. Our
algorithm is divided into 3 steps:
1. Build the dictionary
2. Replace instruction sequences with codewords
3. Encode the codewords
3.1.1 Dictionary content
For an arbitrary text, choosing those entries of a dictionary that achieve maximum compression
is NP-complete in the size of the text [Storer77]. As with most dictionary methods, we use a
greedy algorithm to quickly determine the dictionary entries 1 . On every iteration of the algorithm,
we examine each potential dictionary entry and find the one that results in the largest immediate
savings. The algorithm continues to pick dictionary entries until some termination criteria has
been reached; this is usually the exhaustion of the codeword space. The maximum number of dictionary
entries is determined by the choice of the encoding scheme for the codewords. Obviously,
codewords with more bits can index a larger range of dictionary entries. We limit the dictionary
entries to sequences of instructions within a basic block. We allow branch instructions to branch
to codewords, but they may not branch within encoded sequences. We also do not compress
branches with offset fields. These restrictions simplify code generation.
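Step 1 can be sketched as follows, under simplifying assumptions of our own: instructions are opaque words, every codeword costs one word, and a dictionary entry costs its own length in words. The real algorithm repeats the selection until the codeword space is exhausted; this sketch picks a single best entry.

```python
def count_nonoverlapping(program, seq):
    # Occurrences of seq in program, left to right, without overlap.
    i = n = 0
    while i + len(seq) <= len(program):
        if tuple(program[i:i + len(seq)]) == seq:
            n, i = n + 1, i + len(seq)
        else:
            i += 1
    return n

def savings(program, seq):
    # Words saved: n occurrences shrink to n one-word codewords, and the
    # dictionary stores the sequence once at full length.
    n = count_nonoverlapping(program, seq)
    return n * (len(seq) - 1) - len(seq)

def best_entry(program, max_len=4):
    # Greedy step: examine every candidate and keep the largest saving.
    best, best_save = None, 0
    for length in range(1, max_len + 1):
        for i in range(len(program) - length + 1):
            seq = tuple(program[i:i + length])
            s = savings(program, seq)
            if s > best_save:
                best, best_save = seq, s
    return best, best_save

prog = ["a", "b", "c", "x", "a", "b", "c", "y", "a", "b", "c"]
entry, saved = best_entry(prog)   # ("a", "b", "c") saves 3 words
```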
3.1.2 Replacement of instructions by codewords
Our greedy algorithm combines the step of building the dictionary with the step of replacing
instruction sequences. As each dictionary entry is defined, all of its instances in the program are
replaced with a token. This token is replaced with an efficient encoding in the encoding step.
3.1.3 Encoding
Encoding refers to the representation of the codewords in the compressed program. As discussed
in Section 2.1, variable-length codewords (such as those used in the Huffman encoding in CCRP) are expensive to decode. A fixed-length codeword, on the other hand, can be used
directly as an index into the dictionary making decoding a simple table lookup operation.
Our baseline compression method uses a fixed-length codeword to enable fast decoding. We
also investigate a variable-length scheme. However, we restrict the variable-length codewords to
be a multiple of some basic unit. For example, we present a compression scheme with codewords
that are 4 bits, 8 bits, 12 bits, and 16 bits. All instructions (compressed and uncompressed) are
aligned to the size of the smallest codeword. The shortest codewords encode the most frequent
dictionary entries to maximize the savings. This achieves better compression than a fixed-length
encoding, but complicates decoding.
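The variable-length scheme can be sketched by assigning widths to dictionary entries in frequency order. The class capacities below (how many indices each width can encode) are invented, and the sketch ignores how the decoder tells the four width classes apart:

```python
CLASS_BITS  = [4, 8, 12, 16]          # allowed codeword widths
CLASS_SLOTS = [8, 64, 512, 4096]      # hypothetical index space per width

def assign_widths(entries_by_freq):
    # entries_by_freq: (entry, count) pairs, most frequent first; the most
    # frequent entries get the shortest codewords to maximize savings.
    widths, cls, used = {}, 0, 0
    for entry, _count in entries_by_freq:
        while used >= CLASS_SLOTS[cls]:
            cls, used = cls + 1, 0
        widths[entry] = CLASS_BITS[cls]
        used += 1
    return widths

entries = [("seq%d" % i, 100 - i) for i in range(10)]
widths = assign_widths(entries)   # 8 entries get 4 bits, the rest 8 bits
```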
3.2 Related Issues
3.2.1 Branch instructions
One side effect of any compression scheme is that it alters the locations of instructions in the
program. This presents a special problem for branch instructions, since branch targets change as a
result of program compression.
1. Greedy algorithms are often near-optimal in practice.
For this study, we do not compress relative branch instructions (i.e. those containing an offset
field used to compute a branch target). This makes it easy for us to patch the offset fields of the
branch instruction after compression. If we allowed compression of relative branches, we might
need to rewrite codewords representing relative branches after a compression pass; but this would
affect relative branch targets, thus requiring a rewrite of codewords, etc. The result is an NP-complete
problem [Szymanski78].
Indirect branches are compressed in our study. Since these branches take their target from a
register, the branch instruction itself does not need to be patched after compression, so it cannot
create the codeword rewriting problem outlined above. However, jump tables (containing program
addresses) need to be patched to reflect any address changes due to compression. GCC puts jump
table data in the .text section immediately following the branch instruction. We assume that
this table could be relocated to the .data section and patched with the post-compression branch
target addresses.
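Since relative branches stay uncompressed, patching them after compression reduces to two passes: recompute every instruction's address under the new sizes, then rewrite each branch's offset from the new addresses. A minimal sketch of this idea, using hypothetical instruction records rather than the paper's actual tool:

```python
# Sketch: patch uncompressed relative branches after compression.
# Instructions are (kind, size_bytes, target_index) records; a target is
# the index of another instruction. All names here are illustrative.

def patch_branches(insns):
    # First pass: assign post-compression byte addresses.
    addr, addrs = 0, {}
    for i, (kind, size, _) in enumerate(insns):
        addrs[i] = addr
        addr += size
    # Second pass: compute new offsets for (uncompressed) relative branches.
    offsets = {}
    for i, (kind, size, target) in enumerate(insns):
        if kind == "branch":
            offsets[i] = addrs[target] - addrs[i]
    return offsets

# Example: a 2-byte codeword ahead of a branch shifts the branch and its
# target, so the forward offset must be recomputed.
prog = [("codeword", 2, None), ("branch", 4, 3),
        ("normal", 4, None), ("normal", 4, None)]
print(patch_branches(prog))  # {1: 8}
```

Only the branch instructions themselves are rewritten, which is why leaving them uncompressed avoids the codeword-rewriting cascade described above.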
3.2.2 Branch targets in fixed-length instruction sets
Fixed-length instruction sets typically restrict branches to use targets that are aligned to
instruction word boundaries. Since our primary concern is code size, we trade off the performance
advantages of aligned fixed-length instructions in exchange for more compact code. We
use codewords that are smaller than instruction word boundaries and align them to the size of the
smallest codeword (4 bits in this study). Therefore, we need to specify a method to address branch
targets that do not fall at instruction word boundaries.
One solution is to pad the compressed program so that all branch targets are aligned as defined
by the original ISA. The obvious disadvantage of this solution is that it will decrease the compression
ratio.
A more complex solution (the one we have adopted for our experiments) is to modify the control
unit of the processor to treat the branch offsets as aligned to the size of the smallest codeword.
For example, if the size of a codeword is 8 bits, then a 32-bit aligned instruction set would have its
branch offset range reduced by a factor of 4. Table 1 shows that most branches in the benchmarks
do not use the entire range of their offset fields. The post-compilation compressor modifies all
branch offsets to use the alignment of the codewords. Branches requiring larger ranges are modified
to load their targets through jump tables. Of course, this will result in a slight increase in the
code size for these branch sequences.
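The offset reinterpretation can be illustrated with a small range check: counting codeword-sized units instead of 4-byte words divides the field's reach, so a few far branches no longer fit and must be routed through jump tables. The field width and units below are illustrative, not the PowerPC encoding:

```python
# Sketch: reinterpret a branch offset field in codeword-sized units.
# With a 32-bit ISA whose offsets count 4-byte words, switching to 1-byte
# codeword units divides the reachable range by 4 (the paper's example).

OFFSET_BITS = 16   # width of the signed offset field (illustrative)
UNIT_ORIG = 4      # original unit: 4-byte instruction words
UNIT_NEW = 1       # new unit: 1-byte codewords

def fits(byte_offset, unit, bits=OFFSET_BITS):
    # The target must be aligned to the unit and reachable by the field.
    units = byte_offset // unit
    return byte_offset % unit == 0 and -(1 << (bits - 1)) <= units < (1 << (bits - 1))

# A branch reaching 100 KB forward fits in word units but not byte units,
# so the compressor would reroute it through a jump table.
print(fits(100 * 1024, UNIT_ORIG))  # True
print(fits(100 * 1024, UNIT_NEW))   # False
```

Table 1's point is exactly that the second case is rare: most branch offsets have slack bits to spare.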
3.3 Compressed program processor
The general design for a compressed program processor is given in Figure 3. We assume that
all levels of the memory hierarchy will contain compressed instructions to conserve memory.
Since the compressed program may contain both compressed and uncompressed instructions,
there are two paths from the program memory to the processor core. Uncompressed instructions
proceed directly to the normal instruction decoder. Compressed instructions must first be translated
using the dictionary before being decoded and executed in the usual manner. The dictionary
could be loaded in a variety of ways. If the dictionary is small, one possibility is to place it in a
permanent on-chip memory. Alternatively, if the dictionary is larger, it might be kept as a data
segment of the compressed program and each dictionary entry could be loaded as needed.
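The two fetch paths can be sketched as a small software model; the escape value and the 4-byte instruction width are placeholders, not the actual PowerPC encoding:

```python
# Sketch of the two decode paths: compressed instructions are expanded
# through the dictionary; uncompressed ones pass straight through.

ESCAPE = 0xFF  # stands in for an illegal-opcode escape byte

def fetch_stream(program, dictionary):
    """Yield uncompressed 4-byte instructions from a mixed byte stream."""
    pc = 0
    while pc < len(program):
        if program[pc] == ESCAPE:              # 2-byte codeword: escape + index
            for insn in dictionary[program[pc + 1]]:
                yield insn
            pc += 2
        else:                                   # normal 4-byte instruction
            yield program[pc:pc + 4]
            pc += 4

dictionary = {0: [b"\x10\x00\x00\x01", b"\x10\x00\x00\x02"]}
prog = bytes([ESCAPE, 0]) + b"\x20\x00\x00\x03"
print(list(fetch_stream(prog, dictionary)))
```

The processor core sees only ordinary instructions; whether the dictionary lives in on-chip ROM or is loaded from a data segment changes only where `dictionary` comes from.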
4 Experiments
In this section we integrate our compression technique into the PowerPC instruction set. We
compiled the SPEC CINT95 benchmarks with GCC 2.7.2 using -O2 optimization. The optimizations
include common sub-expression elimination. They do not include function in-lining and
loop unrolling, since these optimizations tend to increase code size. Linking was done statically so
that the libraries are included in the results. All compressed program sizes include the overhead of
the dictionary.

Table 1: Usage of bits in branch offset field

                      Offsets too narrow for   Offsets too narrow for   Offsets too narrow for
Bench     Relative    2-byte resolution        1-byte resolution        4-bit resolution
          branches    Number    Percent        Number    Percent        Number    Percent
compress
li        4,806       0         0.00%
perl      14,578      15        0.10%          74        0.51%          191       1.31%
vortex

[Figure 3: Overview of compressed program processor — compressed program memory (usually ROM) feeds the CPU core; an uncompressed instruction stream passes directly, while compressed instructions are expanded through the dictionary.]
Recall that we are interested in the dictionary size (number of codewords) and dictionary
entry length (number of instructions at each dictionary entry).
4.1 Baseline compression method
In our baseline compression method, we use 2-byte codewords. The first byte is an escape
byte that has an illegal PowerPC opcode value. This allows us to distinguish between normal
instructions and compressed instructions. The second byte selects one of 256 dictionary entries.
Dictionary entries are limited to a length of 16 bytes (4 PowerPC instructions). PowerPC has 8
illegal 6-bit opcodes. By using all 8 illegal opcodes and all possible patterns of the remaining 2
bits in the byte, we can have up to 32 different escape bytes. Combining this with the second byte
of the codeword, we can specify up to 8192 different codewords. Since compressed instructions
use only illegal opcodes, any processor designed to execute programs compressed with the baseline
method will be able to execute the original programs as well.
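The codeword-space arithmetic above can be checked directly; this sketch just reproduces the stated counts:

```python
# Baseline codeword space: 8 illegal 6-bit PowerPC opcodes, each combined
# with the 4 settings of the remaining 2 bits of the byte, give 32 escape
# bytes; the second byte then selects one of 256 dictionary entries.
illegal_opcodes = 8
tail_patterns = 2 ** 2              # remaining 2 bits of the escape byte
escape_bytes = illegal_opcodes * tail_patterns
codewords = escape_bytes * 256      # second byte indexes the dictionary
print(escape_bytes, codewords)      # 32 8192
```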
Our first experiments vary the parameters of the baseline method. Figure 4 shows the
effect of varying the dictionary entry length. Interestingly, when dictionary entries are allowed to
contain 8 instructions, the overall compression begins to decline. This can be attributed to our
greedy selection algorithm for generating the dictionary. Selecting large dictionary entries
removes some opportunities for the formation of smaller entries. The large entries are chosen
because they result in an immediate reduction in the program size. However, this does not guarantee
that they are the best entries to use for achieving good compression. When a large sequence is
replaced, it destroys the small sequences that partially overlapped with it. It may be that the savings
of using the multiple smaller sequences would be greater than the savings of the single large
sequence. However, our greedy algorithm does not detect this case and some potential savings is
lost. In general, dictionary entry sizes above 4 instructions do not improve compression noticeably.
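The greedy selection just described can be sketched concretely: repeatedly commit the candidate sequence with the largest immediate byte savings, replace its occurrences, and re-scan. The sketch below is an illustrative reconstruction, not the authors' exact tool; it uses the paper's 4-byte instructions and 2-byte codewords, and it exhibits the overlap problem discussed above, since committing a long entry destroys the overlapping shorter candidates.

```python
# Sketch of greedy dictionary selection over a toy instruction stream.
from collections import Counter

INSN, CODEWORD = 4, 2  # bytes per instruction / per codeword

def count_candidates(prog, max_len):
    c = Counter()
    for n in range(1, max_len + 1):
        for i in range(len(prog) - n + 1):
            seq = prog[i:i + n]
            if any(isinstance(x, tuple) for x in seq):
                continue  # skip windows containing already-placed codewords
            c[tuple(seq)] += 1
    return c

def greedy_dictionary(prog, max_len=4, max_entries=256):
    entries = []
    while len(entries) < max_entries:
        cands = count_candidates(prog, max_len)
        if not cands:
            break
        # Immediate savings = occurrences * (sequence bytes - codeword bytes).
        seq, save = max(((s, n * (len(s) * INSN - CODEWORD))
                         for s, n in cands.items()), key=lambda x: x[1])
        if save <= 0 or cands[seq] < 2:
            break
        entries.append(seq)
        out, i = [], 0             # replace every occurrence with a codeword
        while i < len(prog):
            if tuple(prog[i:i + len(seq)]) == seq:
                out.append(("cw", len(entries) - 1))
                i += len(seq)
            else:
                out.append(prog[i])
                i += 1
        prog = out
    return entries, prog

entries, compressed = greedy_dictionary(["a", "b", "a", "b", "c", "a", "b"])
print(entries)      # [('a', 'b')]
```

Here the pair ("a", "b") wins over the longer one-off sequences because its three occurrences yield the largest immediate savings.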
Figure 5 illustrates what happens when the number of codewords (entries in the dictionary)
increases. The compression ratio for each program continues to improve until the maximum number
of codewords is reached, after which only unique, single-use encodings remain uncompressed.
Table 2 lists the maximum number of codewords for each program under the baseline compression
method, representing an upper bound on the size of the dictionary.
[Figure 4: Effect of dictionary entry size on compression ratio — compression ratio (0%-100%) per benchmark (compress, gcc, go, ijpeg, li, ...) as the maximum number of instructions in each dictionary entry is varied (1, 3, 5, 7).]
The benchmarks contain numerous instructions that occur only a few times. As the dictionary
becomes large, there are more codewords available to replace the numerous instruction
encodings that occur infrequently. The savings from compressing an individual instruction are tiny,
but multiplied over the length of the program they become noticeable. To achieve
good compression, it is more important to increase the number of codewords in the dictionary
than to increase the length of the dictionary entries. A few thousand codewords are enough for
most SPEC CINT95 programs.
4.1.1 Usage of the dictionary
Since the usage of the dictionary is similar across all the benchmarks, we show results
using ijpeg as a representative benchmark. We extend the baseline compression method to use
dictionary entries with up to 8 instructions. Figure 6 shows the composition of the dictionary by
the number of instructions the dictionary entries contain. The number of dictionary entries with
only a single instruction ranges between 48% and 80%. Not surprisingly, the larger the dictionary,
the higher the proportion of short dictionary entries. Figure 7 shows which dictionary entries contribute
the most to compression. Dictionary entries with 1 instruction achieve between 48% and
60% of the compression savings. The short entries contribute to a larger portion of the savings as
the size of the dictionary increases. The compression method in [Liao96] cannot take advantage
of this, since its codewords are the size of single instructions, so single instructions are not compressed.

Table 2: Maximum number of codewords used in baseline compression (max. dictionary entry

Bench      Maximum Number of Codewords Used
compress   647
go         3123
ijpeg      2107
li         1104
perl       2970
vortex     3545

[Figure 5: Effect of number of codewords on compression ratio — compression ratio (0%-100%) for each benchmark as the number of codewords varies.]

4.1.2 Compression using small dictionaries
Some implementations of a compressed code processor may be constrained to use small
dictionaries. We investigated compression with dictionaries ranging from 128 bytes to 512 bytes in
size. We present one compression scheme to demonstrate that compression can be beneficial even
for small dictionaries. Our compression scheme for small dictionaries uses 1-byte codewords and
dictionary entries of up to 4 instructions in size. Figure 8 shows results for dictionaries with 8, 16,
and 32 entries. On average, a dictionary size of 512 bytes is sufficient to get a code reduction of
15%.
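The quoted dictionary sizes follow directly from the 16-byte entry limit (4 PowerPC instructions per entry):

```python
# Dictionary storage for the small-dictionary scheme: each entry holds up
# to 4 PowerPC instructions (16 bytes), so 8/16/32 entries need
# 128/256/512 bytes of dictionary memory.
ENTRY_BYTES = 4 * 4
for entries in (8, 16, 32):
    print(entries, entries * ENTRY_BYTES)
```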
4.1.3 Variable-length codewords
In the baseline method, we used 2-byte codewords. We can improve our compression ratio by
using smaller encodings for the codewords. Figure 9 shows that when the baseline compression
uses 8192 codewords, 40% of the compressed program bytes are codewords. Since the baseline
compression uses 2-byte codewords, this means 20% of the final compressed program size is due
to escape bytes. We investigated several compression schemes using variable-length codewords
[Figure 6: Composition of dictionary for ijpeg — percentage of dictionary entries by length (number of instructions), for several dictionary sizes (number of entries).]

[Figure 7: Bytes saved in compression of ijpeg according to instruction length of dictionary entry (number of instructions) — program bytes removed due to compression, broken down by entry length (2, 4, 6, 8 instructions) and dictionary size.]
aligned to 4-bits (nibbles). Although there is a higher decode penalty for using variable-length
codewords, we are able to achieve better compression. By restricting the codewords to integer
multiples of 4 bits, we give the decoding process a regularity that the 1-bit-aligned Huffman
encoding in [Wolfe94] lacks.
Our choice of encoding is based on SPEC CINT95 benchmarks. We present only the best
encoding choice we have discovered. We use codewords that are 4-bits, 8-bits, 12-bits, and 16-bits
in length. Other programs may benefit from different encodings. For example, if many codewords
are not necessary for good compression, then more 4-bit and 8-bit codewords could be used to
further reduce the codeword overhead.
A diagram of the nibble aligned encoding is shown in Figure 10. This scheme is predicated on
the observation that when an unlimited number of codewords are used, the final compressed program
size is dominated by codeword bytes. Therefore, we use the escape code to indicate the (less
frequent) uncompressed instructions rather than the codewords. The first 4 bits of the codeword
determine the length of the codeword. With this scheme, we can provide 128 8-bit codewords, and
a few thousand 12-bit and 16-bit codewords. This offers the flexibility of having many short codewords
(thus minimizing the impact of the frequently used instructions), while allowing for a large
overall number of codewords. One nibble is reserved as an escape code for uncompressed instructions.

[Figure 8: Compression ratio for 1-byte codewords with up to 4 instructions/entry — compression ratio (0%-100%) for each benchmark at several numbers of codewords.]

[Figure 9: Composition of compressed program (8192 2-byte codewords, 4 instructions/entry) — share of compressed program size due to the dictionary, codeword escape bytes, codeword index bytes, and uncompressed instructions.]

We reduce the codeword overhead by encoding the most frequent sequences of instructions
with the shortest codewords.
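A decoder for such nibble-aligned codewords can be sketched. The specific prefix-to-length allocation below is an assumption chosen to be consistent with the counts in Figure 10: 8 prefixes x 16 values = 128 8-bit codewords, 6 x 256 = 1536 12-bit, 1 x 4096 16-bit, and one escape prefix introducing a 36-bit uncompressed instruction.

```python
# Sketch of nibble-aligned codeword decoding; prefix allocation is assumed.

def decode_one(nibbles, pos):
    """Return (kind, value, next_pos) for the codeword starting at pos."""
    p = nibbles[pos]
    if p <= 7:                       # 8-bit codeword: prefix + 1 nibble
        return "cw8", (p << 4) | nibbles[pos + 1], pos + 2
    if p <= 13:                      # 12-bit codeword: prefix + 2 nibbles
        idx = ((p - 8) << 8) | (nibbles[pos + 1] << 4) | nibbles[pos + 2]
        return "cw12", idx, pos + 3
    if p == 14:                      # 16-bit codeword: prefix + 3 nibbles
        idx = (nibbles[pos + 1] << 8) | (nibbles[pos + 2] << 4) | nibbles[pos + 3]
        return "cw16", idx, pos + 4
    # p == 15: escape, followed by a full 32-bit (8-nibble) instruction
    insn = 0
    for n in nibbles[pos + 1:pos + 9]:
        insn = (insn << 4) | n
    return "raw", insn, pos + 9

print(decode_one([0x3, 0xA], 0))         # an 8-bit codeword
print(decode_one([0xF] + [0x1] * 8, 0))  # an escaped 32-bit instruction
```

The first nibble alone determines the codeword length, which is the regularity that makes this cheaper to decode than bit-aligned Huffman codes.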
Using this encoding technique effectively redefines the entire instruction set encoding, so this
method of compression can be used even in existing instruction sets that have no available escape
bytes. Unfortunately, this also means that the original programs will no longer execute unmodified
on processors that execute compressed programs without mode switching.
Our results for the 4-bit aligned compression are presented in Figure 11. We obtain a code
reduction of between 30% and 50% depending on the benchmark. For comparison, we extracted
the instruction bytes from the benchmarks and compressed them with Unix Compress. Compress
uses an adaptive dictionary technique (based on Ziv-Lempel coding) which can modify the dictionary
in response to changes in the characteristics of the text. In addition, it also uses Huffman
encoding on its codewords, and thus should be able to achieve better compression than our
method. Figure 11 shows that Compress does indeed do better, but our compression ratio is still
within 5% for all benchmarks.
[Figure 10: Nibble Aligned Encoding — instruction formats: 128 8-bit codewords, 1536 12-bit codewords, 4096 16-bit codewords, and a 36-bit uncompressed instruction.]

[Figure 11: Comparison of nibble aligned compression with Unix Compress — compression ratio per benchmark for nibble-aligned codewords versus Unix Compress.]
5 Conclusions and Future Work
We have proposed a method of compressing programs for embedded microprocessors where
program size is limited. Our approach combines elements of two previous proposals. First we use
a dictionary compression method (as in [Liao96]) that allows codewords to expand to several
instructions. Second, we allow the codewords to be smaller than a single instruction (as in
[Wolfe94]). We find that the size of the dictionary is the single most important parameter in attaining
a better compression ratio. The second most important factor is reducing the codeword size
below the size of a single instruction. We find that much of our savings comes from compressing
patterns of single instructions. Our most aggressive compression for SPEC CINT95 achieves a
30% to 50% code reduction.
Our compression ratio is similar to that achieved by Thumb and MIPS16. While Thumb and
MIPS16 designed a completely new instruction set, compiler, and instruction decoder, we
achieved our results only by processing compiled object code and slightly modifying the instruction
fetch mechanism.
There are several ways that our compression method can be improved. First, the compiler
could attempt to produce instructions with similar byte sequences so they could be more easily
compressed. One way to accomplish this is by allocating registers so that common sequences of
instructions use the same registers. Another way is to generate more generalized, standardized code
sequences. These would be less efficient, but would be semantically correct in a larger variety of
circumstances. For example, in most optimizing compilers, the function prologue sequence might
save only those registers which are modified within the body of the function. If the prologue
sequence were standardized to always save all registers, then all instructions of the sequence
could be compressed to a single codeword. This space saving optimization would decrease code
size at the expense of execution time. Table 3 shows that the prologue and epilogue combined typically
account for 12% of the program size, so this type of compression would provide significant
size reduction.
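A back-of-envelope model shows why standardized prologues are attractive: every k-instruction prologue collapses to a single codeword. All numbers below are illustrative, not measurements from the paper:

```python
# Savings estimate for standardizing function prologues so each compresses
# to one 2-byte codeword. Inputs are hypothetical.
def prologue_savings(program_bytes, prologue_frac, k_insns,
                     insn_bytes=4, cw_bytes=2):
    prologue_bytes = program_bytes * prologue_frac
    funcs = prologue_bytes / (k_insns * insn_bytes)   # implied function count
    return prologue_bytes - funcs * cw_bytes          # bytes saved

# A 1 MB program whose prologues are 6% of its size (Table 3 order of
# magnitude), with 8-instruction prologues:
saved = prologue_savings(1 << 20, 0.06, 8)
print(round(saved))
```

Nearly the entire prologue footprint is recovered, minus one codeword per function, which is why the 12% combined prologue/epilogue share in Table 3 is a meaningful target.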
We also plan to explore the performance aspects of our compression and examine the trade-offs
in partitioning the on-chip memory for the dictionary and program.
Table 3: Prologue and epilogue code in benchmarks

Bench      Static prologue instructions     Static epilogue instructions
           (percentage of entire program)   (percentage of entire program)
compress   5.3%                             6.2%
gcc        4.2%                             4.9%
go         6.2%                             6.8%
ijpeg      6.9%                             9.4%
li         8.1%                             9.9%
perl       3.7%                             4.3%
vortex     6.3%                             7.1%
6 References
--R
Compiler: Principles
Advanced RISC Machines Ltd.
An Instruction Stream Compression Technique
The Impact of Instruction Compression on I-cache Performance
Enhancing Instruction Fetching Mechanism Using Data Com- pression
High-density MIPS for the Embedded Market
Code Generation and Optimization for Embedded Digital Signal Processors
"Thumb Squeezes ARM Code Size"
Studies of Windows NT performance using dynamic execution traces
"NP-completeness results concerning data compression,"
"Assembling code for machines with span-dependent instructions"
Executing Compressed Programs on an Embedded RISC Architecture
Compression of Embedded System Programs
rpd Beszdes , Rudolf Ferenc , Tibor Gyimthy , Andr Dolenc , Konsta Karsisto, Survey of code-size reduction methods, ACM Computing Surveys (CSUR), v.35 n.3, p.223-267, September | embedded systems;compression;code density;Code Space Optimization |
266820 | Procedure based program compression. | Cost and power consumption are two of the most important design factors for many embedded systems, particularly consumer devices. Products such as personal digital assistants, pagers with integrated data services and smart phones have fixed performance requirements but unlimited appetites for reduced cost and increased battery life. Program compression is one technique that can be used to attack both of these problems. Compressed programs require less memory, thus reducing the cost of both direct materials and manufacturing. Furthermore, by relying on compressed memory, the total number of memory references is reduced. This reduction saves power by lowering the traffic on high-capacitance buses. This paper discusses a new approach to implementing transparent program compression that requires little or no hardware support. Procedures are compressed individually, and a directory structure is used to bind them together at run-time. Decompressed procedures are explicitly cached in ordinary RAM as complete units, thus resolving references within each procedure. This approach has been evaluated on a set of 25 embedded multimedia and communications applications, and results in an average memory reduction of 40% with a run-time performance overhead of 10%. | Introduction
We will present a technique for saving power and
reducing cost in embedded systems. We are concerned
primarily with data-rich consumer devices used for
computation and communications, e.g. the so-called
information appliance. Currently this product category
includes devices such as simple Personal Digital
Assistants, pagers and cell phones. In the future, there
is an emerging industry vision of ubiquitous multimedia
devices and the Java appliance. These products have
extremely tight constraints on component cost. It is not
at all uncommon for memory to be one of the most
expensive components in these products, thus providing
the need to reduce the size of stored programs. A
second important design goal is low power
consumption. Each of these products is battery
powered, and reduced power consumption can be
directly translated into extended battery life. Battery
life is often the most important factor for this product
class: once a pager or cellular phone is functionally
verified, battery life is one of the few effective
techniques for product differentiation. For a large
number of embedded systems, the power used to access
memory on the processor bus is the dominant factor in
the system power consumption.
Unlike desktop computing, performance often is not
a primary factor for these devices. While information
appliances are not the classic forms of mission-critical
computing, they tend to have important real-time
aspects. For example, a certain amount of processor
performance is necessary to decode a paging message;
there is little benefit in providing more.
Faced with these factors, we have decided to
investigate the benefits of storing programs in a
compressed form. The compressed programs may
reside in any type of memory, often depending on
whether the system supports software field upgrades.
The basic approach is to store the program image in a
compressed form, and dynamically decompress it on
demand. An effective compression scheme would
reduce the amount of system memory required for
various applications, thus saving cost, board space and
some static power consumption. An additional benefit
involves significantly reduced power consumption due
to dynamic memory references. While we believe that
an effective compression scheme will reduce power
consumption, we cannot currently provide any direct
evidence of the relationship.
1.1 Previous Approaches
The relevant previous work can be divided into four
groups: whole program transformation, cache-based
block decompression, dictionary schemes and highly
encoded instruction set architectures.
The most direct technique for compressing
programs involves explicit compression and
decompression of the complete program. A portion of
RAM is dedicated for a decompressed buffer, and
programs are expanded from their compressed form into
this RAM prior to execution. This approach has been
applied to file systems [1] for saving disk space.
Virtual memory and explicit file caches are usually
effective at reducing the impact of the decompression
algorithms on latency. Unfortunately, this approach is
not well suited to embedded systems. Some
information appliances have only a single application,
for example a data-rich pager with a vertical-market
application such as the Motorola SportsTrax news
device. Whole program decompression for such
systems would result in RAM size exceeding ROM size,
which increases both cost and power consumption.
Wolfe, Chanin and Kozuch presented a scheme for
block based decompression in response to dynamic
demands [2, 3]. Their goal was to improve code
density of general-purpose processor architectures.
They considered a number of compression algorithms,
and concluded that a Huffman code [4], which reduced
code size to 74% of the original, was most effective.
Programs are decompressed as they are brought into the
instruction cache, and thus the compression is
transparent to the executing application. One problem
that Wolfe et al. identified involves translating memory
addresses between the program space (i.e.
decompressed) and the compressed program in the
backing store. For example, if the program does a PC
relative jump, which hits in the cache, the ordinary
cache hardware will resolve the reference. However, in
the case of a cache miss the refill hardware must
determine where in the compressed space the target is
stored. This problem requires a set of jump tables to
patch references from the program space to the
compressed space. A link time tool can be used to
automatically generate the necessary jump tables.
Liao, Devadas and Keutzer developed a dictionary
approach to reduce code size for DSP processors [5].
A correlation is done across all basic blocks in a
program after compilation but before linking. The
purpose of this correlation is to identify common
instruction sequences that exceed some minimal length.
These sequences are written into a dictionary, and each
of the original occurrences is replaced with a mini-
subroutine call (sic), i.e. a procedure call with no
arguments. The application code is a skeleton of
procedure calls and bits of uncommon code sequences.
This approach can be implemented with no hardware
support, and results were presented which indicated that
it might achieve code size reductions of approximately
12%. With a minor modification of the hardware an
additional 4% code size reduction can also be achieved.
While this approach will result in an extremely high rate
of procedure calls, there is no discussion of the impact
of these calls on performance. This issue could be
particularly significant in the face of the tight code
scheduling constraints of their target machine, the Texas
Instruments TMS320C25 DSP.
Ernst et al. investigated the use of byte-coded
machine language [6] for compression. This approach
hearkens back to the original goal of tightly encoded
ISA formats for CISC processors, and much of their
work is focused on minimizing the impact on
performance.
2. The Procedure Cache
While Wolfe et al. have developed an effective
technique for transparent code compression, their
approach has two specific features which may prove
disadvantageous. First, since the compression process
is transparent to the supervisor code as well as the
application, the entire decompression and translation
process must be implemented in hardware. While
dedicated decompression hardware has the benefit of
low overhead, there is no option to use the approach on
stock hardware. We are interested in schemes that can
leverage existing hardware, with the option (but not
necessity) of hardware accelerators. Secondly, the
mapping problem between compressed space and
program space complicates the hardware as well as the
linking process.
Figure 1: Embedded system architecture for the pcache system. (Components shown: processor core, pcache, application directory, ROM holding the compression utilities and compression tables, and an optional hardware accelerator.)
In place of dedicated hardware that transparently
decompresses code blocks, we propose using demand-driven
decompression triggered by procedure
invocations. Procedures are decompressed as atomic
units, and are stored into a dedicated region of RAM
that is explicitly managed by the runtime system (Figure
1). This approach efficiently solves the problem of
address mapping for all references that are contained
within one procedure (e.g. loops and conditional code),
and experience shows that these are the most common
forms of branching. The remaining inter-procedure and
global references must be resolved through the use of a
directory service. We call this software cache the
Procedure Cache (pcache).
The pcache should be able to store any procedure
that is small enough to fit within it. As a result of this
goal the pcache algorithms must manage variable size
objects which may not be aligned on a boundary that is
convenient for addressing, unlike conventional
hardware caches which manage fixed size lines and
blocks. The issue of maximum procedure size is
problematic, and while a number of solutions present
themselves we have not yet decided upon a
recommended path.
2.1 Runtime Binding
Procedure calls are bound together at runtime by
consulting a directory service. A linking tool translates
each call into a request through a unique identifier. The
directory service looks up the location of the call target
and activates the target with the proper linkage needed
for the return operation. A table stored with each
program is used to translate procedure identifiers into
addresses in the compressed memory. As was the case
with the scheme from Wolfe et al., this table must be
generated when the program is linked.
The process used for runtime binding can be broken
down into the following stages:
1. Source invokes directory service with unique
identifier of the target procedure
2. If the target is in the pcache go to step 9
3. Find the target address in compressed
memory by consulting the directory service
4. If enough contiguous free space exists in the
pcache for the target go to step 8
5. If enough fragmented free space exists in the
pcache for the target go to step 7
6. Mark procedures for eviction until enough
space is available
7. Coalesce fragmented space into contiguous
block
8. Decompress target procedure into assigned
pcache location
9. Patch the state to allow the target to return to
the caller
10. Invoke the target procedure
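The binding sequence above can be sketched in simulation form. The class below is an illustrative model only: procedure bodies are represented by their decompressed sizes, the directory is a dict, eviction proceeds in LRU order, and compaction reduces to a free-byte budget. None of this is the actual runtime implementation.

```python
# Illustrative model of the runtime binding sequence (steps 1-10).
class PcacheModel:
    def __init__(self, capacity):
        self.capacity = capacity   # total pcache bytes
        self.resident = {}         # procedure id -> decompressed size
        self.lru = []              # least recently used first

    def free_space(self):
        return self.capacity - sum(self.resident.values())

    def invoke(self, proc_id, directory, sizes):
        """Return 'hit' or 'miss' for one procedure activation."""
        if proc_id in self.resident:           # step 2: already resident
            self._touch(proc_id)
            return "hit"
        assert proc_id in directory            # step 3: locate compressed image
        size = sizes[proc_id]
        assert size <= self.capacity           # oversized procedures excluded
        while self.free_space() < size:        # steps 5-6: evict until it fits
            victim = self.lru.pop(0)
            del self.resident[victim]
        # step 7 (compaction) is a no-op here: free space is just a budget
        self.resident[proc_id] = size          # step 8: decompress into pcache
        self._touch(proc_id)                   # steps 9-10: patch and invoke
        return "miss"

    def _touch(self, proc_id):
        if proc_id in self.lru:
            self.lru.remove(proc_id)
        self.lru.append(proc_id)
```

For example, a 100-byte pcache holding one 60-byte procedure must evict it before admitting a second 60-byte procedure.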
The traditional execution environment binds one
procedure to another through a call instruction, which
typically executes one memory reference and updates
the program counter. Two call instructions are used in
the process above, one at step 1 and one at step 10, and we write c for the cost of one such call. Let c_l represent the time required to look up the target procedure identifier in the directory service data structure in step 2. Let c_m represent the time required for management, which should be relatively stable at steady state, and let d_t represent the time required to decompress procedure t. The pcache hit rate is represented by h, and is significant in the calculation of the expected case performance.

The worst-case execution time involves compacting the free space and identifying procedures to replace, followed by the time required to decompress the target procedure. The worst case call time is 2c + c_l + c_m + d_t. In the expected case, i.e. that of a cache hit, the call time is 2c + c_l. In this case it is clearly important to increase the hit rate. However, in the limiting case where the hit rate is high, the directory scheme still imposes a cost of c_l on every procedure invocation.
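The expected call time is then a hit-rate-weighted mix of the hit and miss paths. The sketch below restates this accounting, writing c for the cost of a call instruction, c_l for the directory lookup, c_m for management and d_t for decompression of procedure t; treat the weighting as our reading of the model rather than the paper's exact formula.

```python
def expected_call_time(h, c, c_l, c_m, d_t):
    """Expected pcache call time: a hit costs the two call instructions
    plus the directory lookup (2c + c_l); a miss additionally pays the
    management and decompression costs (c_m + d_t)."""
    hit_time = 2 * c + c_l
    miss_time = hit_time + c_m + d_t
    return h * hit_time + (1 - h) * miss_time
```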
A better approach is to cache the address of the
target procedure at the call site, and then avoid the
directory service overhead for subsequent calls to the
target. A similar approach has been used in high
performance object-oriented runtimes systems to
speculatively bind method invocations to typed methods
[7]. The runtime directory binds call sites to targets
once and then patches the call site. Subsequent
invocations jump straight to the target, and then test the
runtime type information to determine if the processor
ended up at the right location. If the test succeeds the
execution continues and if it fails the directory service
is consulted. This approach works if the cached target
address is guaranteed to be the start of a valid code
sequence, which is difficult to guarantee when code
blocks move within the address space after they are
loaded. Such an approach would not work for the
pcache because cache replacement and compaction
make alignment restrictions prohibitively expensive.
Procedures need not be aligned on any standard
boundary, and thus the risk exists that a jump would end
up in the middle of a procedure or in free space.
An alternate approach, which we are advocating, is
to test the validity of the cached target address at the
call site. This scheme involves more test operations in
the pcache than does testing at the destination, since
each procedure has a single entry point but may call
multiple targets. In order to test at the call site, each
procedure must have a prologue that contains the
procedure identifier. Conceptually, the call site loads
the word that precedes the cached target address,
compares it to the target it wishes to invoke, and jumps
to the cached address on a match. This sequence
changes the best case invocation sequence from two
jumps and the directory service lookup to one load, a
test and a conditional jump. For a pcache with a large
number of procedures, and thus an expensive directory
service lookup, the cached target should result in a
performance improvement over the pure directory
scheme.
There is the possibility that some procedure will
have an identifier that corresponds to a legal code
sequence, introducing the danger of a false positive test.
This problem is avoided by introducing a tag byte that
identifies the word as a procedure identifier. The
specific tag byte to use depends on the processor
architecture, as it must correspond to an illegal opcode
and be placed at the proper word alignment.
Unfortunately, this approach is not foolproof for
processors with variable length instructions, such as the
Intel x86 and the 68k. In these cases it is not possible
to guarantee that the target identifier will not match
some code sequence or embedded data, though the
likelihood of this event can be reduced.
Tagging procedure identifiers also makes it easy to
implement a reference scheme in order to approximate
LRU data for pcache management. The runtime system
will periodically clear each of the tag bytes, thus forcing
the cached targets to fail and invoking the directory
service. For machines with 32-bit words the use of tag
bytes does reduce the procedure identifier space to 24
bits, but we feel that this range will prove more than
sufficient for the needs of embedded systems.
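Conceptually, the call-site test amounts to one load, a compare against the tagged identifier, and a conditional jump. The sketch below models memory as a list of 32-bit words with the tag byte in the top 8 bits above a 24-bit identifier; the tag value 0xFF and the layout are illustrative assumptions, not a real processor encoding.

```python
TAG = 0xFF  # stands in for an illegal-opcode byte on the target processor

def call_through_cache(memory, cached_addr, target_id, directory_lookup):
    """Call-site test: one load, a compare, and a jump on a match;
    otherwise fall back to the directory service."""
    prologue = memory[cached_addr - 1]       # word preceding the target
    if prologue == (TAG << 24) | target_id:
        return cached_addr                   # fast path: cached target valid
    return directory_lookup(target_id)       # slow path: directory service

def clear_tags(memory, prologue_addrs):
    """Periodically clearing tag bytes forces the fast path to fail once
    per procedure, letting the directory approximate LRU reference data."""
    for a in prologue_addrs:
        memory[a] &= 0x00FFFFFF
```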
2.2 Procedure Returns
Return instructions are a bit complicated because the
traditional code sequence cannot explicitly name the
destination of the return operation. The pcache runtime
system solves this problem by storing three pieces of
data: the source procedure identifier, the predicted
address of the start of the return procedure (that is the
address that the caller had at the time it invoked the
active procedure), and the offset of the call site from the
start of the source procedure. The regular return
procedure is then replaced by a test of the predicted
prologue for the source, and a jump to that address plus
displacement in the event of success. A failure causes
the directory service to lookup (and possibly reload) the
destination of the return operation and then do a jump to
the address plus displacement.
2.3 Replacement Algorithms
The task of allocating space in the pcache for a new
procedure involves a two step process. First, the pcache
is searched for a free block that is large enough to
satisfy the new demand. We have experimented with
both best fit and first-fit for this stage, and have found
that both approaches produce similar results. All of the
results presented below will be with respect to first-fit,
because of the simplified implementation.
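A minimal first-fit search over the pcache free list might look like the following; the [offset, length] block representation is an assumption made for illustration.

```python
def first_fit(free_list, size):
    """Return the offset of the first free block that can hold `size`
    bytes, shrinking that block in place, or None if no single block is
    large enough. `free_list` holds [offset, length] pairs in address
    order."""
    for block in free_list:
        offset, length = block
        if length >= size:
            block[0] += size        # allocate from the front of the block
            block[1] -= size
            if block[1] == 0:
                free_list.remove(block)
            return offset
    return None
```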
If there is enough free space available, but not in
any single block, then the runtime system must invoke
the pcache compactor. The pcache is compacted by
moving all of the live procedures to the start of the
pcache and all of the free fragments to the end of the
pcache.
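The compactor can be sketched as a single pass that slides live procedures downward; the mapping of procedure id to [offset, size] is an illustrative representation, and the returned byte count matches the cost model used later (every byte copied during compaction is charged).

```python
def compact(procs, pcache_size):
    """Slide live procedures to the start of the pcache so that all free
    space coalesces into one block at the end. `procs` maps procedure
    id -> [offset, size]; returns (bytes_moved, start_of_free_block)."""
    cursor = 0
    moved = 0
    for pid, (offset, size) in sorted(procs.items(), key=lambda kv: kv[1][0]):
        if offset != cursor:
            moved += size           # cost model: every byte copied downward
            procs[pid] = [cursor, size]
        cursor += size
    assert cursor <= pcache_size
    return moved, cursor            # free space occupies [cursor, pcache_size)
```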
In the event that the runtime system does not find
enough free space, it must invoke a replacement
algorithm to identify procedures that should be evicted
from the pcache. We have experimented with two
algorithms for replacement: least recently used (LRU)
and Neighbor. With LRU the runtime system scans an
LRU list and marks each procedure in sequence for
eviction, until enough space has been freed. Once this
is accomplished, the compactor is invoked to coalesce
the free space into a single block large enough for the
new procedure. While LRU is easy to implement and
conceptually simple, it will tend to cause a significant
amount of memory traffic in the pcache to coalesce the
fragmented free space. For example, consider the case
where together the two LRU procedures have enough
space to satisfy a new request but happen to reside in
the first and third quadrant of the pcache. The
subsequent compacting stage will need to move
approximately half of the pcache data in order to
combine the space freed up by evicting these two
procedures. It may be the case that the third LRU
procedure could be combined with one of the first two
and resides in an adjoining region of memory. In this
case, the compacting procedure is trivial, since no
intervening procedures need to be moved in the pcache.
While the primary benefit of such an optimization is
reducing the data movement in the pcache, a secondary
benefit is avoiding subsequent tag misses on the moved
procedures, and the resulting lookup events at the
directory service.
We have experimented with two schemes to reduce
the amount of data movement within the pcache in
response to pcache misses. The first scheme is called
Neighbor, and involves looking for sets of adjacent
procedures that are good candidates for eviction. Each
set of procedures is evaluated on the basis of the sum of
squares of the LRU values, where the least recently
used procedure has an LRU value of 1 and all other
increase sequentially. This approach is biased toward
avoiding those procedures that were used recently,
though it does not explicitly exclude them from
consideration. Neighbor scans the pcache for adjacent
blocks of memory that are large enough for the new
request, and considers both free and occupied space.
The algorithm then selects the set of blocks that have
the lowest sum of squares for the LRU values. A more
general form of Neighbor involves evaluating a function
F(S) for each set of neighbors, where F() is some
arbitrary function. The advantage of using a sum of
squares is that it is easy to evaluate at runtime and it
provides a strong bias against selecting the most
recently used procedures.
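The Neighbor scan can be sketched as follows over an address-ordered list of blocks, where free space contributes zero cost; the (size, lru) pair representation is an assumption for illustration.

```python
def neighbor_select(blocks, request):
    """Neighbor replacement sketch. `blocks` is an address-ordered list
    of (size, lru) pairs, where lru is 0 for free space and counts up
    from 1 for the least recently used procedure. Returns the (start,
    end) indices of the contiguous run covering at least `request` units
    with the smallest sum of squared LRU values, or None."""
    best, best_cost = None, None
    for i in range(len(blocks)):
        total, cost = 0, 0
        for j in range(i, len(blocks)):
            size, lru = blocks[j]
            total += size
            cost += lru * lru
            if total >= request:
                if best_cost is None or cost < best_cost:
                    best, best_cost = (i, j), cost
                break            # growing the run further only adds cost
    return best
```

With a layout like Figure 4 (procedures 12, 17, 23, 35, 87 with LRU values 4, 1, 3, 12, 2, and assumed sizes of 3, 1, 1, 1, 1 units), a three-unit request selects the run containing only procedure 12 at cost 16, since any other long-enough run reaches through procedure 35 and pays its cost of 144.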
A modification of this approach is to use a limiting
term to cap the value for each set of adjacent blocks,
and exclude from consideration a set that exceeds this
limit. The goal of this approach is to make it
impossible to remove the most frequently used
procedures, even in those cases where one of these
procedures would otherwise be selected, e.g. when it
neighbors a large procedure which is the least
frequently used.
3. Experiments
3.1 Approach
Trace driven simulation is used to evaluate the
effectiveness of caching whole procedures. The pcache
simulator must know when each procedure is activated,
either through a directed call, the result of a return
operation or an asynchronous transfer (e.g. exception or
UNIX longjmp). The traces are collected with a special
augmented version of Lsim from the IMPACT
compilation system [8]. This tool allows us to
dynamically generate a large set of activation events,
with support for sophisticated trace sampling.
3.2 Applications
There currently exists a significant void with regards
to effective benchmarks for embedded systems. While
a number of industrial and academic efforts have been
proposed, to date there has been little progress towards
a suite of representative programs and workloads. One
part of the problem is that the field of embedded
systems covers an extremely wide range of computing
systems. It is difficult to imagine a benchmark suite
that would reveal useful information to the designers of
machines and cellular phones, because of the
drastically different uses for these products. While
there is some hope for emerging unification in the area
of information appliances, because of the more cohesive
focus to the devices, there currently are no options to
choose from. This unfortunate state of affairs is best
reflected by the continued use of the Dhrystone
benchmark, and the derivative metric Dhrystones per
milli-Watt.
For the purposes of this paper we have adopted a
number of programs from the MediaBench benchmark
suite [9]. Six additional programs have been selected,
five from the SPEC95 benchmark set along with the
Backwater basic interpreter.
Figure
2 and Figure 3 show the cumulative
distribution functions (CDF) of procedure sizes for
bwbasic and go, which represent the typical distribution
and worst case (widest spread) respectively. Data is
presented for both the static program image in memory,
as well as the dynamic distribution seen during
execution. This data suggests that a modest size pcache
will often succeed in capturing the working set. In
general the dynamic data exhibits a slightly slower
growth than the static data and show sharper breaks;
both phenomena are a result of the skewed distribution
of call frequency among the procedures.

Figure 2: Procedure size distribution for bwbasic (static and dynamic CDFs)
3.3 Pcache Miss Rates
Table
1 shows the raw miss rates for LRU
replacement, while Table 2 presents the same data for
when Neighbor is used for replacement. Miss rates are
calculated by counting each reference generated by the
program, regardless of whether the procedure is actually
cacheable (i.e. smaller than the simulated pcache size).
The significance of low cache miss rates is much more
difficult to evaluate than for a traditional hardware
cache with fixed size objects. For example, there is a
significant difference between missing on the average
static procedure for go, which is under 1k, and missing
on the most frequent procedure which is over 12k.
Nevertheless, the simulator marks both as a single miss
event.

Figure 3: Procedure size distribution for go (static and dynamic CDFs)
Several applications, in particular the raw audio
encoder and decoder, achieve extremely low miss rates.
The general trends are for both LRU and Neighbor to
have very similar hit rates, with two notable exceptions.
Both djpeg and mpeg2enc show high miss rates with 1k
pcaches with both LRU and Neighbor. The rates stay
relatively high at 2k for Neighbor, while they drop for
LRU. In both cases there is a specific procedure which
is frequently called and which is captured by the LRU
dynamics, though not by Neighbor.
The procedure size CDF of go (Figure 3) suggests
that the dynamic reference stream has a much larger
footprint than bwbasic (Figure 2), and thus it is not
surprising that bwbasic shows reduced miss rates for
comparable pcache sizes. On the other hand, a
relatively small pcache size (e.g. 16k or 32k) can still be
very effective.
The miss rate data displays an interesting result:
Neighbor generally achieves a lower miss rate than
LRU. While it is certainly possible for pathological or
pedagogical cases to exhibit behavior like this, in
general LRU is expected to achieve the highest hit rates.
This behavior is a consequence of the caching of
variable size objects.
Consider the case of a hardware cache with a fixed
block size. If a certain amount of space must be made
available and the entire cache is allocated, a specific
number of blocks must be evicted from the cache. For
such a cache LRU has proven to be effective at
providing the best guess for which blocks should be
evicted. With the software pcache used here the
number of procedures evicted from the pcache depends
upon the specific procedures selected. We explain this
result by considering three different types of pcaches:
small, medium and large. For a small pcache Neighbor
will tend to approach LRU performance because of the
smaller spread in LRU values between the least and the
most recently used procedures. For a large pcache
evictions will be rare so performance will approach the
compulsory miss regardless of the replacement
algorithm. For the medium case some number of
references will be satisfied by the available free space,
and for these the replacement algorithm is
inconsequential. Assume that the CDF of procedure
size is not correlated to LRU value 1 , and a new
procedure activation causes the runtime to invoke the
replacement algorithm. For half of these cases the size
of the newly activated procedure will be equal to or less
than the size of the LRU procedure, and thus both LRU
and Neighbor will select the LRU procedure. For the
remaining cases, the LRU algorithm will traverse the
list of least frequently used procedures and mark each
for eviction until the amount of space freed up is at least
equal to the new request. Neighbor, however, looks for
contiguous blocks that are good candidates according to
a specific cost function. By relying on the SQU(LRU)
cost function, Neighbor is biased against combining
multiple procedures into the best set.
Figure
4 illustrates this phenomenon. Assume that
three "units" of space must be freed up. The LRU
algorithm will choose the first three procedures on the
LRU list (procedures 23, 17 and 87) for eviction and
1 While this may not hold in all cases, we believe
that the assumption is valid in general.
then invoke the compactor to coalesce the space. On
the other hand, because Neighbor is using the square of
the LRU counts, it will select procedure 12 for
eviction 2 . We have found that, in general, for those
cases where LRU and Neighbor select different sets of
procedures for replacement Neighbor tends to select
fewer procedures. Continuing the example in Figure 4,
while it may be a good idea to evict either procedure 17,
or 87 before procedure 12, it seems intuitive that it
would not be best to evict all three rather than 12.
Proc 12: LRU: 4, Cost: 16  (selected by Neighbor)
Proc 17: LRU: 1, Cost: 1   (selected by LRU)
Proc 23: LRU: 3, Cost: 9   (selected by LRU)
Proc 35: LRU: 12, Cost: 144
Proc 87: LRU: 2, Cost: 4   (selected by LRU)

Figure 4: Example: LRU chooses "optimal" set while Neighbor evicts fewer procedures
The pcache miss rate for gcc is a direct result of the
dynamic program size CDF. The most frequently used
procedure in gcc is over 12k bytes and corresponds to
8% of the procedure activations, while the second most
common procedure is 52 bytes and corresponds to 2%
of the procedure activations. Although procedures that
exceed the pcache size are excluded they still contribute
to the count of procedure activations: thus the gcc
simulations have low hit rates for pcaches below 64k
bytes. An interesting phenomenon may occur when one
of these large and common procedures is finally
admitted to the pcache. The impact of introducing the
large procedure into the pcache causes LRU to evict a
huge number of procedures, while as was just discussed
Neighbor tends to evict fewer. The resulting impact on
pcache miss rate for LRU can be dramatic, as seen for
gsmencode between 8k and 16k.
3.4 Compacting Events
Table
3 and Table 4 show the rate of compaction
events per procedure activation, rather than raw event
counts, in order to make the presentation consistent
across the test programs. This data illustrates the
effectiveness of the Neighbor algorithm. While both
Neighbor and LRU achieve roughly comparable results
for pcache hit rates, Neighbor is much more effective at
reducing the number of compacting events. For this
data, the Neighbor algorithm generally produces a much
2 Note that the position of procedure 35 (with a high
LRU) in the cache blocks Neighbor from merging
procedures 17 and 35 along with the free space adjacent
to 35.
flatter response as a function of pcache size. This
performance is a consequence of Neighbor compacting
the pcache only when there is already enough free space
and no procedure eviction is required. The amount of
expected free space in a pcache is not monotonic with
pcache size, but rather is a complicated function of the
size of dynamic references. This fact is illustrated by
the 8k pcache for go, which shows that the rate of
compacting events can actually increase in response to
an increase in pcache size. Again, gcc shows a sharp
response soon after the admission of the 12k procedure.
Figure 5: Performance impact of pcache operations, relative to the base case (applications shown include 126.gcc, cjpeg, djpeg, gsmdecode, mipmap, pgpdecode, rasta, rawdaudio and unepic)
4. Performance
The pcache structure will reduce performance due to
three types of events: cache management including
LRU management and directory service, memory
movement due to compaction events within the pcache,
and decompression of data transferred from memory.
The impact of these factors was evaluated, and the
resulting performance is shown in Figure 5. Because of
the large volume of memory traffic into and within the
pcache, the management component has comparatively
little impact on performance. Each byte of memory
moved within the pcache due to compaction was
charged half of a clock cycle, assuming that two 32-bit
memory operations are required and loop unrolling can
be used to hide loop management overhead. The
compression technique used is based on an algorithm
that requires an average of 22 cycles on a SPARC to
compress each byte of SPARC binaries and achieves a
60% compression ratio [10]. However, a number of
algorithms could be selected to balance the demands of
performance against compression. In particular, an
application of Huffman coding (in hardware) will be
briefly discussed later.
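The charging scheme above can be folded into a rough per-program cost formula. The function below simply restates that accounting (half a cycle per compacted byte, and roughly 22 cycles per byte for the software compression algorithm of [10], the figure the paper quotes for SPARC); the function itself is illustrative, not part of the evaluation.

```python
def pcache_overhead_cycles(bytes_compacted, bytes_decompressed,
                           move_cost=0.5, decompress_cost=22):
    """Rough cycle charge for pcache activity: half a cycle per byte
    moved during compaction (a 32-bit load/store pair moves four bytes
    in two cycles, with loop overhead hidden by unrolling) plus ~22
    cycles per byte processed by the software compression algorithm."""
    return bytes_compacted * move_cost + bytes_decompressed * decompress_cost
```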
The average slowdown for all of the applications is
166% with a 64k-byte pcache. However, when go and
gcc are excluded from the pcache, the average
slowdown is only 11%. These numbers climb to 600%
and 36% for 32k-byte caches. Clearly, it is important to
exclude ill-behaved applications from the pcache, but
this problem is easy to manage with an embedded
system where the software is generally more highly
tuned to the execution environment.
5. Discussion
While there is a benefit to considering schemes that
require no additional hardware, the pcache architecture
can still take advantage of hardware acceleration if
available. A number of researchers have designed
hardware for instruction decompression. A particularly
good example is [11], which provides 480Mb/s
Huffman decompression on a Mips instruction stream.
This device requires 1 cm 2 in 0.8-micron process
technology. When used with hardware decompression,
the pcache is still effective at increasing compression
rates, by significantly increasing the block
size. Furthermore, the dictionary size is reduced, since
instead of having an entry for every possible jump
target there is only an entry for each subroutine.
It is impossible to say how the pcache will interact
with traditional hardware caches without discussing the
specific hardware configuration. If the pcache memory
is nearby on high-speed SRAM the cache should ignore
it, as transparent caching in hardware will provide no
direct latency benefit while consuming valuable
resources. On the other hand, if the pcache memory is
relatively slow it should be cached by the hardware as
well. Almost all sophisticated embedded processors
include memory control hardware for functions such as
chip-select, wait-state insertion and bus sizing. This
hardware should be augmented to allow blocks of
memory to become non-cacheable, which is a common
feature in high-performance processors.
6. Conclusions
We have presented a new approach to applying
compression to stored program images. This technique
can easily reduce the program storage by approximately
40%, which can correspond to a significant cost
reduction for embedded products targeted to the
consumer market. By compressing complete
procedures, rather than smaller sub-blocks, we are able
to avoid the cost of dedicated hardware. The resulting
performance impact has been measured to be
approximately 10% for a wide range of sophisticated
embedded applications.
Trace driven simulation has been used to evaluate
the opportunity of using compression and the associated
tradeoff points. The results suggest that a small
software controlled cache, perhaps 16k bytes of
standard SRAM, can be effective at caching the
working set and reducing dynamic memory traffic. The
additional effect of compressing traffic on the system
bus further decreases main memory traffic, and thus
helps to attack the problem of power consumption.
References
[1] "Combining the Concepts of Compression and Caching for a Two-Level Filesystem"
[2] "Executing Compressed Programs on an Embedded RISC Architecture"
[3] "Compression of Embedded System Programs"
[4] "A Method for the Construction of Minimum Redundancy Codes"
[5] "Code Density Optimization for Embedded DSP Processors Using Data Compression Techniques"
[6] "Code Compression"
[7] "Efficient Implementation of the Smalltalk-80 System"
[8] "IMPACT: An Architectural Framework for Multiple-Instruction-Issue Processors"
[9] "MediaBench: A Tool for Evaluating Multimedia and Communications Systems"
[10] "An Extremely Fast Ziv-Lempel Data Compression Algorithm"
[11] "A High-Speed Asynchronous Decompression Circuit for Embedded Processors"
Oliver Rthing , Jens Knoop , Bernhard Steffen, Sparse code motion, Proceedings of the 27th ACM SIGPLAN-SIGACT symposium on Principles of programming languages, p.170-183, January 19-21, 2000, Boston, MA, USA
Marc L. Corliss , E. Christopher Lewis , Amir Roth, The implementation and evaluation of dynamic code decompression using DISE, ACM Transactions on Embedded Computing Systems (TECS), v.4 n.1, p.38-72, February 2005
Guilin Chen , Mahmut Kandemir, Optimizing Address Code Generation for Array-Intensive DSP Applications, Proceedings of the international symposium on Code generation and optimization, p.141-152, March 20-23, 2005
Milenko Drini , Darko Kirovski , Hoi Vo, Code optimization for code compression, Proceedings of the international symposium on Code generation and optimization: feedback-directed and runtime optimization, March 23-26, 2003, San Francisco, California
Bjorn De Sutter , Bruno De Bus , Koen De Bosschere, Sifting out the mud: low level C++ code reuse, ACM SIGPLAN Notices, v.37 n.11, November 2002
Milenko Drini , Darko Kirovski , Hoi Vo, PPMexe: Program compression, ACM Transactions on Programming Languages and Systems (TOPLAS), v.29 n.1, p.3-es, January 2007
Bjorn De Sutter , Ludo Van Put , Dominique Chanet , Bruno De Bus , Koen De Bosschere, Link-time compaction and optimization of ARM executables, ACM Transactions on Embedded Computing Systems (TECS), v.6 n.1, February 2007
rpd Beszdes , Rudolf Ferenc , Tibor Gyimthy , Andr Dolenc , Konsta Karsisto, Survey of code-size reduction methods, ACM Computing Surveys (CSUR), v.35 n.3, p.223-267, September
Bjorn De Sutter , Bruno De Bus , Koen De Bosschere, Link-time binary rewriting techniques for program compaction, ACM Transactions on Programming Languages and Systems (TOPLAS), v.27 n.5, p.882-945, September 2005 | pagers;run-time performance overhead;procedural reference resolution;multimedia applications;procedure-based program compression;compressed memory;cached procedures;battery life;power consumption;high-capacitance bus traffic;consumer devices;embedded systems;RAM;memory references;performance requirements;source coding;transparent program compression;smart telephones;design factors;directory structure;memory reduction;integrated data services;communications applications;personal digital assistants |
266821 | Improving the accuracy and performance of memory communication through renaming. | As processors continue to exploit more instruction-level parallelism, a greater demand is placed on reducing the effects of memory access latency. In this paper, we introduce a novel modification of the processor pipeline called memory renaming. Memory renaming applies register access techniques to load instructions, reducing the effect of delays caused by the need to calculate effective addresses for the load and all preceding stores before the data can be fetched. Memory renaming allows the processor to speculatively fetch values when the producer of the data can be reliably determined without the need for an effective address. This work extends previous studies of data value and dependence speculation. When memory renaming is added to the processor pipeline, renaming can be applied to 30% to 50% of all memory references, translating to an overall improvement in execution time of up to 41%. Furthermore, this improvement is seen across all memory segments, including the heap segment, which has often been difficult to manage efficiently. | Introduction
Two trends in the design of microprocessors combine
to place an increased burden on the implementation
of the memory system: more aggressive, wider
instruction issue and higher clock speeds. As more instructions
are pushed through the pipeline per cycle,
there is a proportionate increase in the processing of
memory operations - which account for approximately
1/3 of all instructions. At the same time, the gap between
processor and DRAM clock speeds has dramatically
increased the latency of the memory operations.
Caches have been universally adopted to reduce the average
memory access latency. Aggressive out-of-order
pipeline execution along with non-blocking cache designs
have been employed to alleviate some of the remaining
latency by bypassing stalled instructions.
Unfortunately, instruction reordering is complicated
by the necessity of calculating effective addresses in
memory operations. Whereas dependencies between
register-register instructions can be identified by examining
the operand fields, memory dependencies cannot
be determined until much later in the pipeline (when
the effective address is calculated). As a result, mechanisms
specific to loads and stores (e.g., the MOB in the
Pentium Pro [1]) are required to resolve these memory
dependencies later in the pipeline and enforce memory
access semantics. To date, the only effective solution
for dealing with ambiguous memory dependencies
requires stalling loads until no earlier unknown store
address exists. This approach is, however, overly conservative
since many loads will stall awaiting the addresses
of stores that they do not depend on, resulting
in increased load instruction latency and reduced program
performance. This paper proposes a technique
called memory renaming that effectively predicts memory
dependencies between store and load instructions,
allowing the dynamic instruction scheduler to more accurately
determine when loads should commence execution.
In addition to reordering independent memory references,
further flexibility in developing an improved
dynamic schedule can be achieved through our tech-
nique. Memory renaming enables load instructions to
retrieve data before their own effective addresses have
been calculated. This is achieved by identifying the relationship
between the load and the previous store instruction
that generated the data. A new mechanism is
then employed which uses an identifier associated with
the store-load pair to address the value, bypassing the
normal addressing mechanism. The term memory renaming
comes from the similarity this approach has
to the abstraction of operand specifiers performed in
register renaming [7].
In this paper, we will examine the characteristics of
the memory reference stream and propose a novel architectural
modification to the pipeline to enable speculative
execution of load instructions early in the pipeline
(before address calculation). By doing this, true dependencies
can be eliminated, in particular those true
dependencies supporting the complex address calculations
used to access the program data. This will be
shown to have a significant impact on overall performance
(as much as 41% speedup for the experiments
presented).
The remainder of this paper is organized as follows:
Section 2 examines previous approaches to speculating
load instructions. Section 3 introduces the memory
reordering approach to speculative load execution and
evaluates the regularity of memory activity in order
to identify the most successful strategy in executing
loads speculatively. In section 4, we show one possible
integration of memory renaming into an out-of-order
pipeline implementation. Section 5 provides performance
analysis for a cycle level simulation of the tech-
niques. In section 6 we state conclusions and identify
future research directions for this work.
2 Background
A number of studies have targeted the reduction of
memory latency. Austin and Sohi [2] employed a sim-
ple, fast address calculation early in the pipeline to
effectively hide the memory latency. This was achieved
by targeting the simple base+offset addressing modes
used in references to global and stack data.
Dahl and O'Keefe [5] incorporated address bits associated
with each register to provide a hardware mechanism
to disambiguate memory references dynamically.
This allowed the compiler to be more aggressive in placing
frequently referenced data in the register file (even
when aliasing may be present), which can dramatically
reduce the number of memory operations that must be
executed.
Lipasti and Shen [8] described a mechanism in which
the value of a load instruction is predicted based on
the previous values loaded by that instruction. In their
work, they used a load value prediction unit to hold the
predicted value along with a load classification table for
deciding whether the value is likely to be correct based
on past performance of the predictor. They observed
that a large number of load instructions are bringing in
the same value time after time. By speculatively using
the data value that was last loaded by this instruction
before all dependencies are resolved, they are able to
remove those dependencies from the critical path (when
speculation was accurate). Using this approach they
were able to achieve a speedup in execution of between
3% (for a simple implementation) and 16% (with infinite
resources and perfect prediction).
Sazeides, Vassiliadis and Smith [10] used address
speculation on load instructions to remove the dependency
caused by the calculation of the effective address.
This enables load instructions to proceed speculatively
without their address operands when effective address
computation for a particular load instruction remains
constant (as in global variable references).
Finally, Moshovos, Breach, Vijaykumar and Sohi [9]
used a memory reorder buffer incorporating data dependence
speculation. Data dependence speculation
allows load instructions to bypass preceding stores before
ambiguous dependencies are resolved; this greatly
increases the flexibility of the dynamic instruction
scheduling to find memory instructions ready to execute.
However, if the speculative bypass violates a true
dependency between the load and store instructions in
flight, the state of the machine must be restored to the
point before the load instruction was mis-speculated
and all instructions after the load must be aborted. To
reduce the number of times a mis-speculation occurs,
a prediction confidence circuit was included controlling
when bypass was allowed. This confidence mechanism
differs from that used in value prediction by locating
dependencies between pairs of store and load instructions
instead of basing the confidence on the history of
the load instruction only. When reference prediction
was added to the Multiscalar architecture, execution
performance was improved by an average of 5-10%.
Our approach to speculation extends both value prediction
and dependence prediction to perform targeted
speculation of load instructions early in the architectural
pipeline.
3 Renaming Memory Operations
Memory renaming is an extension to the processor
pipeline designed to initiate load instructions as early
as possible. It combines dependence prediction with
value prediction to achieve greater performance than
possible with either technique alone. The basic idea
behind memory renaming is to make a load instruction
look more like a register reference and thereby process
the load in a similar manner. This is difficult to
achieve because memory reference instructions, unlike
the simple operand specifiers used to access a register,
require an effective address calculation before dependence
analysis can be performed. To eliminate the need
to generate an effective address from the critical path
for accessing load data, we perform a load speculatively,
using the program counter (PC) of the load instruction
as an index to retrieve the speculative value. This
is similar to the approach used in Lipasti and Shen's
value prediction except that in memory renaming this
is performed indirectly; when a load instruction is en-
countered, the PC address associated with that load
(LDPC) is used to index a dependence table (called the
store-load cache) to determine the likely producer of the
data (generally a store instruction).

Figure 1: Support for Memory Renaming.

When it is recognized that the data referenced by a load instruction is
likely produced by a single store instruction, the data
from the last execution of that store can be retrieved
from a value file. Accessing this value file entry can
be performed (speculatively) without the need to know
the effective address of the load or the store instruction,
instead the value file is indexed by a unique identifier
associated with the store-load pairing (this mechanism
will be described in the next section). Store instructions
also use the store-load cache to locate entries in
the value file which are likely to be referenced by future
loads. A store and load instruction pair which are
determined to reference the same locations will map to
the same value file index. Figure 1 shows an overview
of the memory renaming mechanism.
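The interaction between the store/load cache and the value file described above can be sketched in software. The following Python model is purely illustrative, not the hardware design: the structure names follow the paper, but the dictionary-based tables, unbounded sizes, and method names are invented for the sketch.

```python
# Illustrative model of memory renaming: a store/load cache maps an
# instruction PC to a value-file index; stores deposit their data into the
# value file at that index, and loads speculatively read it without ever
# computing an effective address. Sizes are unbounded here for simplicity.

class MemoryRenamer:
    def __init__(self):
        self.store_load_cache = {}   # instruction PC -> value-file index
        self.value_file = []         # one slot per store-load dependence edge
        self.next_index = 0

    def _index_for(self, pc):
        """Return the value-file index for this PC, allocating on a miss."""
        if pc not in self.store_load_cache:
            self.store_load_cache[pc] = self.next_index
            self.value_file.append(None)
            self.next_index += 1
        return self.store_load_cache[pc]

    def retire_store(self, store_pc, value):
        """At retirement, a store writes its data into the value file."""
        self.value_file[self._index_for(store_pc)] = value

    def bind(self, load_pc, store_pc):
        """Record that load_pc consumes the value produced by store_pc."""
        self.store_load_cache[load_pc] = self._index_for(store_pc)

    def speculate_load(self, load_pc):
        """Speculative load value, available before address generation."""
        return self.value_file[self._index_for(load_pc)]

renamer = MemoryRenamer()
renamer.retire_store(0x400100, 42)       # store produces 42
renamer.bind(0x400200, 0x400100)         # dependence edge detected earlier
print(renamer.speculate_load(0x400200))  # speculatively returns 42
```

In the real pipeline the speculative value is always checked against the Dcache access; the sketch omits that verification path.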
This approach has the advantage that when a pro-
ducer/consumer relationship exists, the load can proceed
very early in the pipeline. The effective address
calculation will proceed as usual, as will the memory
operation to retrieve the value from the Dcache; this
is done to validate the speculative value brought in
from the value file. If the data loaded from the Dcache
matches that from the value file, then speculation was
successful and processing continues unfettered. If the
values differ, then the state of the processor must be
corrected.
In order for memory renaming to improve processor
performance, it may be prudent to include a prediction
confidence mechanism capable of identifying when
speculation is warranted. As in value prediction and
dependence prediction, we use a history based scheme
to identify when speculation is likely to succeed. Unlike
value renaming, we chose to identify stable store-load
pairings instead of relying on static value references. To
see the advantage of using a producer/consumer relationship
as a confidence mechanism, analysis is shown
for some of the SPEC95 benchmarks.
All programs were compiled with GNU GCC (ver-
sion 2.6.2), GNU GAS (version 2.5), and GNU GLD
(version 2.5) with maximum optimization (-O3) and
loop unrolling enabled (-funroll-loops). The Fortran
codes were first converted to C using AT&T F2C version
1994.09.27. All experiments were performed on
an extended version of the SimpleScalar [3] tool set.
The tool set employs the SimpleScalar instruction set,
which is a (virtual) MIPS-like architecture [6].
Table 1: Benchmark Application Descriptions

Benchmark   Instr (Mil.)   Loads (Mil.)   Value Locality   Addr Loc.   Prod Loc.
go               548            157            25 %           28 %        62 %
gcc              264             97              -              -           -
compress         3.5            1.3            15 %           37 %        50 %
li               956            454            24 %           23 %        55 %
tomcatv         2687            772            43 %           48 %        66 %
su2cor            -              -               -              -           -
hydro2d          967            250              -              -           -
mgrid           4422           1625            42 %             -           -
There are several approaches to improving the processing
of memory operations by exploiting regularity
in the reference stream. Regularity can be found in the
stream of values loaded from memory, in the effective
address calculations performed and in the dependence
chains created between store and load instructions. Table
1 shows the regularity found in these differing characteristics
of memory traffic. The first three columns
show the benchmark name, the total number of instructions
executed, and the total number of loads. The fourth
column shows the percentage of load executions of con-
stant, or near constant values. The percentage shown
is how often a load instruction fetches the same data
value in two successive executions (this is a measure
of value locality). As shown in the table, a surprising
number of load instruction executions bring in the same
values as the last time, averaging 29% for SPEC integer
benchmarks and 44% for SPECfp benchmarks. While
it is surprising that so much regularity exists in value
reuse, these percentages cover only about a third of all
loads. Column 5 shows the percentage of load executions
that reference the same effective address as last
time. This shows about the same regularity in effective
address reuse. The final column shows the percentage
of time that the producer of the value remains unchanged
over successive instances of a load instruction
- this means that the same store instruction generated
the data for the load. Here we see that this relationship
is far more stable - even when the values transferred
change, or when a different memory location is used
for the transfer, the relationship between the sourcing
store and the load remains stable. These statistics led
us to the use of dependence pairings between store and
load instructions to identify when speculation would be
most profitable.
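The three locality measures compared in Table 1 can be computed from a reference trace roughly as follows. The trace format and function below are hypothetical stand-ins for the instrumentation actually used in the paper's simulator; they only illustrate what each column measures.

```python
# Hypothetical trace analysis for the three locality measures in Table 1:
# value locality (same value as the previous execution of this load),
# address locality (same effective address), and producer locality (same
# sourcing store instruction).

def locality_stats(trace):
    """trace: list of (load_pc, addr, value, producer_store_pc) tuples.
    Returns (value%, address%, producer%) over repeat executions."""
    last = {}                 # load_pc -> (addr, value, producer)
    same_val = same_addr = same_prod = total = 0
    for pc, addr, value, prod in trace:
        if pc in last:
            prev_addr, prev_val, prev_prod = last[pc]
            total += 1
            same_val  += (value == prev_val)
            same_addr += (addr == prev_addr)
            same_prod += (prod == prev_prod)
        last[pc] = (addr, value, prod)
    return tuple(100.0 * n / total for n in (same_val, same_addr, same_prod))

# Same load PC with changing value and address, but a stable producer store:
trace = [(1, 0x1000, 7, 9), (1, 0x1008, 8, 9), (1, 0x1010, 9, 9)]
print(locality_stats(trace))  # (0.0, 0.0, 100.0)
```

The example mirrors the table's key observation: producer locality can be high even when neither the value nor the address repeats.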
4 Experimental Pipeline Design
To support memory renaming, the pipeline must be
extended to identify store/load communication pairs,
promote their communications to the register communication
infrastructure, verify speculatively forwarded
values, and recover the pipeline if the speculative
store/load forward was unsuccessful. In the following
text, we detail the enhancements made to the baseline
out-of-order issue processor pipeline. An overview of
the extensions to the processor pipeline and load/store
queue entries is shown in Figure 2.
4.1 Promoting Memory Communication
to Registers
The memory dependence predictor is integrated into
the front end of the processor pipeline. During de-
code, the store/load cache is probed (for both stores
and loads) for the index of the value file entry assigned
to the dependence edge. If the access hits in
the store/load cache, the value file index returned is
propagated to the rename stage. Otherwise, an entry
is allocated in the store/load cache for the instruction.
In addition, a value file entry is allocated, and the index
of the entry is stored in the store/load cache. It may
seem odd to allocate an entry for a load in the value file;
however, we have found in all our simulations that this
is a beneficial optimization; it promotes constants and
rarely stored variables into the value file, permitting
these accesses to also benefit from faster, more accurate
communication and synchronization. In addition,
the decode stage holds confidence counters for renamed
loads. These counters are incremented for loads when
their sourcing stores are predicted correctly, and decremented
(or reset) when they are predicted incorrectly.
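A history-based confidence counter of the kind described above might look like the following sketch. The counter width, the threshold, and the choice between the decrement and reset variants are illustrative parameters, not values taken from the paper.

```python
# Sketch of a per-load saturating confidence counter: incremented when the
# sourcing store was predicted correctly, reset (or decremented) on a
# mispredict; renaming is applied only once the counter crosses a threshold.

class ConfidenceCounter:
    def __init__(self, max_value=3, threshold=2, reset_on_miss=True):
        self.count = 0
        self.max_value = max_value
        self.threshold = threshold
        self.reset_on_miss = reset_on_miss

    def update(self, correct):
        if correct:
            self.count = min(self.count + 1, self.max_value)
        elif self.reset_on_miss:
            self.count = 0                       # reset variant
        else:
            self.count = max(self.count - 1, 0)  # decrement variant

    def allow_speculation(self):
        return self.count >= self.threshold

c = ConfidenceCounter()
for outcome in (True, True, False, True):
    c.update(outcome)
print(c.allow_speculation())  # False: the mispredict reset the counter
```

The reset variant trades coverage for accuracy: one mispredict disables renaming for a load until it rebuilds a streak of correct predictions.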
In the rename stage of the pipeline, loads use the
value file index, passed in from the decode stage, to
access an entry in the value file. The value file returns
either the value last stored into the predicted dependence
edge, or if the value is in the process of being
computed (i.e. in flight), a load/store queue reservation
station index is returned. If a reservation station
index is returned, the load will stall until the sourcing
store data is written to the store's reservation station.
When a renamed load completes, it broadcasts its result
to dependent instructions; the register and memory
scheduler operate on the speculative load result as
before without modification.
All loads, speculative or otherwise, access the memory
system. When a renamed load's value returns from
the memory system, it is compared to the predicted
value. If the values do not match, a load data misspeculation
has occurred and pipeline recovery is initiated.
Unlike loads, store instructions do not access the
value file until retirement. At that time, stores deposit
their store data into the value file and the memory sys-
tem. Later renamed loads that reference this value will
be able to access it directly from the value file. No
attempt is made to maintain coherence between the
value file and main memory. If their contents diverge
(due to, for example, external DMAs), the pipeline will
continue to operate correctly. Any incoherence will be
detected when the renamed load values are compared
to the actual memory contents.
The initial binding between stores and loads is created
when a load that was not renamed references the
data produced by a renamed store. We explored two
approaches to detecting these new dependence edges.
The simplest approach looks for renamed stores that
forward to loads in the load/store queue forwarding
network (i.e., communications between instructions in
flight). When these edges are detected, the store/load
cache entry for loads is updated accordingly. A slightly
more capable approach is to attach value file indices to
renamed store data, and propagate these indices into
the memory hierarchy. This approach performs better
because it can detect longer-lived dependence edges,
however, the extra storage for value file indices makes
the approach more expensive.
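The simpler of the two edge-detection schemes, watching for store-to-load forwards in the load/store queue, can be sketched as follows. The queue encoding and function name are invented for illustration; a real LSQ matches addresses associatively among in-flight instructions.

```python
# Sketch of dependence-edge detection via LSQ store-to-load forwarding:
# whenever an in-flight load is satisfied by an earlier in-flight store to
# the same address, record the store/load pairing so the load's store/load
# cache entry can be updated with the store's value-file index.

def detect_edges(lsq):
    """lsq: oldest-to-youngest list of ('st'|'ld', pc, addr) entries.
    Returns {load_pc: store_pc} for loads forwarded from in-flight stores."""
    edges = {}
    last_store_to = {}                        # addr -> most recent older store PC
    for kind, pc, addr in lsq:
        if kind == 'st':
            last_store_to[addr] = pc
        elif addr in last_store_to:           # store-to-load forward in the LSQ
            edges[pc] = last_store_to[addr]   # new dependence edge binding
    return edges

lsq = [('st', 0x10, 0xA0), ('st', 0x14, 0xB0), ('ld', 0x20, 0xA0)]
print(detect_edges(lsq))  # the load at 0x20 pairs with the store at 0x10
```

The second scheme in the text extends this window beyond in-flight instructions by tagging cached store data with value-file indices, catching longer-lived edges at extra storage cost.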
4.2 Recovering from Mis-speculations
When a renamed load injects an incorrect value
into the program computation, correct program execution
requires that, minimally, all instructions that
used the incorrect value and dependent instructions be
re-executed. To this end, we explored two approaches
to recovering the pipeline from data mis-speculations:
squash and re-execution recovery. The two approaches
exhibit varying cost and complexity - later we will see
that lower cost mis-speculation recovery mechanisms
enable higher levels of performance, since they permit
the pipeline to promote more memory communication
into the register infrastructure.
Squash recovery, while expensive in performance
penalty, is the simplest approach to implement. The
approach works by throwing away all instructions after
a mis-speculated load instruction. Since all dependent
instructions will follow the load instruction, the restriction
that all dependent instructions be re-executed
will indeed be met.

Figure 2: Pipeline Support for Memory Renaming. Shown are the additions made to the baseline pipeline to support memory renaming. The solid edges in the writeback stage represent forwarding through the reservation stations; the dashed lines represent forwarding through the load/store queue. Also shown are the fields added (shown in gray) to the instruction re-order buffer entries.

Unfortunately, this approach can throw away many instructions independent of the mis-speculated load result, requiring many unnecessary re-executions. The advantage of this approach is that it
requires very little support over what is implemented
today. Mis-speculated loads may be treated the same
as mis-speculated branches.
Re-execution recovery, while more complex, has significantly
lower cost than squash recovery. The approach
leverages dependence information stored in the
reservation stations of not-yet retired instruction to
permit re-execution of only those instructions dependent
on a speculative load value. The cost of this approach
is added pipeline complexity.
We implemented re-execution by injecting the correct
result of mis-speculated loads onto the result bus
- all dependent instructions receiving the correct load
result will re-execute, and re-broadcast their results,
forcing dependent instructions to re-execute, and so on.
Since it's non-trivial for an instruction to know how
many of its operands will be re-generated through re-
execution, an instruction may possibly re-execute multiple
times, once for every re-generated operand that
arrives. In addition, dependencies through memory
may require load instructions to re-execute. To accommodate
these dependencies, the load/store queue
also re-checks memory dependencies of any stores that
re-execute, re-issuing any dependent load instructions.
Additionally, loads may be forced to re-execute if they
receive a new address via instruction re-execution. At
retirement, any re-executed instruction will be the oldest
instruction in the machine, thus it cannot receive
any more re-generated values, and the instruction may
be safely retired. In Section 5, we will demonstrate
through simulation that re-execution is a much less expensive
approach to implementing load mis-speculation
recovery.
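The difference in recovery cost between the two schemes can be illustrated on a toy instruction window. The dependence-set encoding below is invented for the sketch; real hardware discovers consumers through result-bus broadcasts rather than an explicit graph walk.

```python
# Toy comparison of the two recovery schemes: squash recovery re-executes
# every instruction younger than the mispredicted load, while re-execution
# recovery re-executes only the transitive consumers of the bad value.

def squash_set(window, bad):
    """All instructions after the mispredicted load are thrown away."""
    return set(range(bad + 1, len(window)))

def reexec_set(window, bad):
    """window[i] is the set of earlier instruction indices i depends on."""
    affected = set()
    for i in range(bad + 1, len(window)):
        if window[i] & ({bad} | affected):   # consumes a re-generated value
            affected.add(i)
    return affected

# 0: mispredicted load; 1 and 3 depend on it transitively; 2 and 4 do not.
deps = [set(), {0}, set(), {1}, {2}]
print(sorted(squash_set(deps, 0)))  # [1, 2, 3, 4]
print(sorted(reexec_set(deps, 0)))  # [1, 3]
```

Independent instructions 2 and 4 survive under re-execution recovery, which is why the cheaper recovery scheme lets the pipeline afford more aggressive renaming.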
5 Experimental Evaluation
We evaluated the merits of our memory renaming
designs by extending a detailed timing simulator to
support the proposed designs and by examining the
performance of programs running on the extended simulator.
We varied the confidence mechanism, misspeculation
recovery mechanism, and key system parameters
to see what effect these parameters had on
performance.
5.1 Methodology
Our baseline simulator is detailed in Table 2. It
is from the SimpleScalar simulation suite (simulator
sim-outorder) [3]. The simulator executes only user-level
instructions, performing a detailed timing simulation
of an 4-way superscalar microprocessor with
two levels of instruction and data cache memory. The
simulator implements an out-of-order issue execution
model. Simulation is execution-driven, including execution
down any speculative path until the detection of
a fault, TLB miss, or mis-prediction. The model employs
a 256 entry re-order buffer that implements re-named
register storage and holds results of pending in-
structions. Loads and stores are placed into a 128 entry
load/store queue. In the baseline simulator, stores execute
when all operands are ready; their values, if spec-
ulative, are placed into the load/store queue. Loads
may execute when all prior store addresses have been
computed; their values come from a matching earlier
store in the store queue (i.e., a store forward) or from
the data cache. Speculative loads may initiate cache
misses if the address hits in the TLB. If the load is
subsequently squashed, the cache miss will still com-
plete. However, speculative TLB misses are not per-
mitted. That is, if a speculative cache access misses
in the TLB, instruction dispatch is stalled until the instruction
that detected the TLB miss is squashed or
committed. Each cycle the re-order buffer issues up
to 8 ready instructions, and commits up to 8 results
in-order to the architected register file. When stores
are committed, the store value is written into the data
cache. The data cache modeled is a four-ported 32k
two-way set-associative non-blocking cache.
We found early on that instruction fetch bandwidth
was a critical performance bottleneck. To mitigate this
problem, we implemented a limited variant of the collapsing
buffer described in [4]. Our implementation
supports two predictions per cycle within the same
instruction cache block, which provides significantly
more instruction fetch bandwidth and better pipeline
resource utilization.
When selecting benchmarks, we looked for programs
with varying memory system performance, i.e., programs
with large and small data sets as well as high
and low reference locality. We analyzed 10 programs
from the SPEC'95 benchmark suite, 6 from the integer
codes and 4 from the floating point suite.
All memory renaming experiments were performed
with a 1024 entry, 2-way set associative store/load
cache and a 512 entry value file with LRU replacement.
To detect initial dependence edge bindings, we propagate
the value file indices of renamed store data into
the top-level data cache. When loads (that were not
renamed) access renamed store data, the value file index
stored in the data cache is used to update the load's
store/load cache entry. 1
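For concreteness, a 1024-entry, 2-way set-associative store/load cache implies 512 sets and a 9-bit set index. The bit-slicing below is a derived sketch (assuming word-aligned instruction PCs), not a detail given in the paper.

```python
# Derived sizing sketch for the store/load cache used in the experiments:
# 1024 entries / 2 ways = 512 sets, so 9 bits of the word-aligned PC select
# a set and the remaining upper bits form the tag.

ENTRIES, WAYS = 1024, 2
SETS = ENTRIES // WAYS               # 512 sets
INDEX_BITS = SETS.bit_length() - 1   # log2(512) = 9

def slc_index_tag(pc):
    word = pc >> 2                   # instructions are word aligned (assumed)
    return word & (SETS - 1), word >> INDEX_BITS

print(SETS, INDEX_BITS)              # 512 9
print(slc_index_tag(0x400204))       # (set, tag) for an example PC
```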
5.2 Predictor Performance
Figure 3 shows the performance of the memory dependence predictor.

1 Due to space restrictions, we have omitted experiments which explore predictor performance sensitivity to structure sizes. The structure sizes selected eliminate most capacity problems in the predictor, allowing us to concentrate on how effectively we can leverage the predictions to improve program performance.

Figure 3: Memory Dependence Predictor Performance.

The graph shows the hit rate of
the memory dependence predictor for each benchmark,
where the hit rate is computed as the number of loads
whose sourcing store value was correctly identified after
probing the value file. The predictor works quite
well, predicting correctly as many as 76% of the pro-
gram's memory dependencies - an average of 62% for
all the programs. Unlike many of the value predictor
mechanisms [8], dependence predictors work well, even
better, on floating point programs.
To better understand where the dependence predictor
was finding its dependence locality, we broke down
the correct predictions by the segment in which the reference
data resided. Figure 4 shows the breakdown of
correct predictions for data residing in the global, stack,
and heap segments. A large fraction of the correct dependence
predictions, as much as 70% for Mgrid and
41% overall on the average, came from stack references.
This result is not surprising considering the frequency
of stack segment references and their semi-static na-
ture, i.e., loads and stores to the stack often reference
the same variable many times. (Later we leverage this
property to improve the performance of the confidence
mechanisms.) Global accesses also account for many
of the correct predictions, as much as 86% for Tomcatv
and 43% overall on the average. Finally, a significant
number of correct predictions come from the heap seg-
ment, as much as 40% for Go and 15% overall on the
average. To better understand what aspects of the program
resulted in these correct predictions, we profiled
top loads and then examined their sourcing stores; we
found a number of common cases where heap accesses
exhibited dependence locality:
These examples are typical of program constructs that challenge
even the most sophisticated register allocators. As a result,
only significant advances in compiler technology will eliminate
these memory accesses. The same assertion holds for global ac-
Fetch Interface: fetches any 4 instructions in up to two cache blocks per cycle, separated by at most two branches
Instruction Cache: 32k 2-way set-associative, latency
Branch Predictor: 8-bit global history indexing a 4096-entry pattern history table (GAp [11]) with 2-bit saturating counters, 8-cycle mis-prediction penalty
Out-of-Order Issue Mechanism: out-of-order issue of up to 8 operations per cycle, 256-entry re-order buffer, 128-entry load/store queue; loads may execute when all prior store addresses are known
Architected Registers: floating point
Functional Units: 8 integer ALUs, 4 load/store units, 4 FP adders, 1 integer MULT/DIV, 1 FP MULT/DIV
Functional Unit Latency: integer ALU 1/1, load/store 2/1, integer MULT 3/1, integer DIV 12/12, FP adder 2/1
Data Cache: 32k 2-way set-associative, write-back, write-allocate, latency; four-ported, non-blocking interface, supporting one outstanding miss per physical register
L2 Cache: 4-way set-associative, unified, 64-byte blocks
Virtual Memory: 4K-byte pages, fixed TLB miss latency after earlier-issued instructions complete
Table 2: Baseline Simulation Model.
Figure 4: Predictor Hits by Memory Segment (breakdown of correct predictions among the global, stack, and heap segments).
- repeated accesses to aliased data, which cannot be allocated to registers
- accesses to loop data with a loop dependence distance of one (footnote 3)
- accesses to single-instance dynamic storage, e.g., a variable allocated at the beginning of the program, pointed to by a few, immutable global pointers
As discussed in Section 3, a pipeline implementation
can also benefit from a confidence mechanism. Figure
5 shows the results of experiments exploring the efficacy
of attaching confidence counters to load instructions.
(Footnote, continued: ...cesses, all of which the compiler must assume are
aliased. Stack accesses, on the other hand, can be effectively register
allocated, thereby eliminating the memory accesses, given that the
processor has enough registers.)
(Footnote 3: Note that since we always predict the sourcing store to be
the last previous one, our predictors will not work with loop dependence
distances greater than one, even if they are regular accesses. Support
for these cases is currently under investigation.)
The graphs show the confidence and coverage
for a number of predictors. Confidence is the success
rate of high-confidence loads. Coverage is the fraction
of correctly predicted loads, without confidence, covered
by the high-confidence predictions of a particular
predictor. Confidence and coverage are shown for 6 pre-
dictors. The notation used for each is as follows: XYZ,
where X is the count that must be reached before the
predictor considers the load a high-confidence load. By
default, the count is incremented by one when the predictor
correctly predicts the sourcing store value, and
reset to zero when the predictor fails. Y is the count
increment used when the opcode of the load indicates
an access off the stack pointer. Z is the count increment
used when the opcode of the load indicates an
access off the global pointer. Our analyses showed that
stack and global accesses are well behaved, thus we can
increase coverage, without sacrificing much confidence,
by incrementing their confidence counters with a value
greater than one.
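The XYZ scheme just described can be sketched as follows. The segment classification (stack/global/other) is passed in as a flag, abstracting away the opcode check, and all names are our own illustration.

```python
class ConfidenceCounter:
    """XYZ confidence scheme: X is the high-confidence threshold;
    on a correct prediction the counter rises by Y for stack-pointer
    loads, by Z for global-pointer loads, and by 1 for all other
    loads; it resets to zero when the predictor fails."""
    def __init__(self, x, y, z):
        self.x, self.y, self.z = x, y, z
        self.count = 0

    def high_confidence(self):
        # The load is renamed only when the threshold has been reached.
        return self.count >= self.x

    def update(self, correct, segment='other'):
        if not correct:
            self.count = 0
        elif segment == 'stack':
            self.count += self.y
        elif segment == 'global':
            self.count += self.z
        else:
            self.count += 1
```

With a 422 configuration, for example, two correct stack-relative predictions reach the same threshold that four correct heap predictions would require.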
As shown in Figure 5, confidence is very high for
the configurations examined, as much as 99.02% for
Hydro-2D and at least 69.22% for all experiments that
use confidence mechanisms. For most of the experiments
we tried, increasing the increments for stack and
global accesses to half the confidence counter performed
best. While this configuration usually degrades confidence
over the baseline case (an increment of one for
all accesses), coverage is improved enough to improve
program performance. Coverage varies significantly: a
number of the programs, e.g., Compress and Hydro-
2D, have very high coverage, while others, such as CC1
and Perl, do not gain higher coverage until a significant
amount of confidence is sacrificed. Another interesting
feature of our confidence measurements is the relative
insensitivity of coverage to the counter threshold once
the confidence threshold rises above 2.
Figure 5: Confidence and Coverage for Predictors with Confidence Counters.
Figure 6: Performance with Varied Predictor/Recovery Configuration.
This reinforces our earlier observation that the memory dependencies
in a program are relatively static - once they
occur a few times, they very often occur in the same
fashion for much of the program execution.
5.3 Pipeline Performance
Predictor hit rates are an insufficient tool for evaluating
the usefulness of a memory dependence predictor.
In order to fully evaluate it, we must integrate it into
a modern processor pipeline, leverage the predictions
it produces, and correctly handle the cases when the
predictor fails. Figure 6 details the performance of the
memory dependence predictor integrated into the base-line
out-of-order issue performance simulator. For each
experiment, the figure shows the speedup (in percent,
measured in cycles to execute the entire program) with
respect to the baseline simulator.
Four experiments are shown for each benchmark in
Figure
6. The first experiment, labeled SQ-422, shows
the speedup found with a dependence predictor utilizing
a 422 confidence configuration and squash recovery
for load mis-speculations. Experiment SQ-844 is the
same experiment except with an 844 confidence mechanism.
The RE-422 configuration employs a 422 confidence
configuration and utilizes the re-execution mechanism
described in Section 3 to recover from load mis-
speculations. Finally, the RE-211 configuration also
employs the re-execution recovery mechanism, but utilizes
a lower-confidence 211 confidence configuration.
The configuration with squash recovery and the
422 confidence mechanism, i.e., SQ-422, shows small
speedups for many of the programs, and falls short on
others, such as CC1 which saw a slowdown of more than
5%. A little investigation of these slowdowns quickly
revealed that the high-cost of squash recovery, i.e.,
throwing away all instructions after the mis-speculated
load, often completely outweighs the benefits of memory
renaming. (Many of the programs had more data
mis-speculations than branch mis-predictions!) One
remedy to the high-cost of mis-speculation is to permit
renaming only for higher confidence loads. The
experiment labeled SQ-844 renames higher-confidence
loads. This configuration performs better because it
suffers from less mis-speculation; however, some experiments,
e.g., CC1, show very little speedup because
they are still plagued with many high-cost load mis-speculations.
A better remedy for high mis-speculation recovery
costs is a lower cost mis-speculation recovery mech-
anism. The experiment labeled RE-422 adds re-execution
support to a pipeline with memory renaming
support with a 422 confidence mechanism. This
design has lower mis-speculation costs, allowing it
to show speedups for all the experiments run, as much
as 14% for M88ksim and an average overall speedup
of over 6%. To confirm our intuitions as to the lower
cost of re-execution, we measured directly the cost of
squash recovery and re-execution for all runs by counting
the number of instructions thrown away due to load
mis-speculations. We found that overall, re-execution
consumes less than 1/3 of the execution bandwidth
required by squash recovery - in other words, less
than 1/3 of the instructions in flight after a load misspeculation
are dependent on the mis-speculated load,
on average. Additionally, re-execution benefits from
not having to re-fetch, decode, and issue instructions
after the mis-speculated load.
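The difference between the two recovery policies can be illustrated with a toy model (our own sketch, not the simulator's bookkeeping): squash recovery pays for every instruction in flight after the mis-speculated load, while re-execution pays only for its transitive data dependents.

```python
# Toy recovery-cost model. window: list of (index, set_of_source_indices)
# for instructions in flight, in program order; bad: index of the
# mis-speculated load.
def recovery_costs(window, bad):
    """Return (squash_cost, reexec_cost) in instructions."""
    # Squash recovery discards everything after the bad load.
    squash = sum(1 for i, _ in window if i > bad)
    # Re-execution re-runs only transitive dependents of the bad load.
    dependent = {bad}
    for i, srcs in window:
        if i > bad and srcs & dependent:
            dependent.add(i)
    return squash, len(dependent) - 1  # exclude the load itself
```

In a window where only one later instruction consumes the load's value, squash recovery discards three instructions but re-execution repairs just one, mirroring the 1/3 bandwidth observation above.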
Given the lower cost of re-execution, we explored
whether speedups would be improved if we also
renamed lower-confidence loads. The experiment labeled
RE-211 employs re-execution recovery with a
lower-confidence 211 confidence configuration. This
configuration found better performance for most of
the experiments, further supporting the benefits of
re-execution. We also explored the use of yet even
lower-confidence (111) and no-confidence (000) config-
urations, however, mis-speculation rates rise quickly for
these configurations, and performance suffered accordingly
for most experiments.
Figure
7 takes our best-performing configuration,
i.e., RE-211, and varies two key system parameters to
see their effect on the efficacy of memory renaming.
The first experiment, labeled FE/2, cuts the peak instruction
delivery bandwidth of the fetch stage in half.
This configuration can only deliver up to four instructions
from one basic block per cycle. For many of the
experiments, this cuts the average instruction delivery
B/W by nearly half. As shown in the results, the effects
of memory renaming are severely attenuated. With half
of the instruction delivery bandwidth the machine becomes
fetch bottlenecked for many of the experiments.
Once fetch bottlenecked, improving execution performance
with memory renaming does little to improve
the performance of the program. This is especially true
for the integer codes where fetch bandwidth is very limited
due to many small basic blocks.
The second experiment in Figure 7, labeled SF*3, increases
the store forward latency three-fold to three cy-
cles. The store forward latency is the minimum latency,
in cycles, between any two operations that communicate
a value to each other through memory. In the base-line
experiments of Figure 6, the minimum store forward
latency is one cycle. As shown in the graph, performance
improvements due to renaming rise sharply,
to as much as 41% for M88ksim and more than 16%
overall. This sharp rise is due to the increased latency
for communication through memory - this latency must
be tolerated, which consumes precious parallelism.
Figure 7: Program Performance with Varied System Configuration.
Renamed memory accesses, however, may communicate
through the register file in potentially zero cycles (via
bypass), resulting in significantly lower communication
latencies. Given the complexity of load/store queue
dataflow analysis and the requirement that it be performed
in one cycle for one-cycle store forwards (since
addresses in the computation may arrive in the previous
cycle), designers may soon resort to larger load/store
queues with longer latency store forwards. This trend
will make memory renaming more attractive.
A fitting conclusion to our evaluation is to grade
ourselves against the goal set forth in the beginning of
this paper: build a renaming mechanism that maps all
memory communication to the register communication
and synchronization infrastructure. It is this
hoisting of the memory communication into the registers
that permits more accurate and faster memory
communication. To see how successful we were at this
goal, we measured the breakdown of communication
handled by the load/store queue and the data cache.
Memory communications handled by the load/store
queue are handled "in flight", thus this communication
can benefit from renaming. How did we do? Figure
8 shows for each benchmark, the fraction of references
serviced by the load/store queue in the base configu-
ration, labeled Base, and the fraction of the references
serviced by the load/store queue in the pipeline with
renaming support, labeled RE-422. As shown in the
figure, a significant amount of the communication is
now being handled by the register communication in-
frastructure. Clearly, much of the short-term communication
is able to benefit from the renamer support.
Figure 8: Percent of Memory Dependencies Serviced by Load/Store Queue.
However, a number of the benchmarks, e.g., CC1,
Xlisp and Tomcatv, still have a non-trivial amount of
short-term communication that was not identified by
the dependence predictor. For these programs, the
execution benefits from the load/store queue's ability
to quickly compute load/store dependencies once addresses
are available. One goal of this work should be
to improve the performance of the dependence predictor
until virtually all short-term communication is captured
in the high-confidence predictions. Not only will
this continue to improve the performance of memory
communication, but once this goal has been attained,
the performance of the load/store queue will become
less important to overall program performance. As a result,
fewer resources will have to be devoted to load/store
queue design and implementation.
6 Conclusions
In this paper we have described a new mechanism
designed to improve memory performance. This was
accomplished by restructuring the processor pipeline to
incorporate a speculative value and dependence predictor
to enable load instructions to proceed much earlier
in the pipeline. We introduce a prediction confidence
mechanism based on store-load dependence history to
control speculation and a value file containing load and
store data which can be efficiently accessed without
performing complex address calculations. Simulation
results validate this approach to improving memory
performance, showing an average application speedup
of 16%.
We intend to extend this study in a number of ways.
The most obvious extension of this work is to identify
new mechanisms to improve the confidence mechanism
and increase the applicability of this scheme to more
load instructions. To do this we are exploring integrating
control flow information into the confidence mecha-
nism. Another architectural modification is to improve
the efficiency of squashing instructions affected by a
mis-prediction. This is starting to become important
in branch prediction, but becomes more important in
value prediction because of the lower confidence in this
mechanism. Also, the number of instructions that are
directly affected by a misprediction of a load value is
smaller than for a branch misprediction, allowing greater benefit
from an improvement in identifying only those instructions
that need to be squashed.
Acknowledgments
Finally, we would like to acknowledge the help of
Haitham Akkary, who offered numerous suggestions
which have greatly improved the quality of this work.
We are also grateful to the Intel Corporation for its
support of this research through the Intel Technology
for Education 2000 grant.
--R
Intel boosts pentium pro to 200 mhz.
Evaluating future microprocessors: The simplescalar tool set.
Optimization of instruction fetch mechanisms for high issue rates.
Reducing memory traffic with cregs.
MIPS RISC Architecture.
Value locality and load value prediction.
Dynamic speculation and synchronization of data dependences.
The performance potential of data dependence speculation and collapsing.
--TR
MIPS RISC architectures
Two-level adaptive training branch prediction
Reducing memory traffic with CRegs
Optimization of instruction fetch mechanisms for high issue rates
Zero-cycle loads
Value locality and load value prediction
The performance potential of data dependence speculation MYAMPERSANDamp; collapsing
Dynamic speculation and synchronization of data dependences
Look-Ahead Processors
--CTR
Adi Yoaz , Mattan Erez , Ronny Ronen , Stephan Jourdan, Speculation techniques for improving load related instruction scheduling, ACM SIGARCH Computer Architecture News, v.27 n.2, p.42-53, May 1999
Daniel Ortega , Eduard Ayguad , Mateo Valero, Dynamic memory instruction bypassing, Proceedings of the 17th annual international conference on Supercomputing, June 23-26, 2003, San Francisco, CA, USA
Ben-Chung Cheng , Daniel A. Connors , Wen-mei W. Hwu, Compiler-directed early load-address generation, Proceedings of the 31st annual ACM/IEEE international symposium on Microarchitecture, p.138-147, November 1998, Dallas, Texas, United States
George Z. Chrysos , Joel S. Emer, Memory dependence prediction using store sets, ACM SIGARCH Computer Architecture News, v.26 n.3, p.142-153, June 1998
Andreas Moshovos , Gurindar S. Sohi, Speculative Memory Cloaking and Bypassing, International Journal of Parallel Programming, v.27 n.6, p.427-456, 1999
Daniel Ortega , Mateo Valero , Eduard Ayguad, Dynamic memory instruction bypassing, International Journal of Parallel Programming, v.32 n.3, p.199-224, June 2004
Gokhan Memik , Mahmut T. Kandemir , Arindam Mallik, Load elimination for low-power embedded processors, Proceedings of the 15th ACM Great Lakes symposium on VLSI, April 17-19, 2005, Chicago, Illinois, USA
Jinsuo Zhang, The predictability of load address, ACM SIGARCH Computer Architecture News, v.29 n.4, September 2001
Matt T. Yourst , Kanad Ghose, Incremental Commit Groups for Non-Atomic Trace Processing, Proceedings of the 38th annual IEEE/ACM International Symposium on Microarchitecture, p.67-80, November 12-16, 2005, Barcelona, Spain
Anastas Misev , Marjan Gusev, Visual simulator for ILP dynamic OOO processor, Proceedings of the 2004 workshop on Computer architecture education: held in conjunction with the 31st International Symposium on Computer Architecture, June 19, 2004, Munich, Germany
Sanjay Jeram Patel , Marius Evers , Yale N. Patt, Improving trace cache effectiveness with branch promotion and trace packing, ACM SIGARCH Computer Architecture News, v.26 n.3, p.262-271, June 1998
Vlad Petric , Anne Bracy , Amir Roth, Three extensions to register integration, Proceedings of the 35th annual ACM/IEEE international symposium on Microarchitecture, November 18-22, 2002, Istanbul, Turkey
Dean M. Tullsen , John S. Seng, Storageless value prediction using prior register values, ACM SIGARCH Computer Architecture News, v.27 n.2, p.270-279, May 1999
Enric Morancho , Jos Mara Llabera , ngel Oliv, A comparison of two policies for issuing instructions speculatively, Journal of Systems Architecture: the EUROMICRO Journal, v.53 n.4, p.170-183, April, 2007
Jian Huang , David J. Lilja, Extending Value Reuse to Basic Blocks with Compiler Support, IEEE Transactions on Computers, v.49 n.4, p.331-347, April 2000
Andreas Moshovos , Gurindar S. Sohi, Read-after-read memory dependence prediction, Proceedings of the 32nd annual ACM/IEEE international symposium on Microarchitecture, p.177-185, November 16-18, 1999, Haifa, Israel
Glenn Reinman , Brad Calder , Dean Tullsen , Gary Tyson , Todd Austin, Classifying load and store instructions for memory renaming, Proceedings of the 13th international conference on Supercomputing, p.399-407, June 20-25, 1999, Rhodes, Greece
Daniel Ortega , Mateo Valero , Eduard Ayguad, A novel renaming mechanism that boosts software prefetching, Proceedings of the 15th international conference on Supercomputing, p.501-510, June 2001, Sorrento, Italy
Jos Gonzlez , Antonio Gonzlez, The potential of data value speculation to boost ILP, Proceedings of the 12th international conference on Supercomputing, p.21-28, July 1998, Melbourne, Australia
Tingting Sha , Milo M. K. Martin , Amir Roth, Scalable Store-Load Forwarding via Store Queue Index Prediction, Proceedings of the 38th annual IEEE/ACM International Symposium on Microarchitecture, p.159-170, November 12-16, 2005, Barcelona, Spain
Stephen Jourdan , Ronny Ronen , Michael Bekerman , Bishara Shomar , Adi Yoaz, A novel renaming scheme to exploit value temporal locality through physical register reuse and unification, Proceedings of the 31st annual ACM/IEEE international symposium on Microarchitecture, p.216-225, November 1998, Dallas, Texas, United States
Amir Roth , Andreas Moshovos , Gurindar S. Sohi, Dependence based prefetching for linked data structures, ACM SIGOPS Operating Systems Review, v.32 n.5, p.115-126, Dec. 1998
Gabriel Loh, A time-stamping algorithm for efficient performance estimation of superscalar processors, ACM SIGMETRICS Performance Evaluation Review, v.29 n.1, p.72-81, June 2001
S. Stone , Kevin M. Woley , Matthew I. Frank, Address-Indexed Memory Disambiguation and Store-to-Load Forwarding, Proceedings of the 38th annual IEEE/ACM International Symposium on Microarchitecture, p.171-182, November 12-16, 2005, Barcelona, Spain
Peng , Jih-Kwon Peir , Qianrong Ma , Konrad Lai, Address-free memory access based on program syntax correlation of loads and stores, IEEE Transactions on Very Large Scale Integration (VLSI) Systems, v.11 n.3, p.314-324, June
Tingting Sha , Milo M. K. Martin , Amir Roth, NoSQ: Store-Load Communication without a Store Queue, Proceedings of the 39th Annual IEEE/ACM International Symposium on Microarchitecture, p.285-296, December 09-13, 2006
Glenn Reinman , Brad Calder, Predictive techniques for aggressive load speculation, Proceedings of the 31st annual ACM/IEEE international symposium on Microarchitecture, p.127-137, November 1998, Dallas, Texas, United States
Craig B. Zilles , Gurindar S. Sohi, Understanding the backward slices of performance degrading instructions, ACM SIGARCH Computer Architecture News, v.28 n.2, p.172-181, May 2000
Pedro Marcuello , Antonio Gonzlez , Jordi Tubella, Speculative multithreaded processors, Proceedings of the 12th international conference on Supercomputing, p.77-84, July 1998, Melbourne, Australia
Sangyeun Cho , Pen-Chung Yew , Gyungho Lee, Access region locality for high-bandwidth processor memory system design, Proceedings of the 32nd annual ACM/IEEE international symposium on Microarchitecture, p.136-146, November 16-18, 1999, Haifa, Israel
V. Krishna Nandivada , Jens Palsberg, Efficient spill code for SDRAM, Proceedings of the international conference on Compilers, architecture and synthesis for embedded systems, October 30-November 01, 2003, San Jose, California, USA
Andreas Moshovos , Gurindar S. Sohi, Reducing Memory Latency via Read-after-Read Memory Dependence Prediction, IEEE Transactions on Computers, v.51 n.3, p.313-326, March 2002
Sangyeun Cho , Pen-Chung Yew , Gyungho Lee, Decoupling local variable accesses in a wide-issue superscalar processor, ACM SIGARCH Computer Architecture News, v.27 n.2, p.100-110, May 1999
J. Gonzlez , A. Gonzlez, Control-Flow Speculation through Value Prediction, IEEE Transactions on Computers, v.50 n.12, p.1362-1376, December 2001
Sangyeun Cho , Pen-Chung Yew , Gyungho Lee, A High-Bandwidth Memory Pipeline for Wide Issue Processors, IEEE Transactions on Computers, v.50 n.7, p.709-723, July 2001
Lieven Eeckhout , Koen De Bosschere, Quantifying behavioral differences between multimedia and general-purpose workloads, Journal of Systems Architecture: the EUROMICRO Journal, v.48 n.6-7, p.199-220, January | address calculation;heap segment;instruction-level parallelism;stores;storage allocation;performance;data fetching;execution time;data value speculation;data dependence speculation;instruction loading;memory renaming;memory references;processor pipeline;memory communication;memory segments;delays;memory access latency;register access techniques;accuracy |
The Predictability of Data Values

Abstract: The predictability of data values is studied at a fundamental level. Two basic predictor models are defined: Computational predictors perform an operation on previous values to yield predicted next values; examples we study are stride value prediction (which adds a delta to a previous value) and last value prediction (which performs the trivial identity operation on the previous value). Context based predictors match recent value history (context) with previous value history and predict values based entirely on previously observed patterns. To understand the potential of value prediction we perform simulations with unbounded prediction tables that are immediately updated using correct data values. Simulations of integer SPEC95 benchmarks show that data values can be highly predictable. Best performance is obtained with context based predictors; overall prediction accuracies are between 56% and 91%. The context based predictor typically has an accuracy about 20% better than the computational predictors (last value and stride). Comparison of context based prediction and stride prediction shows that the higher accuracy of context based prediction is due to relatively few static instructions giving large improvements; this suggests the usefulness of hybrid predictors. Among different instruction types, predictability varies significantly. In general, load and shift instructions are more difficult to predict correctly, whereas add instructions are more predictable.

1 Introduction
There is a clear trend in high performance processors toward
performing operations speculatively, based on predic-
tions. If predictions are correct, the speculatively executed
instructions usually translate into improved performance.
Although program execution contains a variety of information
that can be predicted, conditional branches have received
the most attention. Predicting conditional branches
provides a way of avoiding control dependences and offers
a clear performance advantage. Even more prevalent
than control dependences, however, are data dependences.
Virtually every instruction depends on the result of some
preceding instruction. As such, data dependences are often
thought to present a fundamental performance barrier.
However, data values may also be predicted, and operations
can be performed speculatively based on these data
predictions.
An important difference between conditional branch
prediction and data value prediction is that data are taken
from a much larger range of values. This would appear to
severely limit the chances of successful prediction. How-
ever, it has been demonstrated recently [1] that data values
exhibit "locality" where values computed by some instructions
tend to repeat a large fraction of the time.
We argue that establishing predictability limits for program
values is important for determining the performance
potential of processors that use value prediction. We believe
that doing so first requires understanding the design
space of value predictors models. Consequently, the goals
of this paper are twofold. Firstly, we discuss some of the
major issues affecting data value prediction and lay down
a framework for studying data value prediction. Secondly,
for important classes of predictors, we use benchmark programs
to establish levels of value predictability. This study
is somewhat idealized: for example, predictor costs are ignored
in order to more clearly understand limits of data
predictability. Furthermore, the ways in which data prediction
can be used in a processor microarchitecture are not
within the scope of this paper, so that we can concentrate
in greater depth on the prediction process, itself.
1.1 Classification of Value Sequences
The predictability of a sequence of values is a function
of both the sequence itself and the predictor used to predict
the sequence. Although it is beyond the scope of this paper
to study the actual sources of predictability, it is useful for
our discussion to provide an informal classification of data
sequences. This classification is useful for understanding
the behavior of predictors in later discussions. The following
classification contains simple value sequences that can
also be composed to form more complex sequences. They
are best defined by giving examples:
Constant: 5 5 5 5 5
Stride: 1 2 3 4 5
Non-stride: 28 -13 -99 107 23 456.
Constant sequences are the simplest, and result from
instructions that repeatedly produce the same result. Lipasti
and Shen show that this occurs surprisingly often, and
forms the basis for their work reported in [1]. A stride sequence
has elements that differ by some constant delta. For
the example above, the stride is one, which is probably the
most common case in programs, but other strides are pos-
sible, including negative strides. Constant sequences can
be considered stride sequences with a zero delta. A stride
sequence might appear when a data structure such as an array
is being accessed in a regular fashion; loop induction
variables also have a stride characteristic.
The non-stride category is intended to include all other
sequences that do not belong to the constant or stride cat-
egory. This classification could be further divided, but we
choose not to do so. Non-strides may occur when a sequence
of numbers is being computed and the computation
is more complex than simply adding a constant. Traversing
a linked list would often produce address values that have
a non-stride pattern.
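A minimal sketch of how these three categories can be distinguished mechanically (the category names follow the text; the function itself is our illustration):

```python
def classify(seq):
    """Classify a value sequence as 'constant', 'stride', or 'non-stride'.
    A constant sequence is simply a stride sequence with a zero delta."""
    deltas = {b - a for a, b in zip(seq, seq[1:])}
    if deltas == {0}:
        return 'constant'
    if len(deltas) == 1:
        return 'stride'
    return 'non-stride'
```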
Also very important are sequences formed by composing
stride and non-stride sequences with themselves. Repeating
sequences would typically occur in nested loops
where the inner loop produces either a stride or a non-stride
sequence, and the outer loop causes this sequence to be repeated.
Examination of the above sequences leads naturally to
two types of prediction models that are the subject of discussion
throughout the remainder of this paper:
Computational predictors that make a prediction by computing
some function of previous values. An example of a
computational predictor is a stride predictor. This predictor
adds a stride to the previous value.
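As a sketch, the two computational predictors discussed here can be written as follows, keeping state per static instruction (the table machinery is omitted; the class shapes are our illustration):

```python
class LastValuePredictor:
    # Identity operation on the previous value.
    def __init__(self):
        self.last = None
    def predict(self):
        return self.last
    def update(self, value):
        self.last = value

class StridePredictor:
    # Adds the most recently observed delta to the previous value.
    def __init__(self):
        self.last = None
        self.stride = 0
    def predict(self):
        return None if self.last is None else self.last + self.stride
    def update(self, value):
        if self.last is not None:
            self.stride = value - self.last
        self.last = value
```

On the stride-one sequence 1 2 3, the stride predictor next predicts 4, while the last value predictor would predict 3.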
Context based predictors learn the value(s) that follow a
particular context - a finite ordered sequence of values - and
predict one of the values when the same context repeats.
This enables the prediction of any repeated sequence, stride
or non-stride.
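A context based predictor can be sketched as a table mapping the most recent k values to the value that last followed that context; the order k and the single-value-per-context policy here are illustrative assumptions, not the paper's final design.

```python
from collections import deque

class ContextPredictor:
    """Order-k context based predictor: remember the value that last
    followed each observed k-value context, and predict that value
    when the same context recurs."""
    def __init__(self, k=2):
        self.k = k
        self.history = deque(maxlen=k)  # the current context
        self.table = {}                 # context tuple -> next value seen

    def predict(self):
        return self.table.get(tuple(self.history))

    def update(self, value):
        if len(self.history) == self.k:
            self.table[tuple(self.history)] = value
        self.history.append(value)
```

After one full pass over a repeated non-stride sequence such as 1 -13 -99 7, every subsequent element is predicted correctly, which no stride predictor could achieve.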
1.2 Related Work
In [1], it was reported that data values produced by
instructions exhibit "locality" and as a result can be pre-
dicted. The potential for value predictability was reported
in terms of "history depth", that is, how many times a value
produced by an instruction repeats when checked against
the most recent n values. A pronounced difference is observed
between the locality with history depth 1 and history
depth 16. The mechanism proposed for prediction, how-
ever, exploits the locality of history depth 1 and is based on
predicting that the most recent value will also be the next.
In [1], last value prediction was used to predict load values
and in a subsequent work to predict all values produced by
instructions and written to registers [2].
Address prediction has been used mainly for data
prefetching to tolerate long memory latency [3, 4, 5], and
has been proposed for speculative execution of load and
store instructions [6, 7]. Stride prediction for values was
proposed in [8] and its prediction and performance potential
was compared against last value prediction.
Value prediction can draw from a wealth of work on
the prediction of control dependences [9, 10, 11]. The majority
of improvements in the performance of control flow
predictors has been obtained by using correlation. The correlation
information that has been proposed includes local
and global branch history [10], path address history
[11, 12, 13], and path register contents [14]. An interesting
theoretical observation is the resemblance of the predictors
used for control dependence prediction to the prediction
models for text compression [15]. This is an important observation
because it re-enforces the approach used for control
flow prediction and also suggests that compression-like
methods can also be used for data value prediction.
A number of interesting studies report on the importance
of predicting and eliminating data dependences.
Moshovos [16] proposes mechanisms that reduce misspeculation
by predicting when dependences exist between
store and load instructions. The potential of data dependence
elimination using prediction and speculation in combination
with collapsing was examined in [17]. Elimination
of redundant computation is the theme of a number of
software/hardware proposals [18, 19, 20]. These schemes
are similar in that they cache the input and output values of
a function; when the same inputs are detected, the cached
output is used without re-executing the function.
Virtually all proposed schemes perform predictions based
on previous architected state and values. Notable exceptions
to this are the schemes proposed in [6], where it is
predicted that a fetched load instruction has no dependence
and the instruction is executed "early" without dependence
checking, and in [21], where it is predicted that the operation
required to calculate an effective address using two
operands is a logical or instead of a binary addition.
In more theoretical work, Hammerstrom [22] used information
theory to study the information content (en-
tropy) of programs. His study of the information content of
address and instruction streams revealed a high degree of
redundancy. This high degree of redundancy immediately
suggests predictability.
1.3 Paper Overview
The paper is organized as follows: in Section 2, different
data value predictors are described. Section 3 discusses
the methodology used for data prediction simulations. The
results obtained are presented and analyzed in Section 4.
We conclude with suggestions for future research in Section
5.
2 Data Value Prediction Models
A typical data value predictor takes microarchitecture
state information as input, accesses a table, and produces
a prediction. Subsequently, the table is updated with state
information to help make future predictions. The state information
could consist of register values, PC values, instruction
fields, control bits in various pipeline stages, etc.
The variety and combinations of state information are
almost limitless. Therefore, in this study, we restrict ourselves
to predictors that use only the program counter value
of the instruction being predicted to access the prediction
table(s). The tables are updated using data values produced
by the instruction - possibly modified or combined with
other information already in the table. These restrictions
define a relatively fundamental class of data value predic-
tors. Nevertheless, predictors using other state information
deserve study and could provide a higher level of predictability
than is reported here.
For the remainder of this paper, we further classify
data value predictors into two types - computational and
context-based. We describe each in detail in the next two
subsections.
2.1 Computational Predictors
Computational predictors make predictions by performing
some operation on previous values that an instruction
has generated. We focus on two important members of this
class.
Last Value Predictors perform a trivial computational
operation: the identity function. In its simplest form, if the
most recent value produced by an instruction is v then the
prediction for the next value will also be v. However, there
are a number of variants that modify replacement policies
based on hysteresis. An example of a hysteresis mechanism
is a saturating counter that is associated with each
table entry. The counter is incremented/decremented on
prediction success/failure with the value held in the table
replaced only when the count is below some threshold. Another
hysteresis mechanism does not change its prediction
to a new value until the new value occurs a specific number
of times in succession. A subtle difference between the
two forms of hysteresis is that the former changes to a new
prediction following incorrect behavior (even though that
behavior may be inconsistent), whereas the latter changes
to a new prediction only after it has been consistently observed.
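As a concrete illustration, the saturating-counter form of hysteresis might be organized as follows. This is a sketch, not a description of any proposed hardware; the dictionary table, threshold, and counter width are illustrative assumptions:

```python
class LastValuePredictor:
    """Last value predictor with saturating-counter hysteresis (sketch).
    The table maps an instruction's PC to [value, counter]; the stored value
    is replaced only when the counter falls below the threshold."""
    def __init__(self, threshold=2, max_count=3):
        self.table = {}
        self.threshold = threshold
        self.max_count = max_count

    def predict(self, pc):
        entry = self.table.get(pc)
        return entry[0] if entry else None

    def update(self, pc, actual):
        entry = self.table.get(pc)
        if entry is None:
            self.table[pc] = [actual, 0]
            return
        if entry[0] == actual:
            entry[1] = min(entry[1] + 1, self.max_count)  # prediction success
        else:
            entry[1] -= 1                                 # prediction failure
            if entry[1] < self.threshold:
                entry[0] = actual   # replace only when confidence is low
                entry[1] = 0
```

With a saturated counter, a single deviating value decrements the counter but does not displace the stored value, which matches the "incorrect but possibly inconsistent behavior" distinction drawn above.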
Stride Predictors in their simplest form predict the next
value by adding to the most recent value the difference
between the two most recent values produced by an
instruction. That is, if v(n-1) and v(n-2) are the two most
recent values, then the predictor computes v(n) = v(n-1) + (v(n-1) - v(n-2)).
As with the last value predictors, there are important
variations that use hysteresis. In [7] the stride
is only changed if a saturating counter that is incre-
mented/decremented on success/failure of the predictions
is below a certain threshold. This reduces the number of
mispredictions in repeated stride sequences from two per
repeated sequence to one. Another policy, the two-delta
method, was proposed in [6]. In the two-delta method, two
strides are maintained. One stride (s1) is always updated
by the difference between the two most recent values,
whereas the other (s2) is the stride used for computing
the predictions. When stride s1 occurs twice in a row then
it is used to update the prediction stride s2. The two-delta
strategy also reduces mispredictions to one per iteration for
repeated stride sequences and, in addition, only changes
the stride when the same stride occurs twice - instead of
changing the stride following mispredictions.
Other Computational Predictors using more complex
organizations can be conceived. For example, one could
use two different strides, an "inner" one and an "outer"
one - typically corresponding to loop nests - to eliminate
the mispredictions that occur at the beginning of repeating
stride sequences. This thought process illustrates a significant
limitation of computational prediction: the designer
must anticipate the computation to be used. One could
carry this to ridiculous extremes. For example, one could
envision a Fibonacci series predictor, and given a program
that happens to compute a Fibonacci series, the predictor
would do very well.
Going down this path would lead to large hybrid predictors
that combine many special-case computational predictors
with a "chooser" - as has been proposed for conditional
branches in [23, 24]. While hybrid prediction for data values
is in general a good idea, a potential pitfall is that it
may yield an ever-escalating collection of computational
predictors, each of which predicts a diminishing number
of additional values not caught by the others.
In this study, we focus on last value and stride methods
as primary examples of computational predictors. We
also consider hybrid predictors involving these predictors
and the context based predictors to be discussed in the next
section.
2.2 Context Based Predictors
Context based predictors attempt to "learn" values that
follow a particular context - a finite ordered sequence of
previous values - and predict one of the values when the
same context repeats. An important type of context based
predictors is derived from finite context methods used in
text compression [25].
Finite Context Method Predictors (fcm) rely on
mechanisms that predict the next value based on a finite
number of preceding values. An order k fcm predictor
uses k preceding values. Fcms are constructed with counters
that count the occurrences of a particular value immediately
following a certain context (pattern). Thus for
each context there must be, in general, as many counters
as values that are found to follow the context. The predicted
value is the one with the maximum count. Figure 1
shows fcm models of different orders and predictions for
an example sequence.
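The counting scheme behind an order-k fcm predictor can be sketched as follows; the unbounded dictionary tables mirror the idealized setting used later in the simulations, not a hardware design:

```python
from collections import defaultdict, deque

class FCMPredictor:
    """Order-k finite context method predictor (idealized sketch).
    For each instruction (keyed by PC), the last k values form the context;
    a counter records how often each value has followed each context, and
    the prediction is the value with the maximum count."""
    def __init__(self, order):
        self.order = order
        self.history = defaultdict(deque)                    # pc -> last k values
        self.counts = defaultdict(lambda: defaultdict(int))  # (pc, ctx) -> value -> count

    def predict(self, pc):
        ctx = tuple(self.history[pc])
        if len(ctx) < self.order:
            return None                       # no full context observed yet
        table = self.counts[(pc, ctx)]
        return max(table, key=table.get) if table else None

    def update(self, pc, actual):
        h = self.history[pc]
        if len(h) == self.order:
            self.counts[(pc, tuple(h))][actual] += 1
        h.append(actual)
        if len(h) > self.order:
            h.popleft()
```

Running an order 2 instance over the example sequence of Figure 1 ("a a a b c a a a b c a a a") leaves the context ('a', 'a'), which 'a' has followed three times and 'b' twice, so the predictor outputs 'a', in agreement with the figure.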
In an actual implementation where it may be infeasible
to maintain exact value counts, smaller counters may be
used. The use of small counters comes from the area of
text compression. With small counters, when one counter
reaches the maximum count, all counters for the same context
are reset by half. Small counters provide an advantage
if heavier weighting should be given to more recent history
instead of the entire history.
In general, n different fcm predictors of orders 0 to n-
1 can be used for predicting the next value of a sequence,
with the highest order predictor that has a context match
being used to make the prediction. The combination of
more than one prediction model is known as blending [25].
There are a number of variations of blending algorithms,
depending on the information that is updated. Full blending
updates all contexts, and lazy exclusion selects the prediction
with the longer context match and only updates the
counts for the predictions with the longer match or higher.
Other variations of fcm predictors can be devised by
reducing the number of values that are maintained for a
given context. For example, only one value per context
might be maintained along with some update policy. Such
policies can be based on hysteresis-type update policies as
discussed above for last value and stride prediction.
Correlation predictors used for control dependence prediction
strongly resemble context based prediction. As far
as we know, context based prediction has not been considered
for value prediction, though the last value predictor
can be viewed as a 0th order fcm with only one prediction
maintained per context.
[Figure 1: Finite Context Models. Models of order 0 through 3 built from the example sequence "a a a b c a a a b c a a a ?", showing for each context the frequency of the next symbol; every model's prediction for the next value is 'a'.]

2.3 An Initial Analysis

At this point, we briefly analyze and compare the proposed
predictors using the simple pattern sequences shown
in Section 1.1. This analysis highlights important issues as
well as advantages and disadvantages of the predictors to
be studied. As such, they can provide a basis for analyzing
quantitative results given in the following sections.
We informally define two characteristics that are important
for understanding prediction behavior. One is the
Learning Time (LT) which is the number of values that
have to be observed before the first correct prediction. The
second is the Learning Degree (LD) which is the percentage
of correct predictions following the first correct prediction.
We quantify these two characteristics for the classes of
sequences given earlier in Section 1.1. For the repeating
sequences, we associate a period (p), the number of values
between repetitions, and frequency, the number of times
a sequence is repeated. We assume repeating sequences
where p is fixed. The frequency measure captures the
finiteness of a repeating sequence. For context predictors,
the order (o) of a predictor influences the learning time.
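Under these informal definitions, LT and LD could be computed from a trace of per-prediction outcomes roughly as follows (a sketch; `learning_stats` is a hypothetical helper, with LT approximated by the position of the first correct prediction):

```python
def learning_stats(outcomes):
    """Compute (LT, LD) from a list of per-prediction outcomes (True = correct).
    LT is approximated by the index of the first correct prediction; LD is the
    percentage of correct predictions made after that point."""
    try:
        first = outcomes.index(True)
    except ValueError:
        return len(outcomes), 0.0   # the predictor never learned
    rest = outcomes[first + 1:]
    degree = 100.0 * sum(rest) / len(rest) if rest else 100.0
    return first, degree

lt, ld = learning_stats([False, False, True, True, False, True, True])
# lt == 2, ld == 75.0
```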
Table 1 summarizes how the different predictors perform
for the basic value sequences. Note that the stride
predictor uses hysteresis for updates, so it gets only one incorrect
prediction per iteration through a sequence. A row
of the table with a "-" indicates that the given predictor is
not suitable for the given sequence, i.e., its performance is
very low for that sequence.
As illustrated in the table, last value prediction is only
useful for constant sequences - this is obvious. Stride prediction
does as well as last value prediction for constant
sequences because a constant sequence is essentially zero
stride. The fcm predictors also do very well on constant
sequences, but an order k predictor must see a length k
sequence before it gets matches in the table (unless some
form of blending is used).
                Prediction Model
Sequence    Last Value   Stride   FCM

Table 1: Behavior of various Prediction Models for Different Value Sequences

For (non-repeating) stride sequences, only the stride
predictor does well; it has a very short learning time and
then achieves a 100% prediction rate. The fcm predictors
cannot predict non-repeating sequences because they rely
on repeating patterns.
For repeating stride sequences, both stride and fcm predictors
do well. The stride predictor has a shorter learning
time, and once it learns, it only gets a misprediction each
time the sequence begins to repeat. On the other hand,
the fcm predictor requires a longer learning time - it must
see the entire sequence before it starts to predict correctly,
but once the sequence starts to repeat, it gets 100% accuracy
(Figure 2). This example points out an important
tradeoff between computational and context based predic-
tors. The computational predictor often learns faster - but
the context predictor tends to learn "better" when repeating
sequences occur.
Finally, for repeating non-stride sequences, only the
fcm predictor does well. And the flexibility this provides
is clearly the strong point of fcm predictors. Returning to
our Fibonacci series example - if there is a sequence containing
a repeating portion of the Fibonacci series, then an
fcm predictor will naturally begin predicting it correctly
following the first pass through the sequence.
Of course, in reality, value sequences can be complex
combinations of the simple sequences in Section 1.1, and
a given program can produce about as many different sequences
as there are instructions being predicted. Consequently,
in the remainder of the paper, we use simulations to get a
more realistic idea of predictor performance for programs.
3 Simulation Methodology
We adopt an implementation-independent approach for
studying predictability of data dependence values. The reason
for this choice is to remove microarchitecture and other
implementation idiosyncrasies in an effort to develop a basic
understanding of predictability. Hence, these results
can best be viewed as bounds on performance; it will take
additional engineering research to develop realistic implementations.

[Figure 2: Computational vs Context Based Prediction. On a repeating sequence, the computational predictor reaches steady state quickly but repeats the same mistake once per period, while the context based predictor must first learn a full period before reaching a steady state without mispredictions.]
We study the predictability of instructions that write results
into general purpose registers (i.e. memory addresses,
stores, jumps and branches are not considered). Prediction
was done with no table aliasing; each static instruction was
given its own table entry. Hence, table sizes are effectively
unbounded. Finally, prediction tables are updated immediately
after a prediction is made, unlike the situation in
practice where it may take many cycles for the actual data
value to be known and available for prediction table updates.
We simulate three types of predictors: last value prediction
(l) with an always-update policy (no hysteresis),
stride prediction using the 2-delta method (s2), and a finite
context method (fcm) that maintains exact counts for
each value that follows a particular context and uses the
blending algorithm with lazy exclusion, described in Section
2. Fcm predictors are studied for orders 1, 2 and 3. To
form a context for the fcm predictor we use full concatenation
of history values so there is no aliasing when matching
contexts.
Trace driven simulation was conducted using the Simplescalar
toolset [26] for the integer SPEC95 benchmarks
shown in Table 2.¹ The benchmarks were compiled using
the simplescalar compiler with -O3 optimization. Integer
benchmarks were selected because they tend to have less
data parallelism and may therefore benefit more from data
predictions.
For collecting prediction results, instruction types were
grouped into categories as shown in Table 3.

¹ For ijpeg the simulations used the reference flags with the following changes: compression.quality 45 and compression.smoothing factor 45.

Benchmark   Input Flags    Dynamic Instr. (mil)   Predicted (mil)
compress    30000 e        8.2                    5.8 (71%)
gcc         gcc.i          203                    137 (68%)
ijpeg       specmun.ppm    129                    108 (84%)
m88k        ctl.raw        493                    345 (70%)
xlisp       7 queens       202                    125 (62%)

Table 2: Benchmarks Characteristics

Instruction Types        Code
Addition, Subtraction    AddSub
Loads                    Loads
And, Or, Xor, Nor        Logic
Shifts                   Shift
Compare and Set          Set
Multiply and Divide      MultDiv
Load immediate           Lui
Floating, Jump, Other    Other

Table 3: Instruction Categories

The abbreviations shown after each group will be used subsequently
when results are presented. The percentage of predicted
instructions in the different benchmarks ranged between
62% and 84%. Recall that some instructions like stores,
branches and jumps are not predicted. A breakdown of the
static count and dynamic percentages of predicted instruction
types is shown in Tables 4-5. The majority of predicted
values are the results of addition and load instructions.
We collected results for each instruction type. However,
we do not discuss results for the other, multdiv and lui instruction
types due to space limitations. In the benchmarks
we studied, the multdiv instructions are not a significant
contributor to dynamic instruction count, and the lui and
"other" instructions rarely generate more than one unique
value and are over 95% predictable by all predictors. We
note that the effect of these three types of instructions is
included in the calculations for the overall results.
For averaging we used arithmetic mean, so each benchmark
effectively contributes the same number of total predictions.
4 Simulation Results
4.1 Predictability
Figure 3 shows the overall predictability for the selected
benchmarks, and Figures 4-7 show results for the important
instruction types.

Type     com    gcc     go     ijpe   m88k   perl   xlis
Loads    686    29138   9929   3645   2215   3855   1432
Logic    149    2600    215    278    674    460    157
MultDi   19     313     196    222    77     26     25
Other    108    5848    1403   517    482    778    455

Table 4: Predicted Instructions - Static Count

Type     com    gcc    go     ijpe   m88k   perl   xlis
Loads    20.5   38.6   26.2   21.4   24.8   43.1   48.6
Logic    3.1    3.1    0.5    1.9    5.0    3.1    3.4
Shift    17.4   7.7    13.3   16.4   3.2    8.2    3.2
Set      7.4    5.4    4.9    4.2    15.2   5.6    3.2
Lui      3.3    3.7    11.4   0.2    6.9    2.4    0.8
Other    5.7    2.1    1.3    0.3    2.1    3.3    4.8

Table 5: Predicted Instructions - Dynamic (%)

From the figures we can draw a number
of conclusions. Overall, last value prediction is less accurate
than stride prediction, and stride prediction is less
accurate than fcm prediction. Last value prediction varies
in accuracy from about 23% to 61% with an average of
about 40%. This is in agreement with the results obtained
in [2]. Stride prediction provides accuracy of between 38%
and 80% with an average of about 56%. Fcm predictors of
orders 1, 2, and 3 all perform better than stride prediction;
and the higher the order, the higher the accuracy. The order
3 predictor is best and gives accuracies of between 56%
and over 90% with an average of 78%. For the three fcm
predictors studied, improvements diminish as the order is
increased. In particular, we observe that for every additional
value in the context the performance gain is halved.
The effect on predictability with increasing order is examined
in more detail in Section 4.4. Performance of the
stride and last value predictors varies significantly across
different instruction types for the same benchmark. The
performance of the fcm predictors varies less significantly
across different instruction types for the same benchmark.
This reflects the flexibility of the fcm predictors - they perform
well for any repeating sequence, not just strides.
In general both stride and fcm prediction appear to have
higher predictability for add/subtracts than loads. Logical
instructions also appear to be very predictable especially
by the fcm predictors. Shift instructions appear to be the
most difficult to predict.
[Figure 3: Prediction Success for All Instructions. Percentage of correct predictions per benchmark for each predictor.]

Stride prediction does particularly well for add/subtract
instructions. But for non-add/subtract instructions the performance
of the stride predictor is close to last value prediction.
This indicates that when the operation of a computational
predictor matches the operation of the instruction
(e.g. addition), higher predictability can be expected. This
suggests new computational predictors that better capture
the functionality of non-add/subtract instructions could be
useful. For example, for shifts a computational predictor
might shift the last value according to the last shift distance
to arrive at a prediction. This approach would tend to lead
to hybrid predictors, however, with a separate component
predictor for each instruction type.
4.2 Correlation of Correctly Predicted Sets
In effect, the results in the previous section essentially
compare the sizes of the sets of correctly predicted values.
It is also interesting to consider relationships among the
specific sets of correctly predicted values. Primarily, these
relationships suggest ways that hybrid predictors might be
constructed - although the actual construction of hybrid
predictors is beyond the scope of this paper.
The predicted set relationships are shown in Figure 8.
Three predictors are used: last value, stride (delta-2), and
fcm (order 3). All subsets of predictors are represented.
Specifically: l is the fraction of predictions for which only
the last value predictor is correct; s and f are similarly defined
for the stride and fcm predictors respectively; ls is the
fraction of predictions for which both the last value and the
stride predictors are correct but the fcm predictor is not; lf
and sf are similarly defined; lsf is the fraction of predictions
for which all predictors are correct; and np is the fraction
for which none of the predictors is correct.
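These subset fractions can be computed from per-prediction correctness traces of the three predictors; `predictor_overlap` below is a hypothetical helper, and the boolean inputs are made-up stand-ins for real traces:

```python
from collections import Counter

def predictor_overlap(l_ok, s_ok, f_ok):
    """Classify each prediction by the subset of predictors that got it right.
    Inputs are parallel boolean sequences for last value (l), stride (s),
    and fcm (f); returns the fraction of predictions in each subset, with
    'np' meaning no predictor was correct."""
    labels = Counter()
    for l, s, f in zip(l_ok, s_ok, f_ok):
        name = "".join(c for c, ok in zip("lsf", (l, s, f)) if ok) or "np"
        labels[name] += 1
    n = len(l_ok)
    return {k: v / n for k, v in labels.items()}

fracs = predictor_overlap([True, False, False, True],
                          [True, True,  False, True],
                          [True, True,  True,  False])
# each of "lsf", "sf", "f", "ls" accounts for 0.25 of the predictions here
```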
[Figure 4: Prediction Success for Add/Subtract Instructions.]
[Figure 5: Prediction Success for Load Instructions.]
[Figure 6: Prediction Success for Logic Instructions.]
[Figure 7: Prediction Success for Shift Instructions.]
[Figure 8: Contribution of different Predictors. Fraction of predictions captured by each subset of the predictors (l, s, f, ls, lf, sf, lsf, np) for each instruction category.]

In the figure, results are averaged over all benchmarks,
but the qualitative conclusions are similar for each of the
individual benchmarks. Overall, Figure 8 can be briefly
summarized:

• A small number, close to 18%, of values are not predicted correctly by any model.
• A large portion, around 40%, of correct predictions is captured by all predictors.
• A significant fraction, over 20%, of correct predictions is only captured by fcm.
• Stride and last value prediction capture less than 5% of the correct predictions that fcm misses.
The above confirms that data values are very pre-
dictable. And it appears that context based prediction is
necessary for achieving the highest levels of predictabil-
ity. However, almost 60% of the correct predictions are
also captured by the stride predictor. Assuming that context
based prediction is the more expensive approach, this
suggest that a hybrid scheme might be useful for enabling
high prediction accuracies at lower cost. That is, one
should try to use a stride predictor for most predictions,
and use fcm prediction to get the remaining 20%.
Another conclusion is that last value prediction adds
very little to what the other predictors achieve. So, if either
stride or fcm prediction is implemented, there is no
point in adding last value prediction to a hybrid predictor.
The important classes of load and add instructions yield
results similar to the overall average. Finally, we note that
for non-add/subtract instructions the contribution of stride
prediction is smaller; this is likely due to the earlier observation
that stride prediction does not match the functionality
of other instruction types. This suggests a hybrid
predictor based on instruction types.

[Figure 9: Cumulative Improvement of FCM over Stride. Normalized cumulative improvement versus the percentage of static instructions for which fcm does better than stride, shown per instruction type.]
Proceeding along the path of a hybrid fcm-stride pre-
dictor, one reasonable approach would be to choose among
the two component predictors via the PC address of the instruction
being predicted. This would appear to work well
if the performance advantage of the fcm predictor is due to
a relatively small number of static instructions.
To determine if this is true, we first constructed a list
of static instructions for which the fcm predictor gives better
performance. For each of these static instructions, we
determined the difference in prediction accuracy between
fcm and stride. We then sorted the static instructions in
descending order of improvement. Then, in Figure 9 we
graph the cumulative fraction of the total improvement versus
the accumulated percentage of static instructions. The
graph shows that overall, about 20% of the static instructions
account for about 97% of the total improvement of
fcm over stride prediction. For most of individual instruction
types, the result is similar, with shifts showing slightly
worse performance.
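The accounting behind this curve can be sketched as follows; the per-instruction gains passed in are hypothetical, standing in for measured fcm-minus-stride accuracy differences:

```python
def cumulative_improvement(gains):
    """Sort positive per-instruction gains in descending order and return, at
    each position, the cumulative fraction of the total improvement, as in
    the construction of Figure 9."""
    gains = sorted((g for g in gains if g > 0), reverse=True)
    total = sum(gains)
    curve, cum = [], 0.0
    for g in gains:
        cum += g
        curve.append(cum / total)
    return curve

curve = cumulative_improvement([0.5, 0.2, 0.1, 0.1, 0.05, 0.05])
# the top two of six instructions already cover 70% of the improvement
```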
The results do suggest that improvements due to context
based prediction are mainly due to a relatively small
fraction of static instructions. Hence, a hybrid fcm-stride
predictor with choosing seems to be a good approach.
4.3 Value Characteristics
At this point, it is clear that context based predictors
perform well, but may require large tables that store history
values. We assume unbounded tables in our study,
but when real implementations are considered, of course
this will not be possible. To get a handle on this issue, we
study the value characteristics of instructions.

[Figure 10: Values and Instruction Behavior. Fractions of static and dynamic instructions grouped by the number of unique values generated (1, 4, 64, 1024, 16384, >65536).]

In particular, we report on the number of unique values generated
by predicted instructions. The overall numbers of different
values could give a rough indication of the numbers of
values that might have to be stored in a table.
In the left half of Figure 10, we show the number of different
values produced by percentages of static instructions
(an s prefix). In the right half, we determine the fractions
of dynamic instructions (a d prefix) that correspond to each
of the static categories. From the figure, we observe:
ffl A large number, 50%, of static instructions generate
only one value.
ffl The majority of static instructions, 90%, generate
fewer than 64 values.
ffl The majority, 50%, of dynamic instructions correspond
to static instructions that generate fewer than
values.
ffl Over 90% of the dynamic instructions are due to static
instructions that generate at most 4096 unique values.
ffl The number of values generated varies among instruction
types. In general add/subtract and load instructions
generate more values as compared with logic
and shift operations.
ffl The more frequently an instruction executes the more
values it generates.
The above suggests that a relatively small number of values
would be required to predict correctly the majority of
dynamic instructions using context based prediction - a
positive result.
From looking at individual benchmark results (not
shown) there appears to be a positive correlation between
programs that are more difficult to predict and the programs
that produce more values. For example, the highly
predictable m88ksim has many more instructions that produce
few values as compared with the less predictable gcc
and go. This would appear to be an intuitive result, but
there may be cases where it does not hold; for example if
values are generated in a fashion that is predictable with
computational predictors or if a small number of values
occur in many different sequences.
4.4 Sensitivity Experiments for Context Based
Prediction
In this section we discuss the results of experiments that
illustrate the sensitivity of fcm predictors to input data and
predictor order. For these experiments, we focus on the gcc
benchmark and report average correct predictions among
all instruction types.
Sensitivity to input data: We studied the effects of different
input files and flags on correct prediction. The fcm predictor
used in these experiments was order 2. The prediction
accuracy and the number of predicted instructions for
the different input files are shown in Table 6. The fraction of
correct predictions shows only small variations across the
different input files. We note that these results are for unbounded
tables, so aliasing effects caused by different data
set sizes will not appear. This may not be the case with
fixed table sizes.
In Table 7 we show the predictability for gcc for the same
input file, but with different compilation flags, again using
an order 2 fcm predictor. The results again indicate that
variations are very small.
Sensitivity to the order: experiments were performed for
increasing order for the same input file (gcc.i) and flags.
The results for the different orders are shown in Figure
11. The experiment suggests that higher order means better
performance but returns are diminishing with increasing
order. The above also indicate that few previous values are
required to predict well.
5 Conclusions
We considered representatives from two classes of prediction
models: (i) computational and (ii) context based.
Simulations demonstrate that values are potentially highly
predictable. Our results indicate that context based prediction
outperforms previously proposed computational prediction
(stride and last value) and that if high prediction
correctness is desired context methods probably need to be
used either alone or in a hybrid scheme. The obtained results
also indicate that the performance of computational
prediction varies between instruction types indicating that
File       Predictions (mil)   Correct (%)
recog.i    192                 78.6
stmt.i     372                 77.8

Table 6: Sensitivity of 126.gcc to Different Input Files

Flags       Predictions (mil)   Correct (%)
none
ref flags   137                 77.1

Table 7: Sensitivity of 126.gcc to Input Flags with input file gcc.i

[Figure 11: Sensitivity of 126.gcc to the Order with input file gcc.i. Prediction accuracy versus fcm order.]
its performance can be further improved if the prediction
function matches the functionality of the predicted instruc-
tion. Analysis of the improvements of context prediction
over computational prediction suggest that about 20% of
the instructions that generate relatively few values are responsible
for the majority of the improvement. With respect
to the value characteristics of instructions, we observe
that the majority of instructions do not generate many
unique values. The number of values generated by instructions
varies among instructions types. This result suggests
that different instruction types need to be studied separately
due to the distinct predictability and value behavior.
We believe that value prediction has significant potential
for performance improvement. However, a lot of innovative
research is needed for value prediction to become an
effective performance approach.
6 Acknowledgements
This work was supported in part by NSF Grants MIP-
9505853 and MIP-9307830 and by the U.S. Army Intelligence
Center and Fort Huachuca under Contract DABT63-
95-C-0127 and ARPA order no. D346. The views and
conclusions contained herein are those of the authors and
should not be interpreted as necessarily representing the
official policies or endorsements, either expressed or im-
plied, of the U. S. Army Intelligence Center and Fort
Huachuca, or the U.S. Government.
The authors would like to thank Stamatis Vassiliadis for
his helpful suggestions and constructive critique while this
work was in progress.
References
"Value locality and data speculation,"
"Exceeding the dataflow limit via value prediction,"
"Effective hardware-based data prefetching for high performance processors,"
"Examination of a memory access classification scheme for pointer intensive and numeric programs,"
"Prefetching using markov pre- dictors,"
"A load instruction unit for pipelined processors,"
"Speculative execution via address prediction and data prefetching,"
"Speculative execution based on value prediction,"
"A study of branch prediction strategies,"
"Alternative implementations of two-level adaptive branch prediction,"
"Target prediction for indirect jumps,"
"Improving the accuracy of static branch prediction using branch correlation,"
"Dynamic path-based branch correlation,"
"Compiler synthesized dynamic branch prediction,"
"Analysis of branch prediction via data compression,"
"Dynamic speculation and synchronization of data depen- dences,"
"The performance potential of data dependence speculation & collaps- ing,"
"An architectural alternative to optimizing compilers,"
"Caching function results: Faster arithmetic by avoiding unnecessary computation,"
"Dynamic instruction reuse,"
"Zero-cycle loads: Microarchitecture support for reducing load latency,"
"Information content of cpu memory referencing behavior,"
"Combining branch predictors,"
"Using hybrid branch predictors to improve branch prediciton in the presence of context switches,"
"Evaluating future microprocessors: The simplescalar tool set,"
--TR
Text compression
Alternative implementations of two-level adaptive branch prediction
Improving the accuracy of static branch prediction using branch correlation
Dynamic path-based branch correlation
Zero-cycle loads
Using hybrid branch predictors to improve branch prediction accuracy in the presence of context switches
Analysis of branch prediction via data compression
Value locality and load value prediction
Examination of a memory access classification scheme for pointer-intensive and numeric programs
Compiler synthesized dynamic branch prediction
Exceeding the dataflow limit via value prediction
The performance potential of data dependence speculation & collapsing
Speculative execution via address prediction and data prefetching
Dynamic speculation and synchronization of data dependences
Dynamic instruction reuse
Prefetching using Markov predictors
Target prediction for indirect jumps
Effective Hardware-Based Data Prefetching for High-Performance Processors
An architectural alternative to optimizing compilers
A study of branch prediction strategies
Information content of CPU memory referencing behavior
--CTR
G. Surendra , S. Banerjee , S. K. Nandy, On the effectiveness of flow aggregation in improving instruction reuse in network processing applications, International Journal of Parallel Programming, v.31 n.6, p.469-487, December
Ehsan Atoofian , Amirali Baniasadi, Speculative trivialization point advancing in high-performance processors, Journal of Systems Architecture: the EUROMICRO Journal, v.53 n.9, p.587-601, September, 2007
Chao-Ying Fu , Matthew D. Jennings , Sergei Y. Larin , Thomas M. Conte, Value speculation scheduling for high performance processors, ACM SIGOPS Operating Systems Review, v.32 n.5, p.262-271, Dec. 1998
Martin Burtscher, An improved index function for (D)FCM predictors, ACM SIGARCH Computer Architecture News, v.30 n.3, June 2002
Po-Jen Chuang , Young-Tzong Hsiao , Yu-Shian Chiu, An Efficient Value Predictor Dynamically Using Loop and Locality Properties, The Journal of Supercomputing, v.30 n.1, p.19-36, October 2004
Avinash Sodani , Gurindar S. Sohi, Understanding the differences between value prediction and instruction reuse, Proceedings of the 31st annual ACM/IEEE international symposium on Microarchitecture, p.205-215, November 1998, Dallas, Texas, United States
Sang-Jeong Lee , Pen-Chung Yew, On Table Bandwidth and Its Update Delay for Value Prediction on Wide-Issue ILP Processors, IEEE Transactions on Computers, v.50 n.8, p.847-852, August 2001
Chao-ying Fu , Jill T. Bodine , Thomas M. Conte, Modeling Value Speculation: An Optimal Edge Selection Problem, IEEE Transactions on Computers, v.52 n.3, p.277-292, March
Michael Bekerman , Stephan Jourdan , Ronny Ronen , Gilad Kirshenboim , Lihu Rappoport , Adi Yoaz , Uri Weiser, Correlated load-address predictors, ACM SIGARCH Computer Architecture News, v.27 n.2, p.54-63, May 1999
Avinash Sodani , Gurindar S. Sohi, An empirical analysis of instruction repetition, ACM SIGOPS Operating Systems Review, v.32 n.5, p.35-45, Dec. 1998
Jinsuo Zhang, The predictability of load address, ACM SIGARCH Computer Architecture News, v.29 n.4, September 2001
Yuan Chou , Brian Fahs , Santosh Abraham, Microarchitecture Optimizations for Exploiting Memory-Level Parallelism, ACM SIGARCH Computer Architecture News, v.32 n.2, p.76, March 2004
Jian Huang , David J. Lilja, Extending Value Reuse to Basic Blocks with Compiler Support, IEEE Transactions on Computers, v.49 n.4, p.331-347, April 2000
Mark Oskin , Frederic T. Chong , Matthew Farrens, HLS: combining statistical and symbolic simulation to guide microprocessor designs, ACM SIGARCH Computer Architecture News, v.28 n.2, p.71-82, May 2000
Dean M. Tullsen , John S. Seng, Storageless value prediction using prior register values, ACM SIGARCH Computer Architecture News, v.27 n.2, p.270-279, May 1999
Tarun Nakra , Rajiv Gupta , Mary Lou Soffa, Value prediction in VLIW machines, ACM SIGARCH Computer Architecture News, v.27 n.2, p.258-269, May 1999
Yiannakis Sazeides , James E. Smith, Modeling program predictability, ACM SIGARCH Computer Architecture News, v.26 n.3, p.73-84, June 1998
Daniel A. Connors , Wen-mei W. Hwu, Compiler-directed dynamic computation reuse: rationale and initial results, Proceedings of the 32nd annual ACM/IEEE international symposium on Microarchitecture, p.158-169, November 16-18, 1999, Haifa, Israel
Daniel A. Connors , Hillery C. Hunter , Ben-Chung Cheng , Wen-Mei W. Hwu, Hardware support for dynamic activation of compiler-directed computation reuse, ACM SIGPLAN Notices, v.35 n.11, p.222-233, Nov. 2000
Characterization of value locality in Java programs, Workload characterization of emerging computer applications, Kluwer Academic Publishers, Norwell, MA, 2001
M. Burrows , U. Erlingson , S.-T. A. Leung , M. T. Vandevoorde , C. A. Waldspurger , K. Walker , W. E. Weihl, Efficient and flexible value sampling, ACM SIGPLAN Notices, v.35 n.11, p.160-167, Nov. 2000
José González , Antonio González, The potential of data value speculation to boost ILP, Proceedings of the 12th international conference on Supercomputing, p.21-28, July 1998, Melbourne, Australia
M. Burrows , U. Erlingson , S-T. A. Leung , M. T. Vandevoorde , C. A. Waldspurger , K. Walker , W. E. Weihl, Efficient and flexible value sampling, ACM SIGOPS Operating Systems Review, v.34 n.5, p.160-167, Dec. 2000
Daniel A. Connors , Hillery C. Hunter , Ben-Chung Cheng , Wen-mei W. Hwu, Hardware support for dynamic activation of compiler-directed computation reuse, ACM SIGOPS Operating Systems Review, v.34 n.5, p.222-233, Dec. 2000
Martin Burtscher, TCgen 2.0: a tool to automatically generate lossless trace compressors, ACM SIGARCH Computer Architecture News, v.34 n.3, p.1-8, June 2006
Juan M. Cebrián , Juan L. Aragón , José M. García , Stefanos Kaxiras, Adaptive VP decay: making value predictors leakage-efficient designs for high performance processors, Proceedings of the 4th international conference on Computing frontiers, May 07-09, 2007, Ischia, Italy
Robert S. Chappell , Francis Tseng , Adi Yoaz , Yale N. Patt, Difficult-path branch prediction using subordinate microthreads, ACM SIGARCH Computer Architecture News, v.30 n.2, May 2002
Chia-Hung Liao , Jong-Jiann Shieh, Exploiting speculative value reuse using value prediction, Australian Computer Science Communications, v.24 n.3, p.101-108, January-February 2002
G. Surendra , Subhasis Banerjee , S. K. Nandy, On the effectiveness of prefetching and reuse in reducing L1 data cache traffic: a case study of Snort, Proceedings of the 3rd workshop on Memory performance issues: in conjunction with the 31st international symposium on computer architecture, p.88-95, June 20-20, 2004, Munich, Germany
Glenn Reinman , Brad Calder , Dean Tullsen , Gary Tyson , Todd Austin, Classifying load and store instructions for memory renaming, Proceedings of the 13th international conference on Supercomputing, p.399-407, June 20-25, 1999, Rhodes, Greece
Andreas Moshovos , Gurindar S. Sohi, Read-after-read memory dependence prediction, Proceedings of the 32nd annual ACM/IEEE international symposium on Microarchitecture, p.177-185, November 16-18, 1999, Haifa, Israel
Compiler controlled value prediction using branch predictor based confidence, Proceedings of the 33rd annual ACM/IEEE international symposium on Microarchitecture, p.327-336, December 2000, Monterey, California, United States
Zhang , Rajiv Gupta, Whole Execution Traces, Proceedings of the 37th annual IEEE/ACM International Symposium on Microarchitecture, p.105-116, December 04-08, 2004, Portland, Oregon
Jeremy Singer , Chris Kirkham, Dynamic analysis of program concepts in Java, Proceedings of the 4th international symposium on Principles and practice of programming in Java, August 30-September 01, 2006, Mannheim, Germany
Trace processors, Proceedings of the 30th annual ACM/IEEE international symposium on Microarchitecture, p.138-148, December 01-03, 1997, Research Triangle Park, North Carolina, United States
Bryan Black , Brian Mueller , Stephanie Postal , Ryan Rakvic , Noppanunt Utamaphethai , John Paul Shen, Load execution latency reduction, Proceedings of the 12th international conference on Supercomputing, p.29-36, July 1998, Melbourne, Australia
Ilya Ganusov , Martin Burtscher, Efficient emulation of hardware prefetchers via event-driven helper threading, Proceedings of the 15th international conference on Parallel architectures and compilation techniques, September 16-20, 2006, Seattle, Washington, USA
Pedro Marcuello , Antonio González , Jordi Tubella, Speculative multithreaded processors, Proceedings of the 12th international conference on Supercomputing, p.77-84, July 1998, Melbourne, Australia
Nana B. Sam , Martin Burtscher, On the energy-efficiency of speculative hardware, Proceedings of the 2nd conference on Computing frontiers, May 04-06, 2005, Ischia, Italy
Byung-Kwon Chung , Jinsuo Zhang , Jih-Kwon Peir , Shih-Chang Lai , Konrad Lai, Direct load: dependence-linked dataflow resolution of load address and cache coordinate, Proceedings of the 34th annual ACM/IEEE international symposium on Microarchitecture, December 01-05, 2001, Austin, Texas
Timothy H. Heil , Zak Smith , J. E. Smith, Improving branch predictors by correlating on data values, Proceedings of the 32nd annual ACM/IEEE international symposium on Microarchitecture, p.28-37, November 16-18, 1999, Haifa, Israel
Youfeng Wu , Dong-Yuan Chen , Jesse Fang, Better exploration of region-level value locality with integrated computation reuse and value prediction, ACM SIGARCH Computer Architecture News, v.29 n.2, p.98-108, May 2001
Freddy Gabbay , Avi Mendelson, The effect of instruction fetch bandwidth on value prediction, ACM SIGARCH Computer Architecture News, v.26 n.3, p.272-281, June 1998
Parthasarathy Ranganathan , Sarita Adve , Norman P. Jouppi, Reconfigurable caches and their application to media processing, ACM SIGARCH Computer Architecture News, v.28 n.2, p.214-224, May 2000
Huiyang Zhou , Thomas M. Conte, Enhancing memory level parallelism via recovery-free value prediction, Proceedings of the 17th annual international conference on Supercomputing, June 23-26, 2003, San Francisco, CA, USA
Peng , Jih-Kwon Peir , Qianrong Ma , Konrad Lai, Address-free memory access based on program syntax correlation of loads and stores, IEEE Transactions on Very Large Scale Integration (VLSI) Systems, v.11 n.3, p.314-324, June
Matthew C. Chidester , Alan D. George , Matthew A. Radlinski, Multiple-path execution for chip multiprocessors, Journal of Systems Architecture: the EUROMICRO Journal, v.49 n.1-2, p.33-52, July
Chi-Hung Chi , Jun-Li Yuan , Chin-Ming Cheung, Cyclic dependence based data reference prediction, Proceedings of the 13th international conference on Supercomputing, p.127-134, June 20-25, 1999, Rhodes, Greece
Nana B. Sam , Martin Burtscher, Improving memory system performance with energy-efficient value speculation, ACM SIGARCH Computer Architecture News, v.33 n.4, November 2005
Madhu Mutyam , Vijaykrishnan Narayanan, Working with process variation aware caches, Proceedings of the conference on Design, automation and test in Europe, April 16-20, 2007, Nice, France
Tanausú Ramírez , Alex Pajuelo , Oliverio J. Santana , Mateo Valero, Kilo-instruction processors, runahead and prefetching, Proceedings of the 3rd conference on Computing frontiers, May 03-05, 2006, Ischia, Italy
Luis Ceze , Karin Strauss , James Tuck , Josep Torrellas , Jose Renau, CAVA: Using checkpoint-assisted value prediction to hide L2 misses, ACM Transactions on Architecture and Code Optimization (TACO), v.3 n.2, p.182-208, June 2006
Glenn Reinman , Brad Calder, Predictive techniques for aggressive load speculation, Proceedings of the 31st annual ACM/IEEE international symposium on Microarchitecture, p.127-137, November 1998, Dallas, Texas, United States
Sangyeun Cho , Pen-Chung Yew , Gyungho Lee, Access region locality for high-bandwidth processor memory system design, Proceedings of the 32nd annual ACM/IEEE international symposium on Microarchitecture, p.136-146, November 16-18, 1999, Haifa, Israel
Ravi Bhargava , Lizy K. John, Latency and energy aware value prediction for high-frequency processors, Proceedings of the 16th international conference on Supercomputing, June 22-26, 2002, New York, New York, USA
Onur Mutlu , Hyesoon Kim , Yale N. Patt, Address-Value Delta (AVD) Prediction: Increasing the Effectiveness of Runahead Execution by Exploiting Regular Memory Allocation Patterns, Proceedings of the 38th annual IEEE/ACM International Symposium on Microarchitecture, p.233-244, November 12-16, 2005, Barcelona, Spain
Huiyang Zhou , Thomas M. Conte, Enhancing Memory-Level Parallelism via Recovery-Free Value Prediction, IEEE Transactions on Computers, v.54 n.7, p.897-912, July 2005
J. González , A. González, Control-Flow Speculation through Value Prediction, IEEE Transactions on Computers, v.50 n.12, p.1362-1376, December 2001
Martin Burtscher , Benjamin G. Zorn, Hybrid Load-Value Predictors, IEEE Transactions on Computers, v.51 n.7, p.759-774, July 2002
Huiyang Zhou , Jill Flanagan , Thomas M. Conte, Detecting global stride locality in value streams, ACM SIGARCH Computer Architecture News, v.31 n.2, May
Ilya Ganusov , Martin Burtscher, Future execution: A prefetching mechanism that uses multiple cores to speed up single threads, ACM Transactions on Architecture and Code Optimization (TACO), v.3 n.4, p.424-449, December 2006
Afrin Naz , Krishna Kavi , JungHwan Oh , Pierfrancesco Foglia, Reconfigurable split data caches: a novel scheme for embedded systems, Proceedings of the 2007 ACM symposium on Applied computing, March 11-15, 2007, Seoul, Korea
Brad Calder , Glenn Reinman , Dean M. Tullsen, Selective value prediction, ACM SIGARCH Computer Architecture News, v.27 n.2, p.64-74, May 1999
Pedro Marcuello , Antonio González, Clustered speculative multithreaded processors, Proceedings of the 13th international conference on Supercomputing, p.365-372, June 20-25, 1999, Rhodes, Greece
Andreas Moshovos , Gurindar S. Sohi, Reducing Memory Latency via Read-after-Read Memory Dependence Prediction, IEEE Transactions on Computers, v.51 n.3, p.313-326, March 2002
Martin Burtscher , Amer Diwan , Matthias Hauswirth, Static load classification for improving the value predictability of data-cache misses, ACM SIGPLAN Notices, v.37 n.5, May 2002
Jian Huang , David J. Lilja, Balancing Reuse Opportunities and Performance Gains with Subblock Value Reuse, IEEE Transactions on Computers, v.52 n.8, p.1032-1050, August
Smruti R. Sarangi , Wei Liu, Josep Torrellas , Yuanyuan Zhou, ReSlice: Selective Re-Execution of Long-Retired Misspeculated Instructions Using Forward Slicing, Proceedings of the 38th annual IEEE/ACM International Symposium on Microarchitecture, p.257-270, November 12-16, 2005, Barcelona, Spain
Timothy Sherwood , Suleyman Sair , Brad Calder, Predictor-directed stream buffers, Proceedings of the 33rd annual ACM/IEEE international symposium on Microarchitecture, p.42-53, December 2000, Monterey, California, United States
Daehyun Kim , Mainak Chaudhuri , Mark Heinrich, Leveraging cache coherence in active memory systems, Proceedings of the 16th international conference on Supercomputing, June 22-26, 2002, New York, New York, USA
Zhang , Rajiv Gupta, Whole execution traces and their applications, ACM Transactions on Architecture and Code Optimization (TACO), v.2 n.3, p.301-334, September 2005
Sangyeun Cho , Pen-Chung Yew , Gyungho Lee, A High-Bandwidth Memory Pipeline for Wide Issue Processors, IEEE Transactions on Computers, v.50 n.7, p.709-723, July 2001
Sang-Jeong Lee , Pen-Chung Yew, On Augmenting Trace Cache for High-Bandwidth Value Prediction, IEEE Transactions on Computers, v.51 n.9, p.1074-1088, September 2002
Lucian Codrescu , D. Scott Wills , James Meindl, Architecture of the Atlas Chip-Multiprocessor: Dynamically Parallelizing Irregular Applications, IEEE Transactions on Computers, v.50 n.1, p.67-82, January 2001
Martin Burtscher, VPC3: a fast and effective trace-compression algorithm, ACM SIGMETRICS Performance Evaluation Review, v.32 n.1, June 2004
Glenn Reinman , Brad Calder , Todd Austin, Optimizations Enabled by a Decoupled Front-End Architecture, IEEE Transactions on Computers, v.50 n.4, p.338-355, April 2001
Yiannakis Sazeides , James E. Smith, Limits of Data Value Predictability, International Journal of Parallel Programming, v.27 n.4, p.229-256, Aug. 1999
Martin Burtscher , Nana B. Sam, Automatic Generation of High-Performance Trace Compressors, Proceedings of the international symposium on Code generation and optimization, p.229-240, March 20-23, 2005
Suleyman Sair , Timothy Sherwood , Brad Calder, A Decoupled Predictor-Directed Stream Prefetching Architecture, IEEE Transactions on Computers, v.52 n.3, p.260-276, March
S. Subramanya Sastry , Rastislav Bodk , James E. Smith, Rapid profiling via stratified sampling, ACM SIGARCH Computer Architecture News, v.29 n.2, p.278-289, May 2001
Martin Burtscher , Ilya Ganusov , Sandra J. Jackson , Jian Ke , Paruj Ratanaworabhan , Nana B. Sam, The VPC Trace-Compression Algorithms, IEEE Transactions on Computers, v.54 n.11, p.1329-1344, November 2005 | Context Based Prediction;Last Value Prediction;stride prediction;prediction;value prediction |
266825 | Value profiling. | variables as invariant or constant at compile-time allows the compiler to perform optimizations including constant folding, code specialization, and partial evaluation. Some variables, which cannot be labeled as constants, may exhibit semi-invariant behavior. A "semi-invariant" variable is one that cannot be identified as a constant at compile-time, but has a high degree of invariant behavior at run-time. If run-time information was available to identify these variables as semi-invariant, they could then benefit from invariant-based compiler optimizations. In this paper we examine the invariance found from profiling instruction values, and show that many instructions have semi-invariant values even across different inputs. We also investigate the ability to estimate the invariance for all instructions in a program from only profiling load instructions. In addition, we propose a new type of profiling called "Convergent Profiling". Estimating the invariance from loads and convergent profiling are used to reduce the profiling time needed to generate an accurate value profile. The value profile can then be used to automatically guide code generation for dynamic compilation, adaptive execution, code specialization, partial evaluation and other compiler optimizations. | Introduction
Many compiler optimization techniques depend upon
analysis to determine which variables have invariant be-
havior. Variables which have invariant run-time behav-
ior, but cannot be labeled as such at compile-time, do not
fully benefit from these optimizations. This paper examines
using profile feedback information to identify which
variables have semi-invariant behavior. A semi-invariant
variable is one that cannot be identified as a constant at
compile-time, but has a high degree of invariant behavior
at run-time. This occurs when a variable has one to N
(where N is small) possible values which account for most
of the variable's values at run-time. Value profiling is an
approach that can identify these semi-invariant variables.
The goal of value profiling is different from value pre-
diction. Value prediction is used to predict the next result
value (write of a register) for an instruction. This has been
shown to provide predictable results by using previously
cached values to predict the next value of the variable using
a hardware buffer [5, 9, 10]. This approach was shown
to work well for a hardware value predictor, since values
produced by an instruction have a high degree of temporal
locality.
Our research into the semi-invariance of variables is
different from these previous hardware prediction studies. For compiler optimizations, we are more concerned
with the invariance of a variable, the top N values of the
variable, or a popular range of values for the variable over
the life-time of the program, although the temporal relationship
between values can provide useful information.
The value profiling techniques presented in this paper keep
track of the top N values for an instruction and the number
of occurrences for each of those values. This information
can then be used to automatically guide compilation and
optimization.
In the next section, we examine motivation for this paper
and related work. Section 3 describes a method for
value profiling. Section 4 describes the methodology used
to gather the results for this paper. Section 5 examines the
semi-invariant behavior of all instruction types, parame-
ters, and loads, and shows that there is a high degree of
invariance for several types of instructions. In order to reduce
the time to generate a value profile for optimization,
Section 6 investigates the ability to estimate the invariance for all
non-load instructions by value profiling only load instructions
and propagating their invariance. Section 7 examines
a new type of profiler called the Convergent Profiler and
its use for value profiling. The goal of a convergent profiler
is to reduce the amount of time it takes to gather detailed
profile information. For value profiling, we found
that the data being profiled, the invariance of instructions,
often reaches a steady state, and at that point profiling can
be turned off or sampled less often. This reduces the profiling
time, while still creating an accurate value profile. We
conclude by summarizing the paper in Section 8.
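The convergence idea behind this profiler can be sketched as follows. This is our illustrative code, not the paper's implementation; the class name, the use of Inv-1 as the running estimate, and the tolerance value are all assumptions.

```cpp
#include <cmath>

// Hypothetical sketch of the convergence test in a convergent profiler:
// once an instruction's running invariance estimate stops changing by more
// than a small tolerance between sampling intervals, profiling of that
// instruction can be turned off or sampled less often.
struct ConvergenceCheck {
    double last_estimate = -1.0;   // estimate seen at the previous interval
    bool converged = false;

    // Called at the end of each sampling interval with the current
    // fraction of executions producing the most frequent value (Inv-1).
    void update(double invariance_estimate, double tol = 0.01) {
        if (last_estimate >= 0.0 &&
            std::fabs(invariance_estimate - last_estimate) < tol)
            converged = true;       // steady state reached: stop profiling
        last_estimate = invariance_estimate;
    }
};
```

A real profiler would likely require several stable intervals in a row before declaring convergence; one comparison is the minimal version of the idea.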
Motivation and Related Work
This paper was originally motivated by a result we found
when examining the input values for long latency instruc-
tions. A divide on a DEC Alpha 21064 processor can take tens of cycles to execute, and a divide on the Intel Pentium processor can take up to 46 cycles. Therefore, it would be
beneficial to special case divide instructions with optimizable
numerators or denominators. In profiling hydro2d
from the SPEC92 benchmark suite, we found that 64% of
the executed divide instructions had either a 0 for its numerator
or a 1 for its denominator. In conditioning these
divide instructions on the numerator or denominator, with
either 0 or 1 based on profiling information, we were able
to reduce the execution time of hydro2d by 15% running
on a DEC Alpha 21064 processor. In applying the same
optimization to a handful of video games (e.g., Fury3 and
Pitfall) on the Intel Pentium processor, we were able to reduce
the number of cycles executed by an estimated 5% 1
for each of these programs. These results show that value
profiling can be very effective for reducing the execution
time of long latency instructions.
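As a sketch of this optimization (our illustrative code with assumed names; the paper applies the transformation at the compiler level), the profiled divide is guarded by its common operand values:

```cpp
// Sketch of conditioning a divide on its profiled common operand values:
// when profiling shows the numerator is usually 0 or the denominator
// usually 1, the expensive divide instruction is bypassed in those cases.
inline double profiled_div(double num, double den) {
    if (num == 0.0) return 0.0;   // common case found in the value profile
    if (den == 1.0) return num;   // common case found in the value profile
    return num / den;             // infrequent general case: real divide
}
```

The checks cost a compare and branch each, which pays off only when the profile shows the special cases dominate, as in hydro2d.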
The recent publications on Value Prediction in hardware
provided further motivation for our research into
value profiling [5, 9, 10]. The recent paper by Lipasti
et al. [9] showed that on average 49% of the instructions
wrote the same value as they did the last time, and 61% of
the executed instructions produced the same value as one
of the last 4 values produced by that instruction using a
16K value prediction table. These results show that there
is a high degree of temporal locality in the values produced
by instructions, but this does not necessarily equal the in-
struction's degree of invariance, which is needed for certain
compiler optimizations.
2.1 Uses for Value Profiling
Value profiling can benefit several areas of research including
dynamic compilation and adaptive execution, performing
compiler optimizations to specialize a program for certain
values, and providing hints for value prediction hardware
This estimation is a static calculation using a detailed pipeline architecture
model of the Pentium processor. The estimation takes into consideration data dependent
and resource conflict stalls.
2.1.1 Dynamic Compilation, Adaptive Execution and
Code Specialization
Dynamic compilation and adaptive execution are emerging
directions for compiler research which provide improved
execution performance by delaying part of the compilation
process to run-time. These techniques range from filling
in compiler generated specialized templates at run-time to
fully adaptive code generation. For these techniques to be
effective the compiler must determine which sections of
code to concentrate on for the adaptive execution. Existing
techniques for dynamic compilation and adaptive execution
require the user to identify run-time invariants using
user guided annotations [1, 3, 4, 7, 8]. One of the goals
of value profiling is to provide an automated approach for
identifying semi-invariant variables and to use this to guide
dynamic compilation and adaptive execution.
Staging analysis has been proposed by Lee and
Leone [8] and Knoblock and Ruf [7] as an effective means
for determining which computations can be performed
early by the compiler and which optimizations should be
performed late or postponed by the compiler for dynamic
code generation. Their approach requires programmers to
provide hints to the staging analysis to determine what arguments
have semi-invariant behavior. In addition, Autrey
and Wolfe have started to investigate a form of staging
analysis for automatic identification of semi-invariant variables
[2]. Consel and Noel [3] use partial evaluation techniques
to automatically generate templates for run-time
code generation, although their approach still requires the
user to annotate arguments of the top-level procedures,
global variables and a few data structures as run-time con-
stants. Auslander et al. [1] proposed a dynamic compilation
system that uses a unique form of binding time analysis
to generate templates for code sequences that have
been identified as semi-invariant. Their approach currently
uses user defined annotations to indicate which variables
are semi-invariant.
The annotations needed to drive the above techniques
require the identification of semi-invariant variables, and
value profiling can be used to automate this process. To
automate this process, these approaches can use their current
techniques for generating run-time code to identify
code regions that could potentially benefit from run-time
code generation. Value profiling can then be used to determine
which of these code regions have variables with semi-
invariant behavior. Then only those code regions identified
as profitable by value profiling would be candidates for dynamic
compilation and adaptive execution.
The above approaches used for dynamic compilation,
to determine optimizable code regions, can also be applied
to static optimization. These regions can benefit from code
specialization if a variable or instruction has the same value
across multiple inputs. If this is the case, the code could be
duplicated creating a specialized version optimized to treat
the variable as a constant. The execution of the specialized
code would then be conditioned on that value. Value profiling
can be used to determine if these potential variables
or instructions have the same value across multiple inputs,
in order to guide code specialization.
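A minimal sketch of such value-based specialization, assuming value profiling found a `stride` parameter to be 1 on nearly every call (the function and parameter names are ours, not from the paper):

```cpp
// Code specialization conditioned on a semi-invariant value: the duplicated
// loop body is optimized for the profiled common value (stride == 1) and
// guarded by a runtime check, with the general version as fallback.
void scale(int *a, int n, int stride, int k) {
    if (stride == 1) {
        for (int i = 0; i < n; ++i)       // specialized: contiguous access
            a[i] *= k;
    } else {
        for (int i = 0; i < n; ++i)       // general strided version
            a[i * stride] *= k;
    }
}
```

The specialized branch lets the compiler vectorize or unroll the unit-stride loop, which it cannot do safely for the general case.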
2.1.2 Hardware-based Optimizations
In predicting the most recent value(s) seen, an instruction's
future value has been shown to have good predictability
using tag-less hardware buffers [9, 10]. Our results show
that value profiling can be used to classify the invariance
of instructions, so a form of value profiling could potentially
be used to improve hardware value prediction. Instructions
that are shown to be variant can be kept out of
the value prediction buffer, reducing the number of conflicts
and aliasing effects, resulting in a more accurate prediction
using smaller tables. Instructions shown to have a
high invariance with the value profiler could even be given
a "sticky" replacement policy.
The Memory Disambiguation Buffer [6] (MDB) is an
architecture that allows a load and its dependent instructions
to be hoisted out of a loop, by checking if store addresses
inside the loop conflict with the load. If a store
inside the loop is to the same address, the load and its dependent
instructions are re-executed. A similar hardware
mechanism can be used to take advantage of values, by
checking not only the store address, but also its value. In
this architecture, only when the value of the load hoisted
out of the loop changes should the load and its dependent
instructions be re-executed. Value profiling can be used to
identify these semi-invariant load instructions.
3 Value Profiling
In this section we discuss a straightforward approach
to value profiling. This study concentrates on profiling at
the instruction level; finding the invariance of the written
register values for instructions. The value profiling information
at this level can be directly mapped back to the
corresponding variables by the compiler for optimization.
There are two types of information needed for value profiling
to be used for compiler optimizations: (1) how invariant
is an instruction's value over the life-time of the
program, and (2) what were the top N result values for an
instruction.
Determining the invariance of an instruction's resulting
value can be calculated in many different ways. The value
prediction results presented by Lipasti et al. [9, 10] used
a tag-less table to store a cache of the most recently used
values to predict the next result value for an instruction.
Keeping track of the number of correct predictions equates
to the number of times an instruction's destination register
was assigned a value that was the most recent value or one of the most recent M values. We call this the Most Recent Value - M (MRV-M) metric, where M is the history depth of the most recent values kept.

void InstructionProfile::collect_stats(Reg cur_value) {
  total_executed++;
  if (cur_value == last_value) {
    num_times_profiled++;
  } else {
    LFU_insert_into_tnv_table(last_value, num_times_profiled);
    last_value = cur_value;
    num_times_profiled = 1;
  }
}

Figure 1: A simple value profiler keeping track of the N most frequent occurring values, along with the most recent value (MRV-1) metric.
The MRV metric provides an indication of the temporal
reuse of values for an instruction, but it does not equate exactly
to the invariance of an instruction. By Invariance - M
(Inv-M) we mean the percent of time an instruction spends
executing its most frequent M values. For example, an instruction
may write a register with values X and Y in the
following repetitive pattern ...XYXYXYXY.... This pattern
would result in an MRV-1 (which stores only the most
recent value) of 0%, but the instruction has an invariance
Inv-1 of 50% and Inv-2 of 100%. Another example is when
1000 different values are the result of an instruction each
100 times in a row before switching to the next value. In
this case the MRV-1 metric would determine that the variable
used its most recent value 99% of the time, but the
instruction has only a 0.1% invariance for Inv-1. The MRV
differs from invariance because it does not have state associated
with each value indicating the number of times the
value has occurred. Therefore, the replacement policy it
uses, least recently used, cannot tell what value is the most
common. We found the MRV metric is at times a good
prediction of invariance, but at other times it is not because
of the examples described above.
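The difference between the two metrics can be checked directly. The following sketch (our code, not the paper's) computes MRV-1 and Inv-1 over the alternating ...XYXYXY... stream described above:

```cpp
#include <algorithm>
#include <map>
#include <vector>

// MRV-1: fraction of writes that match the immediately preceding value,
// i.e. the accuracy of predicting "same value as last time".
double mrv1(const std::vector<int> &s) {
    int hits = 0;
    for (size_t i = 1; i < s.size(); ++i)
        if (s[i] == s[i - 1]) ++hits;
    return double(hits) / double(s.size() - 1);
}

// Inv-1: fraction of all writes taken by the single most frequent value.
double inv1(const std::vector<int> &s) {
    std::map<int, int> counts;
    for (int v : s) ++counts[v];
    int top = 0;
    for (const auto &p : counts) top = std::max(top, p.second);
    return double(top) / double(s.size());
}
```

On a 200-element alternating stream of two values, mrv1 returns 0.0 while inv1 returns 0.5, matching the text: the value is never a repeat of its predecessor, yet each of the two values is 50% invariant.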
3.1 A Value Profiler
The value profiling information required for compiler optimization
ranges from needing to know only the invariance
of an instruction to also having to know the top N values
or a popular range of values. Figure 1 shows a simple profiler
to keep track of this information in pseudo-code. The
value profiler keeps a Top N Value (TNV) table for the register
being written by an instruction. Therefore, there is a
TNV table for every register being profiled. The TNV table
stores (value, number of occurrences) pairs for each entry
with a least frequently used (LFU) replacement policy.
When inserting a value into the table, if the entry already
exists its occurrence count is incremented by the number
of recent profiled occurrences. If the value is not found,
the least frequently used entry is replaced.
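The TNV table described above can be sketched as follows; the class and method names are our own and this is a minimal model, not the profiler's actual implementation:

```python
class TNVTable:
    """Top N Value table: (value, occurrence count) entries with
    least-frequently-used (LFU) replacement, as described in the text."""

    def __init__(self, n):
        self.n = n
        self.entries = {}  # value -> occurrence count

    def insert(self, value, occurrences=1):
        if value in self.entries:
            # Existing entry: bump its count by the recent occurrences.
            self.entries[value] += occurrences
        elif len(self.entries) < self.n:
            self.entries[value] = occurrences
        else:
            # Table full: evict the least frequently used entry.
            lfu = min(self.entries, key=self.entries.get)
            del self.entries[lfu]
            self.entries[value] = occurrences

    def top(self, m):
        """The M most frequent (value, count) pairs."""
        return sorted(self.entries.items(), key=lambda e: -e[1])[:m]
```

For example, a 2-entry table holding A(5) and B(2) evicts B when C arrives with 3 recent occurrences.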
3.2 Replacement Policy for Top N Value Table
We chose not to use an LRU replacement policy, since replacing
the least recently used value does not take into consideration
number of occurrences for that value. Instead
we use a LFU replacement policy for the TNV table. A
straightforward LFU replacement policy for the TNV table
can lead to situations where an invariant value cannot
make its way into the TNV table. For example, if the TNV
table already contains N entries, each profiled more than
once, then using a least frequently used replacement policy
for a sequence of ...XYXYXYXY... (where X and Y are
not in the table) will make X and Y battle with each other
to get into the TNV table, but neither will succeed. The
TNV table can be made more forgiving by either adding a
"temp" TNV table to store the current values for a specified
time period which is later merged into a final TNV
table, or by just clearing out the bottom entries of the TNV
table every so often. In this paper we use the approach of
clearing out the bottom half of the TNV table after profiling
the instruction for a specified clear-interval. After an
instruction has been profiled more than the clear-interval,
the bottom half of the table is cleared and the clear-interval
counter is reset.
We made the number of times sampled for the clear-
interval dependent upon the frequency of the middle entry
in the TNV table. This middle entry is the LFU entry in
the top half of the table. The clear-interval needs to be
larger than the frequency count of this entry, otherwise a
new value could never work its way into the top half of
the table. In our profiling, we set the clear-interval to be
twice the frequency of the middle entry each time the table
is cleared, with a minimum clear-interval of 2000 times.
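The clearing step above can be sketched as follows, representing the table as a list of (value, count) pairs; the representation is our own, while the constants (twice the middle entry's frequency, minimum 2000) follow the text:

```python
def clear_bottom_half(entries, min_interval=2000):
    """Clear the bottom half of a TNV table and compute the next
    clear-interval, per the policy described in the text.
    `entries` is a list of (value, count) pairs."""
    ranked = sorted(entries, key=lambda e: -e[1])
    keep = ranked[: len(ranked) // 2]            # top half survives
    middle_count = keep[-1][1] if keep else 0    # LFU entry of the top half
    next_interval = max(2 * middle_count, min_interval)
    return keep, next_interval
```

For a 4-entry table with counts 3000, 2500, 10, 5, the bottom two entries are cleared and the next clear-interval becomes 5000 (twice the middle entry's count of 2500).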
4 Evaluation Methodology
To perform our evaluation, we collected information for
the SPEC95 programs. The programs were compiled on a
DEC Alpha AXP-21164 processor using the DEC C and
FORTRAN compilers. We compiled the SPEC benchmark
suite under OSF/1 V4.0 operating system using full compiler
optimization (-O4 -ifo). Table 1 shows the two
data sets we used in gathering results for each program,
and the number of instructions executed in millions.
We used ATOM [11] to instrument the programs and
gather the value profiles. The ATOM instrumentation tool
has an interface that allows the elements of the program
executable, such as instructions, basic blocks, and proce-
dures, to be queried and manipulated. In particular, ATOM
Data Set 1 Data Set 2
Program Name Exe M Name Exe M
compress ref 93 short 9
gcc 1cp-decl 1041 1stmt 337
ijpeg specmun 34716 vigo 39483
li ref (w/o puzzle) 18089 puzzle 28243
perl primes 17262 scrabble 28243
vortex ref 90882 train 3189
applu ref 46189 train 265
apsi ref 29284 train 1461
fpppp ref 122187 train 234
hydro2d ref 42785 train 4447
mgrid ref 69167 train 9271
su2cor ref 33928 train 10744
tomcatv ref 27832 train 4729
turb3d ref 81333 train 8160
wave5 ref 29521 train 1943
Table
1: Data sets used in gathering results for each pro-
gram, and the number of instructions executed in millions
for each data set.
allows an "instrumentation" program to navigate through
the basic blocks of a program executable, and collect information
about registers used, opcodes, branch conditions,
and perform control-flow and data-flow analysis.
5 Invariance of Instructions
This section examines the invariance and predictability of
values for instruction types, procedure parameters, and
loads. When reporting invariance results we ignored instructions
that do not need to be executed for the correct execution
of the program. This included a reasonable number
of loads for a few programs. These loads can be ignored
since they were inserted into the program for code alignment
or prefetching for the DEC Alpha 21164 processor.
For the results we used two sizes for the TNV table
when profiling. For the breakdown of the invariance for
the different instruction types (Table 2), we used a TNV
table of size 50. For all the other results we used a TNV
table of size 10 for each instruction (register).
5.1 Metrics
We now describe some of the metrics we will be using
throughout the paper. When an instruction is said to have
an "Invariance-M" of X%, this is calculated by taking the
number of times the top M values for the instruction occurred
during profiling, as found in the final TNV table after
profiling, and dividing this by the number of times the
instruction was executed (profiled).
In order to examine the invariance for an instruction we
look at Inv-1 and Inv-5. For Inv-1, the frequency count of
Program ILd FLd LdA St IMul FMul FDiv IArth FArth Cmp Shft CMov FOps
compress 44(27) 0(
li
perl 70(24) 54(
vortex
hydro2d 76(
su2cor 37(
turb3d 54(
Avg
Table
2: Breakdown of invariance by instruction types. These categories include integer loads (ILd), floating point loads
(FLd), load address calculations (LdA), stores (St), integer multiplication (IMul), floating point multiplication (FMul),
floating point division (FDiv), all other integer arithmetic (IArth), all other floating point arithmetic (FArth), comparisons
(Cmp), shift (Shft), conditional moves (CMov), and all other floating point operations (FOps). The first number shown is
the percent invariance of the top most value (Inv-1) for a class type, and the number in parenthesis is the dynamic execution
frequency of that type. Results are not shown for instruction types that do not write a register (e.g., branches).
the most frequently occurring value in the final TNV table
is divided by the number of times the instruction was
profiled. For Inv-5, the number of occurrences for the top
5 values in the final TNV table are added together and divided
by the number of times the instruction was profiled.
When examining the difference in invariance between
the two profiles, for either the two data sets or between the
normal and convergent profile, we examine the difference
in invariance and the difference in the top values encountered
for instructions executed in both profiles. Diff-1 and
Diff-5 are used to show the weighted difference in invariance
between two profiles for the top most value in the TNV
table and the top 5 values. The difference in invariance
is calculated on an instruction by instruction basis and is
included in a weighted average based on the first input,
using only instructions that are executed in both profiles. The
metric Same-1 shows the percent of instructions profiled
in the first profile that had the same top value in the second
profile. To calculate Same-1 for an instruction, the top
value in the TNV table for the first profile is compared to
the top value in the second profile. If they are the same,
then the number of times that value occurred in the TNV
table for the first profile is added to a sum counter. This
counter is then divided by the total number of times these
instructions were profiled based on the first input. Two
other metrics, Find-1 and Find-5, are calculated in a similar
manner. They show the percent of time the top 1 element
or the top 5 elements in the first profile for an instruction
appear in the top 5 values for that instruction in the second
profile.
When calculating the results for Same-1, Find-1, and
Find-5 we only look at instructions whose invariance in
the first profile are greater than 30%. The reason for only
looking at instructions with an Inv-1 invariance larger than
30% is to ignore all the instructions with random invari-
ance. For variant instructions there is a high likelihood
that the top values in the two profiles are different, and
we are not interested in these instructions. Therefore, we
arbitrarily chose 30% since it is large enough to avoid variant
instructions when looking at the top 5 values. For these
results two numbers are shown, the first number is the percent
match in values found between the two profiles, and
the second number in parenthesis is the percent of profiled
instructions the match corresponds to because of the 30%
invariance filter. Therefore, the number in parenthesis is
the percent of instructions profiled that had an invariance
greater than 30%.
When comparing the two different data sets, Overlap
represents the percent of instructions, weighted by execu-
tion, that were profiled in the first data set that were also
profiled in the second data set.
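A sketch of how Same-1 might be computed from two final profiles, including the 30% Inv-1 filter described above. The profile representation (instruction id mapped to (sorted TNV entries, times profiled)) is our own assumption:

```python
def same1(profile1, profile2, min_inv=0.30):
    """Same-1: weighted percent of profile-1 occurrences whose top TNV
    value also tops the TNV table in profile 2, counting only
    instructions whose Inv-1 in profile 1 exceeds min_inv.
    Each profile maps instruction id -> (tnv_entries, times_profiled),
    with tnv_entries sorted by descending count."""
    matched = total = 0
    for pc, (tnv1, n1) in profile1.items():
        if pc not in profile2 or not tnv1:
            continue
        top_val, top_cnt = tnv1[0]
        if top_cnt / n1 <= min_inv:        # skip variant instructions
            continue
        total += n1
        tnv2, _ = profile2[pc]
        if tnv2 and tnv2[0][0] == top_val:
            matched += top_cnt             # count profile-1 occurrences
    return matched / total if total else 0.0
```

Find-1 and Find-5 follow the same shape, except the match test checks membership of the profile-1 top value(s) anywhere in profile 2's top 5 entries.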
5.2 Breakdown of Instruction Type Invariance
Table
2 shows the percent invariance for each program broken
down into 14 different and disjoint instruction categories
using data set 1. The first number represents the
average percent invariance of the top value (Inv-1) for a
given instruction type. The number next to it in parenthesis
is the percent of executed instructions that this class
Data Set 1 Data Set 2 Comparing Params in Data Set 1 to Data Set 2
Procedure Calls Params Params Over- Invariance Top Values
Program %Instr 30% 50% 70% 90% Inv1 Inv5 Inv1 Inv5 lap diff1 diff5 same1 find1 find5
compress
gcc 1.23 54 48 34 17 31 43 31 43
li 2.45
perl 1.23 54
hydro2d
mgrid
su2cor
average 0.53 73 67 61 44 54 69 54 70
Table
3: Invariance of parameter values and procedure calls. Instr is the percent of executed instructions that are procedure
calls. The next four columns show the percent of procedure calls that had at least one parameter with an Inv-1 invariance
greater than 30, 50, 70 and 90%. The rest of the metrics are in terms of parameters and are described in detail in §5.1.
type accounts for when executing the program. For the
store instructions, the invariance reported is the invariance
of the value being stored. The results show that for the integer
programs, the integer loads (ILd), the calculation
of the load addresses (LdA), and the integer arithmetic instructions
have a high degree of invariance and are
frequently executed. For the floating point instructions the
invariance found for the types is very different from one
program to the next. Some programs (mgrid, swim, and
tomcatv) show very low invariance, while hydro2d has
very invariant instructions.
5.3 Invariance of Parameters
Specializing procedures based on procedure parameters is
a potentially beneficial form of specialization, especially if
the code is written in a modular fashion for general purpose
use, but is used in a very specialized manner for a given run
of an application.
Table
3 shows the predictability of parameters. Instr
shows the percent of instructions executed which were procedure
calls for data set 1. The next four columns show the
percent of procedure calls that had at least one parameter
with an Inv-1 invariance greater than 30, 50, 70, and 90%.
These first five columns show results in terms of proce-
dures, and the remaining columns show results in terms of
parameter invariance and values. The remaining metrics
are described in detail in §5.1. The results show that the
invariance of parameters is very predictable between the
different input sets. The Table also shows that on average
the top value for 44% of the parameters executed (passed
to procedures) for data set 1 had the same value 84% of the
time when that same parameter was passed in a procedure
for the second data set.
5.4 Invariance of Loads
The graphs in Figure 2 show the invariance for loads in
terms of the percent of dynamically executed loads in each
program. The left graph shows the percent invariance
calculated for the top value (Inv-1) in the final 10 entry
TNV table for each instruction, and the right graph shows
the percent invariance for the top 5 values (Inv-5). The
invariance shown is non-accumulative, and the x-axis is
weighted by frequency of execution. Therefore, if we were
interested in optimizing all instructions that had an Inv-1
invariance greater than 50% for li, this would account for
around 40% of the executed loads. The Figure shows that
some of the programs (compress, vortex, m88ksim,
and perl) have 100% Inv-1 invariance for around 50%
of their executed loads, and m88ksim and perl have a
100% Inv-5 invariance for almost 80% of their loads. It is
interesting to note from these graphs the bi-modal nature
of the load invariance for many of the programs. Most of
the loads are either completely invariant or very variant.
Table
4 shows the value invariance for loads. The invariance
Inv-1 and Inv-5 shown in this Table for data set 1
is the average of the invariance shown in Figure 2. Mrv-1
is the percentage of time the most recent value was the next
value encountered by the load. Diff M/I is the weighted difference
in Mrv-1 and Inv-1 percentages on an instruction
by instruction basis. The rest of the metrics are described
in §5.1. The results show that the MRV-1 metric has a 10%
difference in invariance on average, but the difference is
[Figure 2 graphs: percent invariance (y-axis) versus percent of executed loads (x-axis), for Inv-1 (left) and Inv-5 (right), with one curve per benchmark: compress, gcc, go, ijpeg, li, perl, vortex, applu, apsi, hydro2d, mgrid, su2cor, and turb3d.]
Figure
2: Invariance of loads. The graph on the left shows the percent invariance of the top value (Inv-1) in the TNV table,
and the graph on the right shows the percent invariance
is shown on the y-axis, and the x-axis is the percent of executed load instructions. The graph is formed by sorting all the
instructions by their invariance, and then putting the instructions into 100 buckets filling the buckets up based on each load's
execution frequency. Then the average invariance, weighted by execution frequency, of each bucket is graphed.
Comparing Data Set 1 and Data Set 2
Data Set 1 Data Set 2 % Invariance Top Values
Program Mrv1 Inv1 Inv5 diff M/I Mrv1 Inv1 Inv5 diff M/I Overlap diff1 diff5 same1 find1 find5
compress
go
ijpeg 26 28 47 19 26
li 37
perl
vortex
28
hydro2d
su2cor
turb3d 36 38 48 8 40 42 52 8
average 38
Table
4: Invariance of load values using a TNV table of size 10. Mrv1 is the average percent of time the current value for a
load was the last value for the load. Diff M/I is the difference between Mrv1 and Inv1 calculated instruction by instruction.
The rest of the metrics are described in detail in §5.1.
large for a few of the programs. The difference in invariance
of instructions between data sets is very small. The
results show that 27% of the loads executed in both data
sets (using the 30% invariance filter) have the same top invariant
value 90% of the time. Not only is the invariance
between inputs similar, but a certain percentage (24%) of
their values are the same.
The clearing interval and table size parameters we used
affect the top values found for the TNV table more than
the invariance. When profiling the loads with a 10 entry
TNV table, if clearing the bottom half of the table is turned
off, the average results showed a 1% difference in invariance
and the top value was different 8% of the time in each
TNV table using the 30% filter. In examining different table
sizes (with clearing on), a TNV table of size 4 had on
average a 1% difference in invariance from a TNV table of
size 10, and the top value found was different 2% of the
time. When using a table size of 50 for the load profile, on
average there was a 0% difference in invariance and the top
value was different 4% of the time when compared to the
10 entry TNV table when only examining loads that had an
invariance above 30%.
6 Estimating Invariance
Out of all the instructions, loads are really the "unknown
quantity" when dealing with a program's execution. If the
value and invariance for all loads are known, then it is reasonable
to believe that the invariance and values for many
of the other instructions can be estimated through invariance
and value propagation. This would significantly reduce
the profiling time needed to generate a value profile
for all instructions.
To investigate this, we used the load value profiles from
the previous section, and propagated the load invariance
through the program using data flow and control flow analysis
deriving an invariance for the non-load instructions
that write a register. We achieved reasonable results using
a simple inter-procedural analysis algorithm. Our estimation
algorithm first builds a procedure call graph, and
each procedure contains a basic block control flow graph.
To propagate the invariance, each basic block has an OUT
RegMap associated with it, which contains the invariance
of all the registers after processing the basic block. When
a basic block is processed, the OUT RegMaps of all of its
predecessors in the control flow graph are merged together
and are used as the IN RegMap for that basic block. The
RegMap is then updated processing each instruction in the
basic block to derive the OUT RegMap for the basic block.
To calculate the invariance for the instructions within a
basic block we developed a set of simple heuristics. The
default heuristic used for instructions with two input registers
is to set the def register invariance to the invariance of
first use register times the invariance of second use register.
If one of the two input registers is undefined, the invariance
of def register is left undefined in the RegMap. For
instructions with only one input register (e.g., MOV), the
invariance of the def register is assigned the invariance of
the use. Other heuristics used to propagate the invariance
included the loop depth, induction variables, stack pointer,
and special instructions (e.g., CMOV), but for brevity we
will not go into these.
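The default propagation heuristic can be sketched as follows; the instruction and RegMap representations are our own simplification of the analysis described in the text:

```python
def propagate_block(instructions, in_regmap):
    """Propagate estimated invariance through one basic block.
    Each instruction is (def_reg, [use_regs]); `in_regmap` maps a
    register to its estimated invariance (0.0-1.0) at block entry.
    Default heuristic: a two-input op's def invariance is the product
    of its inputs' invariances; a one-input op (e.g., MOV) copies it."""
    regmap = dict(in_regmap)
    for def_reg, uses in instructions:
        invs = [regmap.get(u) for u in uses]
        if any(i is None for i in invs):
            # An undefined input leaves the def undefined in the RegMap.
            regmap.pop(def_reg, None)
        elif len(invs) == 1:
            regmap[def_reg] = invs[0]
        else:
            regmap[def_reg] = invs[0] * invs[1]
    return regmap
```

Multiplying input invariances is deliberately conservative, which is one reason the estimated invariance in Table 5 tends to sit below the real profile.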
Table
5 shows the invariance using our estimation algorithm
for non-load instructions that write a register. The
second column in the table shows the percent of executed
instructions to which these results apply. The third column
Prof shows the overall invariance (Inv-1) for these instructions
using the profile used to form Table 2. The fourth
column is the overall estimated invariance for these instruc-
tions, and the fifth column is the weighted difference in invariance
Inv-1 between the real profile and the estimation
on an instruction by instruction basis. The next 7 columns
show the percent of executed instructions that have an average
invariance above the thresholds of 10, 30, 50, 60, 70, 80,
and 90%. Each column contains three numbers: the first
number is the percent of instructions executed that had an
invariance above the threshold. The second number
is the percent of these invariant instructions that the estimation
also classified above the invariant threshold. The last
number in the column shows the percent of these instructions
(normalized to the invariant instructions found above
the threshold) the estimation thought were above the invariant
threshold, but were not. Therefore, the last number
in the column is the normalized percent of instructions that
were over estimated. The results show that our estimated
propagation has an 8% difference on average in invariance
from the real profile. In terms of actually classifying variables
above an invariant threshold our estimation finds 83%
of the instructions that have an invariance of 60% or more,
and the estimation over estimates the invariant instructions
above this threshold by 7%.
Our estimated invariance is typically lower than the real
profile. There are several reasons for this. The first is the
default heuristic which multiplies the invariance of the two
uses together to arrive at the invariance for the def. At
times this estimation is correct, although much of the time it
provides a conservative estimation of the invariance for the
written register. Another reason is that at times the two
uses for an instruction were variant but their resulting computation
was invariant. This was particularly true for logical
instructions (e.g, AND, OR, Shift) and some arithmetic
instructions.
7 Convergent Value Profiling
The amount of time a user will wait for a profile to be generated
will vary depending on the gains achievable from
using value profiling. The level of detail required from a
% of % Inv-1 % Instructions Found Above Invariance Threshold
Program Instrs Prof Est Diff-1 10% 30% 50% 60% 70% 80% 90%
compress 50
go
ijpeg 71
li
mgrid
su2cor
turb3d 56
average 53 29 24 8 20 (75, 5) 17 (76,
Table
5: Invariance found for instructions computed by propagating the invariance from the load value profile. Instrs shows
the percent of instructions which are non-load register writing instructions to which the results in this table apply. Prof and
Est are the percent invariance found for the real profile and the estimated profile. Diff-1 is the percent difference between
the profile and estimation. The last 7 columns show the percent of executed instructions that have an average invariance
above the threshold of 10, 30, 50, 60, 70, 80 and 90%, and the percentage of these that the estimation profile found and the
percent that were over estimated.
value profiler determines the impact on the time to profile.
The problem with a straightforward profiler, as shown in
Figure 1, is that it could run hundreds of times slower than the
original application, especially if all of the instructions are
profiled. One solution we propose in this paper is to use a
somewhat intelligent profiler that recognizes when the data (invariance
and top N values) being profiled has converged to a
steady state, and then turns profiling off on an instruction-by-instruction
basis.
In examining the value invariance of instructions, we
noticed that most instructions converge in the first few percent
of their execution to a steady state. Once this steady
state is reached, there is no point to further profiling the
instruction. By keeping track of the percent change in invariance
one can classify instructions as either "converged"
or "changing". The convergent profiler stops profiling the
instructions that are classified as converged based on a convergence
criteria. This convergence criteria is tested after
a given time period (convergence-interval) of profiling the
instruction.
To model this behavior, the profiling code is conditioned
on a boolean to test if profiling is turned off or on
for an instruction. If profiling is turned on, normal profiling
occurs, and after a given convergence interval the convergence
criteria is tested. The profiling condition is then set
to false if the profile has converged for the instruction. If
profiling is turned off, periodically the execution counter
is checked to see if a given retry time period has elapsed.
When profiling is turned off the retry time period is set to
a number total_executed × backoff, where backoff can
either be a constant or a random number. This is used to
periodically turn profiling back on to see if the invariance
is at all changing.
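The on/off profiling control described above, using the first (increasing-invariance) convergence criterion from the next paragraph, might look like this sketch. The state layout, the single-top-value invariance approximation, and the reading of the retry period as total_executed × backoff are our assumptions:

```python
def new_state():
    """Per-instruction profiling state for the sketch below."""
    return {"total": 0, "on": True, "counts": {}, "sampled": 0,
            "last_inv": 0.0, "retry_at": 0}

def profile_value(state, value, interval=2000, backoff=0.1):
    """One profiling step for a single instruction with convergence check.
    Profiling stops when invariance is no longer increasing, and is
    retried after a backoff-scaled number of further executions."""
    state["total"] += 1
    if state["on"]:
        state["counts"][value] = state["counts"].get(value, 0) + 1
        state["sampled"] += 1
        if state["sampled"] % interval == 0:
            # Approximate invariance with the single most frequent value.
            inv = max(state["counts"].values()) / state["sampled"]
            if inv <= state["last_inv"]:      # not increasing: converged
                state["on"] = False
                state["retry_at"] = state["total"] + state["total"] * backoff
            state["last_inv"] = inv
    elif state["total"] >= state["retry_at"]:
        state["on"] = True                    # retry period elapsed
```

A fully invariant instruction converges after two convergence-intervals (the invariance stops increasing at 100%), after which the counter is only checked, not updated, until the retry point.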
In this paper we examine the performance of two
heuristics for the convergence criteria for value profiling.
The first heuristic concentrates on the instructions with an
increasing invariance. For instructions whose invariance
is changing we are more interested in instructions that are
increasing their final invariance than those that are decreasing
their final invariance for compiler optimization pur-
poses. Therefore, we continue to profile the instructions
whose final invariance is increasing, but choose to stop
profiling those instructions whose invariance is decreas-
ing. When the percent invariance for the convergence test
is greater than the percent invariance in the previous inter-
val, then the invariance is increasing so profiling contin-
ues. Otherwise, profiling is stopped. When calculating the
invariance the total frequency of the top half of the TNV
table is examined. For the results, we use a convergence-
interval for testing the criteria of 2000 instruction executions.
The second heuristic examined for the convergence cri-
teria, is to only continue profiling if the change in invariance
for the current convergence interval is greater than an
inv-increase bound or lower than an inv-decrease bound.
If the percent invariance is changing above or below these
bounds, profiling continues. Otherwise profiling stops
because the invariance has converged to be within these
bounds.
Convergent Profile Comparing Full Load Profile to Convergent
Convergence % Invariance Invariance Top Values
Program Prof % Conv % Inc Backoff % Inv-1 % Inv-5 % diff-1 % diff-5 % same-1 % find-1 % find-5
compress
li
Table
6: Convergent profiler, where profiling continues if invariance is increasing; otherwise it is turned off. Prof is percent
of time the executable was profiled. Conv and Inc are the percent of time the convergent criteria decided that the invariance
had converged or was still increasing. Backoff is the percent of time spent profiling after turning profiling back on.
7.1 Performance of the Convergent Profiler
Table
6 shows the performance of the convergent profiler,
which stops profiling the first instance the change in invariance
decreases. The second column, percent of instructions
profiled, shows the percentage of time profiling was
turned on for the program's execution. The third column
(Conv) shows the percent of time profiling converged when
the convergence criteria was tested, and the next column
(Inc) is the percent of time the convergence test decided
that the invariance was increasing. The fifth column (Back-
off) shows the percent of time spent profiling after turning
profiling back on using the retry time period. The rest of
the metrics are described in §5.1 and they compare the results
of profiling the loads for the program's complete execution
to the convergent profile results. The results show
that on average convergent profiling spent 2% of its time
profiling and profiling was turned off for the other 98% of
the time. In most of the programs the time to converge was
1% or less. gcc was the only outlier, taking 24% of its
execution to converge. The reason is that gcc executes more
than 60,000 static load instructions for our inputs and many
of these loads do not execute for long. Therefore, most of
these loads were fully profiled since their execution time fit
within the time interval of sampling for convergence (2000
invocations). These results show that the convergent pro-
filer's invariance differed by only 10% from the full profile,
and we were able to find the top value of the full length profile
in the top 5 values in the convergent profile 98% of the
time.
Table
7 shows the performance of the convergent pro-
filer, when using the upper and lower change in invariance
Convergence Back- Invariance
Program Prof Conv Inc Dec off diff1 diff5
compress
gcc
li
perl 0 43 38 19 19 3 0
average 4 22 28 50 57 3 0
Table
7: Convergent profiler, where profiling continues as
long as the change in invariance is either above the inv-
increase or below the inv-decrease bound. The new column
Dec shows the percent of time the invariance was decreasing
when testing for convergence.
bounds for determining convergence. A new column (Dec)
shows the percent of time the test for convergence decided
to continue profiling because the invariance was decreas-
ing. For these results we use an inv-increase threshold of
2% and an inv-decrease threshold of 4%. If the invariance
is not increasing by more than 2%, or decreasing by more
than 4% then profiling is turned off. The results show that
this heuristic spends more time profiling, 4% on average,
but has a lower difference in invariance (3%) in comparison
to the first heuristic (10%). In terms of values this new
heuristic only increased the matching of the top values by
1%. Therefore, the only advantage of using this second
heuristic is to obtain a more accurate invariance. Table 7
shows that a lot of the time is spent on profiling the decrease
in invariance. The reason is that a variant instruction can
start out looking invariant with just a couple of values at
first. It can then take a while for the overall invariance of
the instruction to reach its final variant behavior. The results
also show that more of the time profiling, 57%, is
spent after profiling is turned back on than using our first
convergence criteria, 24%.
One problem is that after an instruction is profiled for
a long time, it takes a while for its overall invariance to
change. If the invariance for an instruction converges after
profiling for a while and then it changes into a new steady
state, it will take a lot of profiling to bring the overall invariance
around to the new steady state. One possible solution
is to monitor if this is happening, and if so dump the
current profile information and start a new TNV table for
the instruction. This would then converge faster to the new
steady state. Examining this, sampling techniques, and
other approaches to convergent profiling is part of future
research.
8 Summary
In this paper we explored the invariant behavior of values
for loads, parameters, and all register defining instructions.
The invariant behavior was identified by a value profiler,
which could then be used to automatically guide compiler
optimizations and dynamic code generation.
We showed that value profiling is an effective means
for finding invariant and semi-invariant instructions. Our
results show that the invariance found for instructions,
when using value profiling, is very predictable even between
different input sets. In addition we examined two
techniques for reducing the profiling time to generate a
value profile. The first technique used the load value profile
to estimate the invariance for all non-load instructions
with an 8% invariance difference from a real profile. The
second approach we proposed for reducing profiling time,
is the idea of creating a convergent profiler that identifies
when profiling information reaches a steady state and has
converged. The convergent profiler we used for loads, profiled
for only 2% of the program's execution on average,
and recorded an invariance within 10% of the full length
profiler and found the top values 98% of the time. The
idea of convergent profiling proposed in this paper can potentially
be used for decreasing the profiling time needed
for other types of detailed profilers.
We view value profiling as an important part of future
compiler research, especially in the areas of dynamic compilation
and adaptive execution, where identifying invariant
or semi-invariant instructions at compile time is es-
sential. A complementary approach for trying to identify
semi-invariant variables, is to use data-flow and staging
analysis to try and prove that a variable's value will not
change often or will hold only a few values over the life-time
of the program. This type of analysis should be used
in combination with value profiling to identify optimizable
code regions.
Acknowledgments
We would like to thank Jim Larus, Todd Austin, Florin
Baboescu, Barbara Kreaseck, Dean Tullsen, and the
anonymous reviewers for providing useful comments. This
work was funded in part by UC MICRO grant No. 97-018,
DEC external research grant No. US-0040-97, and a generous
equipment and software grant from Digital Equipment
Corporation.
References
Initial results for glacial variable analysis.
A general approach for run-time specialization and its application to C.
Speculative execution based on value prediction.
Dynamic memory disambiguation using the memory conflict buffer.
Data specialization.
Optimizing ML with run-time code generation.
Exceeding the dataflow limit via value prediction.
Value locality and load value prediction.
ATOM: A system for building customized program analysis tools.
--TR
ATOM
Dynamic memory disambiguation using the memory conflict buffer
Optimizing ML with run-time code generation
Fast, effective dynamic compilation
Data specialization
Value locality and load value prediction
C: a language for high-level, efficient, and machine-independent dynamic code generation
A general approach for run-time specialization and its application to C
Exceeding the dataflow limit via value prediction
Initial Results for Glacial Variable Analysis
266826 | Can program profiling support value prediction?. | This paper explores the possibility of using program profiling to enhance the efficiency of value prediction. Value prediction attempts to eliminate true-data dependencies by predicting the outcome values of instructions at run-time and executing true-data dependent instructions based on that prediction. So far, all published papers in this area have examined hardware-only value prediction mechanisms. In order to enhance the efficiency of value prediction, it is proposed to employ program profiling to collect information that describes the tendency of instructions in a program to be value-predictable. The compiler that acts as a mediator can pass this information to the value-prediction hardware mechanisms. Such information can be exploited by the hardware in order to reduce mispredictions, better utilize the prediction table resources, distinguish between different value predictability patterns and still benefit from the advantages of value prediction to increase instruction-level parallelism. We show that our new method outperforms the hardware-only mechanisms in most of the examined benchmarks. | Introduction
Modern microprocessor architectures are increasingly
designed to employ multiple execution units that are
capable of executing several instructions (retrieved from a
sequential instruction stream) in parallel. The efficiency of
such architectures is highly dependent on the
instruction-level parallelism (ILP) that they can extract
from a program. The extractable ILP depends on both the
processor's hardware mechanisms and the program's
characteristics ([6], [7]). A program's characteristics affect
the ILP in the sense that instructions cannot always be
eligible for parallel execution due to several constraints.
These constraints have been classified into three classes:
true-data dependencies, name dependencies (false
dependencies) and control dependencies ([6], [7], [15]).
Neither control dependencies nor name dependencies are
considered an upper bound on the extractable ILP since
they can be handled or even eliminated in several cases by
various hardware and software techniques ([1], [2], [3],
[6]). As opposed to name
dependencies and control dependencies, only true-data
dependencies were considered to be a fundamental limit on
the extractable ILP since they reflect the serial nature of a
program by dictating in which sequence data should be
passed between instructions. This kind of extractable
parallelism is represented by the dataflow graph of the
program ([7]).
Recent works ([9], [10], [4], [5]) have proposed a
novel hardware-based paradigm that allows superscalar
processors to exceed the limits of true-data dependencies.
This paradigm, termed value prediction, attempted to
collapse true-data dependencies by predicting at run-time
the outcome values of instructions and executing the
true-data dependent instructions based on that prediction.
Within this concept, it has been shown that the limits of
true-data dependencies can be exceeded without violating
the sequential program correctness. This claim breaks two
accepted fundamental principles: 1. the ILP of a sequential
program is limited by its dataflow graph representation,
and 2. in order to guarantee the correct execution of the
program, true-data dependent instructions cannot be
executed in parallel. It was also indicated that value
prediction can cause the execution of instructions to
become speculative. Unlike branch prediction that can also
cause instructions to be executed speculatively since they
are control dependent, value prediction may cause
instructions to become speculative since it is not assured
that they were fed with the correct input values.
All recent works in the area of value prediction
considered hardware-only mechanisms. In this paper we
provide new opportunities enabling the compiler to support
value prediction by using program profiling. Profiling
techniques are being widely used in different compilation
areas to enhance the optimization of programs. In general,
the idea of profiling is to study the behavior of the program
based on its previous runs. In each of the past runs, the
program can be executed based on different sets of input
parameters and input files (training inputs). During these
runs, the required information (profile image) can be
collected. Once this information is available it can be used
by the compiler to optimize the program's code more
efficiently. The efficiency of program profiling is mainly
based on the assumption that the characteristics of the
program remain the same under different runs as well.
In this paper we address several new open questions
regarding the potential of profiling and the compiler to
support value prediction. Note that we do not attempt to
replace all the value prediction hardware mechanisms in
the compiler or the profiler. We aim at revising certain
parts of the value prediction mechanisms to exploit
information that is collected by the profiler. In the profile
phase, we suggest collecting information about the
instructions' tendency to be value-predictable (value
predictability) and classify them accordingly (e.g., we can
detect the highly predictable instructions and the
unpredictable ones). Classifying instructions according to
their value predictability patterns may allow us to avoid the
unpredictable instructions from being candidates for value
prediction. In general, this capability introduces several
significant advantages. First, it allows us to better utilize
the prediction table by enabling the allocation of highly
predictable instructions only. In addition, in certain
microprocessors, mispredicted values may cause some
extra misprediction penalty due to their pipeline
organization. Therefore the classification allows the
processor to reduce the number of mispredictions and saves
the extra penalty. Finally, the classification increases the
effective prediction accuracy of the predictor.
Previous works have performed the
classification by employing a special hardware mechanism
that studies the tendency of instructions to be predictable at
run-time ([9], [10], [4], [5]). Such a mechanism is capable
of eliminating a significant part of the mispredictions.
However, since the classification was performed at
run-time, it could not allocate in advance the predictable
instructions in the prediction table. As a result
unpredictable instructions could have uselessly occupied
entries in the prediction table and evacuated the predictable
instructions. In this work we propose an alternative
technique to perform the classification. We show that
profiling can provide the compiler with accurate
information about the tendency of instructions to be
value-predictable. The role of the compiler in this case is to
act as a mediator and to pass the profiling information to
the value prediction hardware mechanisms through special
opcode directives. We show that such a classification
methodology outperforms the hardware-based
classification in most of the examined benchmarks. In
particular, the performance improvement is most
observable when the pressure on the prediction table, in
term of potential instructions to be allocated, is high.
Moreover, we indicate that the new classification method
introduces better utilization of the prediction table
resources and avoidance of value mispredictions.
The rest of this paper is organized as follows: Section 2
summarizes previous works and results in the area of
value prediction. Section 3 presents the motivation and the
methodology of this work. Section 4 explores the potential
of program profiling through various quantitative
measurements. Section 5 examines the performance gain of
the new technique. Section 6 concludes this paper.
2. Previous works and results
This section summarizes some of the experimental
results and the hardware mechanisms of the previous
familiar works in the area of value prediction ([9], [10],
[4], [5]). These results and their significance have been
broadly studied by these works, however we have chosen
to summarize them since they provide substantial
motivation to our current work.
Subsections 2.1 and 2.2 are dedicated to value
prediction mechanisms: value predictors and classification
mechanisms. Subsections 2.3, 2.4 and 2.5 describe the
statistical characteristics of the phenomena related to value
prediction. The relevance of these characteristics to this
work is presented in Section 3.
2.1. Value predictors
Previous works have introduced two different
hardware-based value predictors: the last-value predictor
and the stride predictor. For simplicity, it was assumed that
the predictors only predict destination values of register
operands, even though these schemes could be generalized
and applied to memory storage operands, special registers,
the program counter and condition codes.
Last-value predictor: ([9], [10]) predicts the
destination value of an individual instruction based on the
last previously seen value it has generated (or computed).
The predictor is organized as a table (e.g., cache table - see
figure 2.1), and every entry is uniquely associated with an
individual instruction. Each entry contains two fields: tag
and last-value. The tag field holds the address of the
instruction or part of it (high-order bits in case of an
associative cache table), and the last-value field holds the
previously seen destination value of the corresponding
instruction. In order to obtain the predicted destination
value of a given instruction, the table is searched by the
absolute address of the instruction.
Stride predictor: ([4], [5]) predicts the destination
value of an individual instruction based on its last
previously seen value and a calculated stride. The predicted
value is the sum of the last value and the stride. Each entry
in this predictor holds an additional field, termed stride
field that stores the previously seen stride of an individual
instruction (figure 2.1). The stride field value is always
determined upon the subtraction of two recent consecutive
destination values.
Figure 2.1 - The "last value" and the "stride" predictors
(each entry holds a tag and a last-value field, indexed by
the instruction address; the stride predictor adds a stride
field).
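The two predictor organizations above can be sketched in software as follows (an illustrative Python model, not a hardware description; the table size and the direct-mapped indexing are our assumptions):

```python
class LastValuePredictor:
    """Direct-mapped last-value prediction table (sketch)."""
    def __init__(self, entries=1024):
        self.entries = entries
        self.table = {}  # index -> (tag, last_value)

    def predict(self, pc):
        entry = self.table.get(pc % self.entries)
        if entry and entry[0] == pc:   # tag hit
            return entry[1]
        return None                    # miss: no prediction made

    def update(self, pc, value):
        self.table[pc % self.entries] = (pc, value)


class StridePredictor:
    """Adds a stride field: predicted value = last value + stride."""
    def __init__(self, entries=1024):
        self.entries = entries
        self.table = {}  # index -> (tag, last_value, stride)

    def predict(self, pc):
        entry = self.table.get(pc % self.entries)
        if entry and entry[0] == pc:
            return entry[1] + entry[2]
        return None

    def update(self, pc, value):
        idx = pc % self.entries
        entry = self.table.get(idx)
        # the stride is the difference of the two most recent outcomes
        stride = value - entry[1] if entry and entry[0] == pc else 0
        self.table[idx] = (pc, value, stride)
```

For example, after an instruction produces 10 and then 14, the stride predictor records a stride of 4 and predicts 18 on the next lookup.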
2.2. Classification of value predictability
Classification of value predictability aims at
distinguishing between instructions which are likely to be
correctly predicted and those which tend to be incorrectly
predicted by the predictor. A possible method of
classifying instructions is using a set of saturated counters
([9], [10]). An individual saturated counter is assigned to
each entry in the prediction table. At each occurrence of a
successful or unsuccessful prediction the corresponding
counter is incremented or decremented respectively.
According to the present state of the saturated counter, the
processor can decide whether to take the suggested
prediction or to avoid it. In Section 5 we compare the
effectiveness of this hardware-based classification
mechanism versus the proposed mechanism.
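A software sketch of such a saturating-counter classifier (hypothetical Python; the 2-bit counter width and the confidence threshold are illustrative assumptions, not parameters taken from the cited works):

```python
class SaturatingClassifier:
    """One saturating confidence counter per prediction-table entry."""
    def __init__(self, bits=2, threshold=2):
        self.max = (1 << bits) - 1   # a 2-bit counter saturates at 3
        self.threshold = threshold   # predict only at/above this state
        self.counters = {}

    def should_predict(self, pc):
        return self.counters.get(pc, 0) >= self.threshold

    def update(self, pc, correct):
        c = self.counters.get(pc, 0)
        # increment on a correct prediction, decrement on a misprediction
        self.counters[pc] = min(c + 1, self.max) if correct else max(c - 1, 0)
```

An instruction must thus build up a short history of correct predictions before the processor takes its predicted value, and a burst of mispredictions quickly disables it again.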
2.3. Value prediction accuracy
The benefit of using value prediction is significantly
dependent on the accuracy that the value predictor can
accomplish. The previous works in this field ([4], [5], [9]
and [10]) provided substantial evidence to support the
observation that outcome values in programs tend to be
predictable (value predictability). The prediction accuracy
measurements of the predictors that were described in
Subsection 2.1 on Spec-95 benchmarks are summarized in
table 2.1. Note that in the floating point benchmarks
(Spec-fp95) the prediction accuracy was measured in each
benchmark for two execution phases: initialization (when
the program reads its input data) and computation (when
the actual computation is made). Broad study and analysis
of these measurements can be found in [4] and [5].
Table 2.1 - Value prediction accuracy measurements for the
stride and last-value predictors, reported separately for
integer loads, ALU instructions, FP loads and FP computation
instructions (with the initialization and computation phases
of the floating point benchmarks measured separately).
2.4. Distribution of value prediction accuracy
Our previous studies ([4], [5]) revealed that the
tendency of instruction to be value-predictable does not
spread uniformly among the instructions in a program (we
only refer to those instructions that assign outcome value to
a destination register). Approximately 30% of the
instructions are very likely to be correctly predicted with
prediction accuracy greater than 90%. In addition,
approximately 40% of the instructions are very unlikely to
be correctly predicted (with a prediction accuracy less than
10%). This observation is illustrated by figure 2.2 # for both
integer and floating point benchmarks. The importance of
this observation and its implication are discussed in
Subsection 3.1.
2.5. Distribution of non-zero strides
In our previous works ([4], [5]) we examined how
efficiently the stride predictor takes advantage of the
additional "stride" field (in its prediction table) beyond the
last-value predictor that only maintains a single field (per
entry) of the "last value". We considered the stride fields to
be utilized efficiently only when the predictor
accomplishes a correct value prediction and the stride field
is not equal to zero (non-zero stride). In order to grade this
efficiency we used a measure that we term stride efficiency
ratio (measured in percentages). The stride efficiency ratio
is the ratio of successful non-zero stride-based value
predictions to overall successful predictions.
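This measure can be stated concretely as a small sketch (hypothetical Python; the encoding of a prediction outcome as a (correct, stride) pair is our assumption):

```python
def stride_efficiency_ratio(predictions):
    """predictions: list of (correct, stride) pairs for one static
    instruction.  Returns the percentage of successful predictions
    that were made with a non-zero stride."""
    correct_strides = [s for ok, s in predictions if ok]
    if not correct_strides:
        return 0.0
    nonzero = sum(1 for s in correct_strides if s != 0)
    return 100.0 * nonzero / len(correct_strides)
```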
# 1. The initialization phase of the floating-point benchmarks is denoted
by #1 and the computation phase by #2.
2. gcc1 and gcc2 denote the measurements when the benchmark was
run with different input files (the same for perl1 and perl2).
Figure 2.2 - The spread of instructions according to their
value prediction accuracy.
Our measurements indicated that in the integer benchmarks
the stride efficiency ratio is approximately 16%, and in the
floating point benchmarks it varies from 12% in the
initialization phase to 43% in the computation phase. We
also examined the stride efficiency ratio of each instruction
in the program that was allocated to the prediction table.
We observed that most of these instructions could be
divided into two major subsets: a small subset of
instructions which always exhibits a relatively high stride
efficiency ratio and a large subset of instructions which
always tend to reuse their last value (with a very low stride
efficiency ratio). Figure 2.3 draws histograms of our
experiments and illustrates how instructions in the program
are scattered according to their stride efficiency ratio.
Figure 2.3 - The spread of instructions according to their
stride efficiency ratio.
3. The proposed methodology
Profiling techniques are broadly being employed in
various compilation areas to enhance the optimization of
programs. The principle of this technique is to study the
behavior of the program based on one set of train inputs
and to provide the gathered information to the compiler.
The effectiveness of this technique relies on the assumption
that the behavioral characteristics of the program remain
consistent with other program's runs as well. In the first
subsection we present how the previous knowledge in the
area of value prediction motivated us towards our new
approach. In the second subsection we present our
methodology and its main principles.
3.1. Motivation
The consequences of the previous results described in
Section 2 are very significant, since they establish the basis
and motivation for our current work with respect to the
following aspects:
1. The measurements described in Subsection 2.3
indicated that a considerable portion of the values that are
computed by programs tends to be predictable (either by
stride or last-value predictors). It was shown in the
previous works that exploiting this property allows the
processor to exceed the dataflow graph limits and improve
ILP.
2. Our measurements in Subsection 2.4 indicated that the
tendency of instructions to be value predictable does not
spread uniformly among the instructions in the program.
Most programs exhibit two sets of instructions: highly
value-predictable instructions and highly unpredictable
ones. This observation established the basis for employing
classification mechanisms.
3. Previous experiments ([4], [5]) have also provided
preliminary indication that different input files do not
dramatically affect the prediction accuracy of several
examined benchmarks. If this observation is found to be
common enough, then it may have a tremendous
significance when considering the involvement of program
profiling. It may imply that the profiling information which
is collected in previous runs of the program (running the
application with training input files) can be correlated to
the true situation where the program runs with its real input
files (provided by the user). This property is extensively
examined in this paper.
4. We have also indicated that the set of value-predictable
instructions in the program is partitioned into two subsets:
a small subset of instructions that exhibit stride value
predictability (predictable only by the stride predictor) and
a large subset of instructions which tend to reuse their last
value (predictable by both predictors). Our previous works
([4], [5]) showed that although the first subset is relatively
smaller than the second subset, it appears frequently
enough to significantly affect the extractable ILP. On one
hand, if we only use the last-value predictor then it cannot
exploit the predictability of the first subset of instructions.
On the other hand, if we only use the stride predictor, then
in a significant number of entries in the prediction table,
the extra stride field is useless because it is assigned to
instructions that tend to reuse their most recently produced
value (zero strides). This observation motivates us to
employ a hybrid predictor that combines both the stride
prediction table and the last-value prediction table. For
instance we may consider a relatively small stride
prediction table only for the instructions that exhibit stride
patterns and a larger table for the instructions that tend to
reproduce their last value. The combination of these
schemes may allow us to utilize the extra stride field more
efficiently.
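Such a hybrid organization can be sketched as follows (an illustrative software model; the two-table split and the use of a per-instruction directive to steer lookups are assumptions drawn from the discussion above, not a design given in the paper):

```python
class HybridPredictor:
    """A small stride table for the few instructions that exhibit
    stride patterns, plus a larger last-value table for the many
    instructions that tend to reuse their last value (sketch)."""
    def __init__(self):
        self.stride_tbl = {}  # pc -> (last_value, stride); small table
        self.last_tbl = {}    # pc -> last_value; larger table

    def predict(self, pc, directive):
        if directive == "stride" and pc in self.stride_tbl:
            value, stride = self.stride_tbl[pc]
            return value + stride
        if directive == "last-value":
            return self.last_tbl.get(pc)
        return None  # untagged instructions are not predicted

    def update(self, pc, directive, value):
        if directive == "stride":
            old = self.stride_tbl.get(pc)
            stride = value - old[0] if old else 0
            self.stride_tbl[pc] = (value, stride)
        elif directive == "last-value":
            self.last_tbl[pc] = value
```

In this organization the stride fields are only spent on the small subset of instructions tagged as exhibiting stride patterns.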
3.2. A classification based on program profiling
and compiler support
The methodology that we are introducing in this work
combines both program profiling and compiler support to
perform the classification of instructions according to their
tendency to be value predictable. All familiar previous
works performed the classification by using a hardware
mechanism that studies the tendency of instructions to be
predictable at run-time ([4], [5], [9], [10]). Such a
mechanism was capable of eliminating a significant part of
the mispredictions. However, since the classification was
performed dynamically, it could not allocate in advance the
highly value predictable instructions in the prediction table.
As a result unpredictable instructions could have uselessly
occupied entries in the prediction table and evacuated
useful instructions. The alternative classification technique,
proposed in this paper, has two tasks: 1. identify the highly
predictable instructions and 2. indicate whether an
instruction is likely to repeat its last value or whether it
exhibits stride patterns.
Our methodology consists of three basic phases
(figure 3.1). In the first phase the program is ordinarily
compiled (the compiler can use all the available and known
optimization methods) and the code is generated. In the
second phase the profile image of the program is collected.
The profile image describes the prediction accuracy of
each instruction in the program (we only refer to
instructions which write a computed value to a destination
register). In order to collect this information, the program
can be run on a simulation environment (e.g., the SHADE
simulator - see [12]) where the simulator can emulate the
operation of the value predictor and measure for each
instruction its prediction accuracy. If the simulation
emulates the operation of the stride predictor it can also
measure the stride efficiency ratio of each instruction. Such
profiling information could not only indicate which
instructions tend to be value-predictable or not, but also
which ones exhibit value predictability patterns in form of
"strides" or "last-value". The output of the profile phase
can be a file that is organized as a table. Each entry is
associated with an individual instruction and consists of
three fields: the instruction's address, its prediction
accuracy and its stride efficiency ratio. Note that in the
profile phase the program can be run either single or
multiple times, where in each run the program is driven by
different input parameters and files.
Figure 3.1 - The three phases of the proposed classification
methodology: phase 1 - the compiler generates a binary
executable from the program (C or FORTRAN); phase 2 - a
simulator runs the binary on train input parameters and
files and produces the profile image file; phase 3 - the
compiler, given the profile image file and a user-supplied
threshold value, generates a new binary executable with
opcode directives.
In the final phase the compiler only inserts directives in the
opcode of instructions. It does not perform instruction
scheduling or any form of code movement with respect to
the code that was generated in the first phase. The inserted
directives act as hints about the value predictability of
instructions that are supplied to the hardware. Note that we
consider such use of opcode directives feasible, since
recent processors, such as the PowerPC 601, have made branch
predictions based on opcode directives as well ([11]). Our
compiler employs two kinds of directives: the "stride" and
the "last-value". The "stride" directive indicates that the
instruction tends to exhibit stride patterns, and the
"last-value" directive indicates that the instruction is likely
to repeat its recently generated outcome value. By default,
if none of these directives are inserted in the opcode, the
instruction is not recommended to be value predicted. The
compiler determines which instructions receive the special
directives according to the profile image file and a
threshold value supplied by the user. This value sets the
prediction accuracy threshold above which instructions are
tagged with a directive as value-predictable. For instance,
if the user sets the threshold value to 90%, all the
instructions in the profile image file with a prediction
accuracy below 90% receive no directive (they are marked as
unlikely to be correctly predicted), while all those with a
prediction accuracy of 90% or more are marked as predictable.
When an instruction is marked as value-predictable, the
type of the directive (either "stride" or "last-value") still
needs to be determined. This can be done by examining the
stride efficiency ratio that is provided in the profile image
file. A possible heuristic that the compiler can employ is: If
the stride efficiency ratio is greater than 50% it indicates
that the majority of the correct predictions were non-zero
strides and therefore the instruction should be marked as
"stride"; otherwise it is tagged with the "last-value"
directive. Another way to determine the directive type is to
ask the user to supply the threshold value for the stride
efficiency ratio.
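The third-phase decision just described can be sketched as follows. The function name, the dict layout of the profile image and the 50% stride-ratio default are illustrative; the heuristic itself is the one suggested in the text.

```python
# Assign a "stride" or "last-value" directive to each instruction in the
# profile image (address -> (prediction accuracy %, stride efficiency %)),
# or no directive at all if it falls below the user's accuracy threshold.

def assign_directives(profile_image, accuracy_threshold=90.0,
                      stride_threshold=50.0):
    directives = {}
    for addr, (accuracy, stride_ratio) in profile_image.items():
        if accuracy < accuracy_threshold:
            continue                     # unlikely to be correctly predicted
        if stride_ratio > stride_threshold:
            directives[addr] = "stride"  # most correct predictions were strides
        else:
            directives[addr] = "last-value"
    return directives
```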
Once this process is completed, the previous
hardware-based classification mechanism (the set of
saturated counters) becomes unnecessary. Moreover, we
can use a hybrid value predictor that consists of two
prediction tables: the "last-value" and the "stride"
prediction tables (Subsection 2.2). A candidate instruction
for value prediction can be allocated to one of these tables
according to its opcode directive type. These new
capabilities allow us to exploit both value predictability
patterns (stride and last-value) and utilize the prediction
tables more efficiently. In addition, they allow us to detect
the highly predictable instructions in advance, and thus
reduce the probability that instructions unlikely to be
correctly predicted evict useful instructions from the
prediction table.
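As a sketch of this hybrid organization (table sizes, associativity and replacement are omitted for brevity, and all names are illustrative), candidate instructions are dispatched to one of the two tables by their directive, and untagged instructions are never allocated at all:

```python
# Hybrid predictor sketch: two prediction tables, selected per
# instruction by its opcode directive.  Untagged instructions are
# never allocated, so they cannot evict predictable ones.

class HybridPredictor:
    def __init__(self, directives):
        self.directives = directives   # addr -> "stride" | "last-value"
        self.last_value = {}           # addr -> last value
        self.stride = {}               # addr -> (last value, stride)

    def predict(self, addr):
        kind = self.directives.get(addr)
        if kind == "last-value" and addr in self.last_value:
            return self.last_value[addr]
        if kind == "stride" and addr in self.stride:
            last, step = self.stride[addr]
            return last + step
        return None                    # not a candidate: do not predict

    def update(self, addr, value):
        kind = self.directives.get(addr)
        if kind == "last-value":
            self.last_value[addr] = value
        elif kind == "stride":
            last, _ = self.stride.get(addr, (value, 0))
            self.stride[addr] = (value, value - last)
```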
In order to clarify the principles of our new technique
we are assisted by the following sample C program
segment, which sums the values of two vectors, B and C,
into vector A:

    for (i = 0; i < N; i++)
        A[i] = B[i] + C[i];
In the first phase, the compilation of the program with
the gcc 2.7.2 compiler (using the "-O2" optimization)
yields the following assembly code (for a Sun-Sparc
machine on SunOS 4.1.3):
(1) ld ...        ! Load B[i]
(2) ld ...        ! Load C[i]
...
In the second phase we collect the profile image of the
program. A sample output file of this process is illustrated
by table 3.1. It can be seen that this table includes all the
instructions in the program that assign values to a
destination register (load and add instructions). For
simplicity, we only refer to value prediction where the
destination operand is a register. However our
methodology is not limited by any means to being applied
when the destination operand is a condition code, a
program counter, a memory storage location or a special
register.
Instruction    Prediction    Stride
address        accuracy      efficiency ratio
3              99.99%        99.99%
7              99.99%        99.99%
9              99.99%        99.99%
Table 3.1 - A sample profile image output.
In this example the profile image indicates that the
prediction accuracy of the instructions that compute the
index of the loop was 99.99% and their efficiency ratio was
99.99%. Such an observation is reasonable since the
destination value of these instructions can be correctly
predicted by the stride predictor. The other instructions in
our example achieved relatively low prediction accuracy and
stride efficiency ratios. If the user sets the prediction
accuracy threshold to 90%, then in the third phase the
compiler modifies the opcodes of the add operations at
addresses 3, 7, and 9 and inserts the "stride" directive
into them. All other instructions in the program are
unaffected.
4. Examining the potential of profiling
through quantitative measurements
This section is dedicated to examining the basic
question: can program profiling supply the value prediction
hardware mechanisms with accurate information about the
tendency of instructions to be value-predictable? In order
to answer this question, we need to explore whether
programs exhibit similar patterns when they are being run
with different input parameters. If under different runs of
the programs these patterns are correlated, this confirms
our claim that profiling can supply accurate information.
For our experiments we use different programs,
chosen from the Spec95 benchmarks (table 4.1), with
different input parameters and input files. In order to
collect the profile image we traced the execution of the
programs with the SHADE simulator ([12]) on a Sun-Sparc
processor. In the first phase, all benchmarks were compiled
with the gcc 2.7.2 compiler with all available
optimizations.
Benchmark     Description
go            Game playing.
m88ksim       A simulator for the 88100 processor.
gcc           A C compiler based on GNU C 2.5.3.
compress95    Data compression program using adaptive
              Lempel-Ziv coding.
li            Lisp interpreter.
ijpeg         JPEG encoder.
perl          Anagram search program.
vortex        A single-user object-oriented database
              transaction benchmark.
mgrid         Multi-grid solver in computing a three
              dimensional potential field.
Table 4.1 - Spec95 benchmarks.
For each run of a program we create a profile image
containing statistical information that was collected during
run-time. The profile image of each run can be regarded as
a vector V, where each of its coordinates represents the
value prediction accuracy of an individual instruction (the
dimension of the vector is determined by the number of
different instructions that were traced during the
experiment). As a result of running the same program n
times, each time with different input parameters and input
files, we obtain a set of n vectors {V1, V2, ..., Vn},
where the vector Vj (1 <= j <= n) represents the
profile image of run j. Note that in each run we may collect
statistical information of instructions which may not appear
in other runs. Therefore, we only consider the instructions
that appear in all the different runs of the program.
Instructions which only appear in certain runs are omitted
from the vectors (our measurements indicate that the
number of these instructions is relatively small). By
omitting these instructions we can organize the components
of each vector such that corresponding coordinates would
refer to the prediction accuracy of the same instruction,
i.e., the set of coordinates {V1,l, V2,l, ..., Vn,l} refers
to the prediction accuracy of the same instruction l under
the different runs of the program.
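The alignment step above, which intersects the instruction sets of the runs and fixes a common coordinate order, can be sketched as follows (each run's profile is assumed, for illustration, to be a dict mapping instruction address to prediction accuracy):

```python
# Keep only instructions present in all n runs and order every run's
# accuracies identically, so corresponding coordinates refer to the
# same instruction.

def align_runs(runs):
    common = set(runs[0])
    for run in runs[1:]:
        common &= set(run)        # drop instructions missing from any run
    order = sorted(common)        # fixed coordinate order for every vector
    return [[run[addr] for addr in order] for run in runs]
```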
Our first goal is to evaluate the correlation between
the tendencies of instructions to be value-predictable under
different runs of a program with different input files and
parameters. Therefore, once the set of vectors
{V1, V2, ..., Vn} is collected, we need to define a certain
metric for measuring the similarity (or the correlation)
between them. We choose to use two metrics to measure
the resemblance between the vectors. We term the first
metric the maximum-distance metric. This metric is a
vector, M(V)max, whose coordinates are calculated as
illustrated by equation 4.1:

M(V)max,l = max{ |V1,l - V2,l|, |V1,l - V3,l|, ..., |V1,l - Vn,l|,
                 |V2,l - V3,l|, ..., |V2,l - Vn,l|, ...,
                 |Vn-1,l - Vn,l| }

Equation 4.1 - The M(V)max metric.

Each coordinate of M(V)max is equal to the maximum
distance between the corresponding coordinates of each
pair of vectors from the set {V1, V2, ..., Vn}. The second
metric that we use is less strict. We term this metric the
average-distance metric. This metric is also a vector,
M(V)average, where each of its coordinates is equal to the
arithmetic-average distance between the corresponding
coordinates of each pair of vectors from the set
{V1, V2, ..., Vn} (equation 4.2):

M(V)average,l = (2 / (n(n-1))) * sum over all pairs i < j of |Vi,l - Vj,l|

Equation 4.2 - The M(V)average metric.
Obviously, one can use other metrics in order to measure
the similarity between the vectors, e.g., instead of taking
the arithmetic average we could take the geometric
average. However, we think that these metrics sufficiently
satisfy our needs.
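Under the definitions above, both metrics can be computed in a few lines. The sketch below assumes the vectors are already aligned, so that coordinate l refers to the same instruction in every run; the function name is illustrative.

```python
# For each coordinate l, compute the maximum and the arithmetic-average
# absolute distance |Vi,l - Vj,l| over all pairs of runs i < j.

def distance_metrics(vectors):
    n, dim = len(vectors), len(vectors[0])
    pairs = n * (n - 1) // 2
    m_max, m_avg = [], []
    for l in range(dim):
        dists = [abs(vectors[i][l] - vectors[j][l])
                 for i in range(n) for j in range(i + 1, n)]
        m_max.append(max(dists))
        m_avg.append(sum(dists) / pairs)
    return m_max, m_avg
```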
Once our metrics are calculated out of the profile
image, we can illustrate the distribution of its coordinates
by building a histogram. For instance, we can count the
number of M(V)max coordinates whose values are in each of
the intervals: [0,10], (10,20], (20,30], ..., (90,100]. If we
observe that most of the coordinates are scattered in the
lower intervals, we can conclude that our measurements are
similar and that the correlation between the vectors is very
high.
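Bucketing the coordinates into these 10-point intervals can be sketched as follows (coordinates are assumed to be percentages in [0, 100]; the function name is illustrative):

```python
import math

# Count coordinates per interval [0,10], (10,20], ..., (90,100].

def interval_histogram(coords):
    counts = [0] * 10
    for c in coords:
        bucket = max(0, min(math.ceil(c / 10) - 1, 9))
        counts[bucket] += 1
    return counts
```

A strongly correlated set of runs shows most of the mass in `counts[0]` and `counts[1]`, the lower intervals.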
Figures 4.1 and 4.2 illustrate such histograms for our
two metrics M(V)max and M(V)average respectively.
Figure 4.1 - The spread of M(V)max. (Histogram of the
spread of the coordinates of M(V)max for 126.gcc,
129.compress, 130.li, 132.ijpeg, 134.perl and 147.vortex.)
Figure 4.2 - The spread of M(V)average. (Histogram of the
spread of the coordinates of M(V)average for the same
benchmarks.)
In these histograms we clearly observe that in all the
benchmarks most of the coordinates are spread across the
lower intervals. This observation provides the first
substantial evidence that confirms one of our main claims -
the tendency of instructions in a program to be value
predictable is independent of the program's input
parameters and data. In addition it confirms our claim that
program profiling can supply accurate information about
the tendency of instructions to be value predictable.
As we have previously indicated, the profile image of
the program that is provided to the compiler can be better
tuned so that it can indicate which instructions tend to
repeat their recently generated value and which tend to
exhibit patterns of strides. In order to evaluate the potential
of such classification we need to explore whether the set of
instructions whose outcome values exhibit tendency of
strides is common to the different runs of the program. This
can be done by examining the stride efficiency ratio of each
instruction in the program from the profile image file. In
this case, we obtain from the profile image file a vector S,
where each of its coordinates represents the stride
efficiency ratio of an individual instruction. When we run
the same program n times (each time with different input
parameters and input files) we obtain a set of n vectors
{S1, S2, ..., Sn}, where the vector Sj represents the
profile image of run j. Once these vectors are collected we
can use one of
the previous metrics, either the maximum-distance or the
average-distance, in order to measure the resemblance
between the set of vectors {S1, S2, ..., Sn}. For
simplicity we have chosen this time only the
average-distance metric to demonstrate the resemblance
between the vectors. Once this metric is calculated out of
the profile information, we obtain a vector M(S)average.
Similar to our previous analysis, we draw a histogram to
illustrate the distribution of the coordinates of
M(S)average (figure 4.3).
Figure 4.3 - The spread of M(S)average. (Histogram of the
spread of the coordinates of M(S)average for 099.go,
124.m88ksim, 126.gcc, 129.compress, 130.li, 132.ijpeg,
134.perl, 147.vortex and 107.mgrid.)
Again in this histogram we clearly observe that in all
the benchmarks most of the coordinates are spread across
the lower intervals. This observation provides evidence that
confirms our claim that the set of instructions in the
program that tend to exhibit value-predictability patterns
in the form of strides is independent of the program's input
parameters and data. Therefore, profiling can accurately
detect these instructions and provide this information to the
compiler.
5. The effect of the profiling-based
classification on value-prediction performance
In this section we focus on three main aspects: 1. the
classification accuracy of our mechanism, 2. its potential to
better utilize the prediction table entries and 3. its effect on
the extractable ILP when using value prediction. We also
compare our new technique versus the hardware only
classification mechanism (saturated counters).
5.1. The classification accuracy
The quality of the classification process can be
represented by the classification accuracy, i.e., the fraction
of correct classifications out of overall prediction attempts.
We measured the classification accuracy of our new
mechanism and compared it to the hardware-based
mechanism. The classification accuracy was measured for
the incorrect and correct predictions separately (using the
"stride" predictor), as illustrated by figures 5.1 and 5.2
respectively. Note that these two cases represent a
fundamental trade-off in the classification operation since
improving the classification accuracy of the incorrect
predictions can reduce the classification accuracy of the
correct predictions and vice versa.
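Both accuracy figures can be computed from a log of prediction attempts. The (classified_predictable, prediction_correct) pair format below is an illustrative stand-in for the simulator's output, not the paper's measurement harness.

```python
# From a list of (classifier said "predict", prediction was correct)
# pairs, report: the percentage of mispredictions the classifier
# filtered out, and the percentage of correct predictions it let through.

def classification_accuracy(attempts):
    wrong = [c for c, ok in attempts if not ok]
    right = [c for c, ok in attempts if ok]
    filtered_bad = 100.0 * sum(1 for c in wrong if not c) / len(wrong)
    kept_good = 100.0 * sum(1 for c in right if c) / len(right)
    return filtered_bad, kept_good
```

Making the classifier stricter raises `filtered_bad` but tends to lower `kept_good`, which is exactly the trade-off described above.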
Our measurements currently isolate the effect of the
prediction table size since in this subsection we wish to
focus only on the pure potential of the proposed technique
to successfully classify either correct or incorrect value
predictions. Hence, we assume that each of the
classification mechanisms has an infinite prediction table
(a stride predictor), and that the hardware-based
classification mechanism also maintains an infinite set of
saturated counters. The effect of the finite prediction table
is presented in the next subsection.

Figure 5.1 - The percentages of the mispredictions which
are classified correctly (per benchmark, comparing the
hardware-based classification (FSM) with profiling
thresholds of 90%, 80%, 70%, 60% and 50%).
Figure 5.2 - The percentages of the correct predictions
which are classified correctly (same benchmarks and
classification mechanisms).
Our observations indicate that in most cases the
profiling-based classification better eliminates
mispredictions in comparison with the saturated counters.
When the threshold value of our classification mechanism
is reduced, the classification accuracy of mispredictions
decreases as well, since the classification becomes less
strict. Only when the threshold value is less than 60% does
the hardware-based classification gain better classification
accuracy for the mispredictions than our proposed
mechanism (on the average).
Decreasing the threshold value of our classification
mechanisms improves the detection of the correct
predictions at the expense of the detection of
mispredictions. Figure 5.2 indicates that in most cases the
hardware-based classification achieves slightly better
classification accuracy of correct predictions in comparison
with the profiling-based classification. Notice that this
observation does not imply at all that the hardware-based
classification outperforms the profiling-based
classification, because the effect of the table size was not
included in these measurements.
5.2. The effect on the prediction table utilization
We have already indicated that when using the
hardware-based classification mechanism, unpredictable
instructions may uselessly occupy entries in the prediction
table and can purge out highly predictable instructions. As
a result, the efficiency of the predictor can be decreased, as
well as the utilization of the table and the prediction
accuracy. Our classification mechanism can overcome this
drawback, since it is capable of detecting the highly
predictable instructions in advance, and hence decreasing
the pollution of the table caused by unpredictable
instructions.
In table 5.1 we show the fraction (in percentages) of
potential candidates which are allowed to be allocated in
the table by our classification mechanism out of those
allocated by the saturated counters. It can be observed that
even with a threshold value of 50%, the new mechanism
can reduce the number of potential candidates by nearly
50%. Moreover, this number can be reduced even more
significantly when the threshold is tightened, e.g., a
threshold value of 90% reduces the number of potential
candidates by more than 75%. This unique capability of
our mechanism allows us to use a smaller prediction table
and utilize it more efficiently.
Profiling threshold                     90%  80%  70%  60%  50%
The fraction of potential candidates
to be allocated relative to those in
the saturated counters                  ...  ...  ...  ...  ...
Table 5.1 - The fraction of potential candidates to
be allocated relative to those in the
hardware-based classification.
In order to evaluate the performance gain of our
classification method in comparison with the
hardware-based classification mechanism, we measured
both the total number of correct predictions and the total
number of mispredictions when the table size is finite. The
predictor, used in our experiments, is the "stride predictor",
which was organized as a 512-entry, 2-way set associative
table. In addition, in the case of the profiling-based
classification, instructions were allowed to be allocated to
the prediction table only when they were tagged with either
the "last-value" or the "stride" directives. Our results,
summarized in figures 5.3 and 5.4, illustrate the increase in
the number of correct predictions and incorrect predictions
respectively gained by the new mechanism (relative to the
saturated counters). It can be observed that the profiling
threshold plays the main role in the tuning of our new
mechanism. By choosing the right threshold, we can tune
our mechanism in such way that it outperforms the
hardware-based classification mechanism in most
benchmarks. In the benchmarks go, gcc, li, perl and vortex,
we can accomplish both a significant increase in the
number of correct predictions and a reduction in the
number of mispredictions. For instance, when using a
threshold value in the range of 80-90% in vortex, our
mechanism accomplishes both more correct predictions
and less incorrect predictions than the hardware-only
mechanism. Similar achievements are also obtained in go
when the range of threshold values is 60-90%, in gcc when
the range is 70-90%, in li when the threshold value is 60%
and in perl when the range is 70-90%. In the other
benchmarks (m88ksim, compress, ijpeg and mgrid) we
cannot find a threshold value that yields both an increase in
the total number of correct predictions and a decrease in
the number of mispredictions. The explanation of this
observation is that these benchmarks employ relatively
small working sets of instructions and therefore benefit
much less from our classification mechanism. Also notice
that the increase in mispredictions observed for our
classification mechanism in m88ksim is not expected to
significantly affect the extractable ILP,
since the prediction accuracy of this benchmark is already
very high.
Figure 5.3 - The increase in the total number of
correct predictions (per benchmark: go, m88ksim, gcc,
compress, li, ijpeg, perl, vortex, mgrid).
Figure 5.4 - The increase in the total number of
incorrect predictions (same benchmarks).
5.3. The effect of the classification on the extractable ILP
In this subsection we examine the ILP that can be
extracted by value prediction under different classification
mechanisms. Our experiments consider an abstract machine
with a finite instruction window of 40 entries, unlimited
number of execution units and a perfect branch prediction
mechanism. In addition, the type of value predictor that we
use and its table organization are the same as in the
previous subsection. In case of value-misprediction, the
penalty in our abstract machine is 1 clock cycle. Notice that
such a machine model can explore the pure potential of the
examined mechanisms without being constrained by
individual machine limitations.
Our experimental results, summarized in table 5.2,
present the increase in ILP gained by using value
prediction under different classification mechanisms
(relative to the case when value prediction is not used). In
most benchmarks we observe that our mechanism can be
tuned, by choosing the right threshold value, such that it
can achieve better results than those gained by the saturated
counters. In addition, we also observe that when decreasing
the threshold value from 90% to 50% the ILP gained by
our new mechanism increases (in most cases). The
explanation of this phenomenon is that in our range of
threshold values, the contribution of increasing the correct
predictions (as a result of decreasing the threshold) is more
significant than the effect of increasing mispredictions.
ILP increase   FSM    Prof.   Prof.   Prof.   Prof.   Prof.
                      90%     80%     70%     60%     50%
go             10%    9%     10%     13%     13%     13%
gcc            15%   16%     17%     21%     21%     21%
compress       11%    7%      7%      8%      8%      8%
li             37%   33%     35%     38%     38%     40%
ijpeg          16%   14%     14%     15%     16%     15%
perl           19%   23%     24%     28%     28%     27%
vortex        159%  175%    178%    180%    179%    179%
mgrid          24%    7%     10%     11%     11%     11%
Notations: FSM - value prediction using saturated counters;
Prof. X% - value prediction using the profiling-based
classification and a threshold value of X%.
Table 5.2 - The increase in ILP under different
classification mechanisms relative to the case
when value prediction is not used.
6. Conclusions
This paper introduced a profiling-based technique to
enhance the efficiency of value prediction mechanisms.
The new approach suggests using program profiling in
order to classify instructions according to their tendency to
be value-predictable. The collected information by the
profiler is supplied to the value prediction mechanisms
through special directives inserted into the opcode of
instructions. We have shown that the profiling information
which is extracted from previous runs of a program with
one set of input parameters is highly correlated with the
future runs under other sets of inputs. This observation is
very important, since it reveals various opportunities to
involve the compiler in the prediction process and thus to
increase the accuracy and the efficiency of the value
predictor.
Our experiments also indicated that the profiling
information can distinguish between different value
predictability patterns (such as "last-value" or "stride"). As
a result, we can use a hybrid value predictor that consists of
two prediction tables: the last-value and the stride
prediction tables. A candidate instruction for value
prediction can be allocated to one of these tables according
to its profiling classification. This capability allows us to
exploit both value predictability patterns (stride and
last-value) and utilize the prediction tables more
efficiently.
Our performance analysis showed that the
profiling-based mechanism could be tuned by choosing the
right threshold value so that it outperformed the
hardware-only mechanism in most benchmarks. In many
benchmarks we could accomplish both a significant
increase in the number of correct predictions and a
reduction in the number of mispredictions.
The innovation in this paper is very important for
future integration of the compiler with value prediction.
We are currently working on other properties of the
program that can be identified by the profiler to enhance
the performance and the effectiveness of value prediction.
We are examining the effect of the profiling information on
the scheduling of instruction within a basic block and the
analysis of the critical path. In addition, we also explore the
effect of different programming styles, such as
object-oriented programming, on value-predictability patterns.
--R
Some Experiments in Local Microcode Compaction for Horizontal Machines.
A Compiler for VLIW Architecture.
The Optimization of Horizontal Microcode Within and Beyond Basic Blocks: An Application of Processor Scheduling with Resources.
Speculative Execution based on Value Prediction.
An Experimental and Analytical Study of Speculative Execution based on Value Prediction.
Computer Architecture a Quantitative Approach.
Superscalar Microprocessor Design.
Software Pipelining: An Effective Scheduling Technique for VLIW Processors.
Value Locality and Load Value Prediction.
Exceeding the Dataflow Limit via Value Prediction.
Branch Prediction Strategies and Branch-Target Buffer Design
A Study of Branch Prediction Techniques.
Limits of Instruction-Level Parallelism
A Study of Scalar Compilation Techniques for Pipelined Supercomputers.
Alternative Implementations of Two-Level Adaptive Branch Prediction
266829 | Procedure placement using temporal ordering information. | Instruction cache performance is very important to instruction fetch efficiency and overall processor performance. The layout of an executable has a substantial effect on the cache miss rate during execution. This means that the performance of an executable can be improved significantly by applying a code-placement algorithm that minimizes instruction cache conflicts. We describe an algorithm for procedure placement, one type of code-placement algorithm, that significantly differs from previous approaches in the type of information used to drive the placement algorithm. In particular, we gather temporal ordering information that summarizes the interleaving of procedures in a program trace. Our algorithm uses this information along with cache configuration and procedure size information to better estimate the conflict cost of a potential procedure ordering. We compare the performance of our algorithm with previously published procedure-placement algorithms and show noticeable improvements in the instruction cache behavior. | Introduction
The linear ordering of procedures in a program's text segment fixes the addresses of each of these
procedures and this in turn determines the cache line(s) that each procedure will occupy in the
instruction cache. In the case of a direct-mapped cache, conflict misses result when the execution
of the program alternates between two or more procedures whose addresses map to overlapping
sets of cache lines. Several compile-time code-placement techniques have been developed that
use heuristics and profile information to reduce the number of conflict misses in the instruction
cache by a reordering of the program code blocks [5,6,7,8,11]. Though these techniques successfully
remove a sizeable number of the conflict misses when compared to the default code layout
produced during the typical compilation process, it is possible to do even better if we gather
improved profile information and consider the specifics of the hardware configuration. To this
end, we propose a method for summarizing the important temporal ordering information related
to code placement, and we show how to use this information in a machine-specific manner that
often further reduces the number of instruction cache conflict misses. In particular, we apply our
new techniques to the problem of procedure placement in direct-mapped caches, where the compiler
achieves an optimized cache line address for each procedure by specifying the ordering of
the procedures and gaps between procedures in an executable.
Code-placement techniques may reorganize an application at one or more levels of granularity.
Typically, a technique focuses on the placement of whole procedures or individual basic blocks.
We use the term code block to refer to the unit of granularity to which a code-placement technique
applies. Though we focus on the placement of variable-sized code blocks defined by procedure
boundaries, our techniques for capturing temporal information and using this information during
placement apply to code blocks of any granularity.
The default code layout produced by most compilers places the procedures of an executable in the
same order in which they were listed in the source files and preserves the order of object files
from the linker command line. Therefore, it is left to chance which code blocks will conflict in the
cache. Whenever there are code blocks that are often executed together and happen to overlap in
the cache, they can cause a significant number of conflict misses in the instruction cache. Several
studies have shown that compile-time optimizations that change the relative placement of code
blocks can cause large changes in the instruction cache miss rate [3,4]. When changes in the relative
placement of code blocks occur, the performance of the program will be affected not only by
the intended effect of the optimization, but also by the resulting change in instruction cache
misses. This makes it difficult both to predict the total effect of optimizations that change the code
size and to use the change in running time of the optimized executable to judge the effectiveness of
the optimizations. In summary, code-placement techniques are important because they improve
the performance of the instruction fetcher and because they enable the effective use of other compile-time
optimizations.
To reduce the instruction cache miss rate of an application, a code placement algorithm requires
two capabilities: it must be able to assign code blocks to the cache lines; and it must have information
on the relative importance of avoiding overlap between different sets of code blocks.
There are only a few ways for the compiler to set the addresses of a code block. The compiler can
manipulate the order in which procedures appear in the executable, and it can leave gaps between
two adjacent procedures to force the alignment of the next procedure at a specific cache line. The
more interesting problem is determining how procedures should overlap in the hardware instruction
cache.
The previous work on procedure placement has almost exclusively been based on summary profile
statistics that simply indicate how often a code block was executed. Often this information is
organized into a weighted procedure call graph (WCG) that records the number of calls that
occurred between pairs of procedures during a profiling run of the program. Figure 1 contains an
example of a WCG. This summary information is used to estimate the penalty resulting from the
placement of these procedure pairs in the same cache locations. The aim of most existing algorithms
is to place procedures such that pairs with high call counts do not conflict in the cache.
Counting the number of calls between procedures and summarizing this information in a WCG
provides a way of recognizing procedures that are temporally related during the execution of a
program. However, a WCG does not give us all the temporal information that we would like to
have. In particular, the absence of an edge between two procedures does not necessarily mean that
there is no penalty to overlapping the procedures. For example, the WCG in Figure 1 is produced
both when the condition cond alternates between true and false (Trace #1 in Figure 1) and when
    Proc M()
        loop 20-times
            loop 4-times
                if (cond)
                    call S;
                else
                    call T;
                end
            call Z;
        end

(a) Example program   (b) Weighted procedure call graph   (c) Possible traces corresponding to the WCG (Trace #1, Trace #2)

Figure 1. Example of a simple program that calls three leaf procedures. The weighted procedure
call graph is obtained when the condition cond is true 50% of the time. Notice that this same WCG
is obtained from both call traces given on the right of the figure.
the condition cond is true 40 times and then false 40 times (Trace #2). Assume for the purposes
of this example that all procedures in Figure 1 require only a single cache line and that we have
only three locations in our direct-mapped instruction cache. If one cache location is reserved for
procedure M, we clearly do not want the same code layout for the last two cache locations in both
these execution traces. Trace #1 experiences fewer cache conflict misses when procedures S and
T are each given a distinct cache line (Z shares a cache line with S or T), while Trace #2 experiences
fewer cache conflict misses when procedures S and T share a cache line (Z is given its own
cache line). The WCG in Figure 1 does not capture the temporal ordering information that is
needed to determine which layout is best. A WCG summarizes only direct call information; no
precise information is provided on the importance of conflicts between siblings (as illustrated in
Figure 1) or on more distant temporal relationships.
To enable better code layout, we want to have a measure of how much the execution of a program
alternates between each pair of procedures (not just the pairs connected by an edge in the WCG).
We refer to this measure as temporal ordering information. With this and other information concerning
procedure sizes and the target cache configuration, we can make a better estimate of the
number of conflict misses experienced by any specific layout.
We begin in Section 2 with a brief description of a well-known procedure-placement algorithm
that sets a framework for understanding our new algorithm. We present the details of our algorithm
in Sections 3 and 4. Section 3 describes our method for extracting and summarizing the
temporal ordering information in a program trace, while Section 4 presents our procedure-place-
ment algorithm that uses the information produced by this method. In Section 5, we explain our
experimental methodology and present some empirical results that demonstrate the benefit of our
algorithm over previous algorithms for direct-mapped caches. Section 6 describes how to modify
our algorithm for set-associative caches. Finally, Section 7 reviews other related work in code
layout and Section 8 concludes.
2 Procedure placement from Pettis and Hansen
Current approaches to procedure placement rely on greedy algorithms. We can summarize the differences
between these algorithms by describing how each algorithm
- selects the order in which procedures are considered for placement; and
- determines where to place each procedure relative to the already-placed procedures.
We begin with a description of the well-known procedure-placement algorithm by Pettis and
Hansen [8]. As we will explain, our new algorithm retains much of the structure and many of the
important heuristics found in the Pettis and Hansen approach. In addition to procedure placement,
Pettis and Hansen also address the issues of basic-block placement and branch alignment. For the
purposes of this paper, we use the acronym PH when referring to our implementation of the procedure
placement portion of their algorithm.
Pettis and Hansen reduce instruction-cache conflicts between procedures by placing the most frequent
caller/callee procedure pairs at adjacent addresses. Their approach is based on a WCG summary
of the profile information. They use this summary information both to select the next
procedure to place and to determine where to place that procedure in relationship to the already-
placed procedures.
For our implementation of PH, we produce an undirected graph with weighted edges, which contains
essentially the same information as a WCG. There is one node in the graph for each procedure
in the program. An edge e p,q connects two nodes p and q if p calls q or q calls p. The weight
W(e p,q) given to e p,q is equal to 2 * (calls p,q + calls q,p), where calls p,q is the total number of
calls made from procedure p to procedure q. We collect this information by scanning an
instruction-address trace of the program and noting each transition between procedures, which counts
both calls and returns. Therefore, we get an edge weight that is twice the number of calls; the
extra factor of two does not change the procedure placement produced by PH.
This graph is used to select both the next procedure to place and determine the relative placement
for this procedure. PH begins by making a copy of this initial graph; we refer to this copy as the
working graph. PH searches this working graph for the edge with the largest weight. Call this
edge e u,v . Once this edge is found, the algorithm merges the two nodes u and v into a single node
u' in the working graph (more details in a moment). The remaining edges from the original nodes
u and v to other nodes become edges of the new node u'. To maintain the invariant of a single edge
between any pair of nodes, PH combines each pair of edges e u,r and e v,r into a single edge e u',r
with weight W(e u,r) + W(e v,r). The algorithm then repeats the process, again searching for the
edge with the largest weight in the working graph, until the graph is reduced to a single node.
In PH, the only way of reducing the chance of a conflict miss between procedures is proximity in
the address space: the closer that we place two procedures in the address space, the less likely it is
that they will conflict. The procedures within a node are organized as a linear list called a chain
[8]. When PH merges two nodes, their chains can be combined into a single chain in four ways.
Let A and B represent the chains, and A' and B' the reverse of each chain. The four possibilities
are AB, AB', A'B and A'B'. To choose the best one of these, PH queries the original graph to
determine the edge e with the largest weight between a procedure p in the first chain and a procedure
q in the second chain. Our implementation of PH chooses the merged chain that minimizes
the distance (in bytes) between p and q.
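As an illustration, the four-way chain merge can be sketched in Python. This is our own sketch, not the authors' code; the procedure names and sizes are hypothetical, and distance is measured start-to-start, one of several reasonable conventions:

```python
def distance(chain, sizes, p, q):
    """Bytes between the starts of procedures p and q in a linear chain."""
    offsets, pos = {}, 0
    for proc in chain:
        offsets[proc] = pos
        pos += sizes[proc]
    return abs(offsets[p] - offsets[q])

def merge_chains(a, b, sizes, p, q):
    """Of the four orientations AB, AB', A'B, A'B', keep the one that
    puts the heaviest cross-edge pair (p in a, q in b) closest together."""
    candidates = [a + b, a + b[::-1], a[::-1] + b, a[::-1] + b[::-1]]
    return min(candidates, key=lambda c: distance(c, sizes, p, q))

# Hypothetical chains and sizes (bytes); heaviest cross edge is (M, Z):
sizes = {"M": 64, "S": 32, "T": 32, "Z": 32}
merged = merge_chains(["S", "M"], ["Z", "T"], sizes, "M", "Z")
```

Here the AB orientation wins because it leaves M at the end of the first chain adjacent to Z at the start of the second.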
3 Summarizing temporal ordering information
Any algorithm that aims to optimize the arrangement of code blocks needs a conflict metric which
quantifies the importance of avoiding conflicts between sets of code blocks. Ideally, the metric
would report the number of cache conflict misses caused by mapping a set of code blocks to overlapping
cache lines. We do not expect to find a metric that gives the exact number of resulting
cache conflict misses, and we do not need one. We simply need the metric to be a linear function
of the number of conflict misses. 1 Section 5.3 shows that the metric used in our algorithm exhibits
strong correlation with the instruction cache miss rate.
1. Clearly, any difference between the training and testing data sets will also affect the metric's ability to predict
cache conflict misses in the testing run.
As discussed in the previous section, PH uses the call-graph edge weight W(p,q) between two procedures
p and q as its conflict metric. This simple metric drives the merging of nodes.
Unfortunately, this metric has several drawbacks, as illustrated in Section 1.
To understand how to build a better conflict metric, it is helpful to review the actions of a cache
when processing an instruction stream. Assume for a moment that we are tracking code blocks
with a size equal to the size of a cache line. For a direct-mapped cache, a code block b maps to
cache line l = (Addr(b) DIV line_size) MOD cache_lines. This code block remains in the cache
until another code block maps to the same cache line. In terms of code layout, it is important
therefore to note which other code blocks are referenced temporally nearby to a reference to b.
Ideally, none of the blocks referenced between consecutive references to b map to the same cache
line as b. In this way, we get reuse of the initial fetch of block b and do not experience a conflict
miss during the second reference to b.
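The mapping can be made concrete with a minimal Python sketch, assuming the 8 KB direct-mapped cache with 32-byte lines that is simulated in Section 5.2 (constants and names are ours):

```python
LINE_SIZE = 32    # bytes per cache line (assumed configuration)
NUM_LINES = 256   # 8 KB direct-mapped cache / 32-byte lines

def cache_lines(addr, size):
    """Cache-line indices occupied by a code block [addr, addr + size):
    each line maps to index (addr DIV line_size) MOD num_lines."""
    first = addr // LINE_SIZE
    last = (addr + size - 1) // LINE_SIZE
    return {line % NUM_LINES for line in range(first, last + 1)}

def conflicts(addr_a, size_a, addr_b, size_b):
    """Two blocks can evict each other iff their line sets overlap."""
    return bool(cache_lines(addr_a, size_a) & cache_lines(addr_b, size_b))
```

For example, blocks at addresses 0 and 8192 conflict (they map to the same lines), while adjacent blocks at 0 and 64 do not.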
Since the reuse of a code block can be prevented by a single other code block in direct-mapped
caches, we construct a data structure that summarizes the frequency of alternating between two
code blocks. It is convenient to build this data structure as a weighted graph, where the nodes represent
individual code blocks. We refer to this graph as a temporal relationship graph (TRG). As
in PH, the conflict metric is simply the edge weight e p,q between two nodes p and q. A TRG is
more general than a call graph because it can contain edges connecting any pair of code blocks for
which there is some interleaving during the program execution. The rest of this section describes
the process by which we build a TRG, and the next section explains how we use the resulting
TRG to place procedures.
To construct a summary of the temporal locality of the code blocks in a trace, we analyze the set
of recently-referenced code blocks at each transition between code blocks. By implementing an
ordered set, Q, of code-block identifiers (e.g. procedure names) ordered as they appeared in the
trace, we always have access to a history of the recently-referenced code blocks. There is a bound
on the maximum size of Q because its entries eventually become irrelevant and can be removed.
There are two ways in which a code block identifier p can become irrelevant. First, we need only
the latest occurrence of p in Q. Any code blocks that are executed after the most recent occurrence
of p can only have an effect on that occurrence, but not on an earlier occurrence of p. Second, p
can become irrelevant if a sufficiently large amount of (unique) code has been executed since p's
last occurrence and evicted p from the cache. Let T be the set of code block identifiers reached
since the last reference to p. Let S(T) be the sum of the sizes of the code blocks referenced in T.
Exactly how big S(T) needs to grow before p becomes irrelevant depends on the cache mapping of
the code. Assuming that the code layout maximizes the reuse of the members of T, they will be
mapped to non-overlapping addresses, and their cache footprint will be equal to S(T). Therefore, p
becomes irrelevant when S(T) is greater than the cache size. In summary, we perform the following
steps when inserting a new element p into our ordered set Q. First, place p at the most recent
end of Q. If there is a previous occurrence of p in Q, remove it. If not, we remove the oldest members
of Q until the removal of the next least-recently-used identifier would cause the total size (in
bytes) of remaining code blocks in Q to be less than the cache size.
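The two relevance rules can be sketched as follows. This is a Python sketch under our own naming, with the 8 KB cache size used later in Section 5.2 assumed as the eviction threshold:

```python
from collections import OrderedDict

CACHE_SIZE = 8192  # bytes; an 8 KB instruction cache is assumed

def update_q(q, block, sizes):
    """Insert code-block id `block` at the most-recent end of the history
    Q (an OrderedDict mapping id -> size, oldest entry first)."""
    q.pop(block, None)        # rule 1: keep only the latest occurrence
    q[block] = sizes[block]
    # rule 2: drop the oldest id while the code executed since it
    # (everything else in Q) already fills the whole cache
    while sum(q.values()) - next(iter(q.values())) >= CACHE_SIZE:
        q.popitem(last=False)
    return q
```

With three 5000-byte blocks, inserting the third evicts the first, since 10,000 bytes of newer code exceed the 8192-byte cache.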
To build a TRG, we process the trace one code block identifier at a time. At each processing step,
Q contains a set of code blocks that have temporal locality. The more often that a set of code
blocks appears in Q, the more important it is that they not occupy the same cache locations. For
each code block identifier p that we remove from the trace, we update the TRG as follows. For
every code block q in Q starting from the most-recent end of Q, we increment the weight on the
edge e p,q . If node p does not exist, we create it; if the edge e p,q does not exist, we create it with a
weight of 1. We continue down Q until we reach the previous occurrence of p or the end of Q. We
stop when we encounter the previous occurrence of p because this indicates a reuse. Any code blocks
temporarily referenced before this previous occurrence of p in Q could not have displaced p from
the instruction cache. Once we have collected the relationship data for p, we insert p into Q as
described above. The process then repeats until we have processed the entire trace. After process-
ing, we are left with a TRG whose edge weights W(e p,q ) record the number of times p and q
occurred within a sufficiently small temporal distance to be present in Q at the same time, independent
of how p and q are related in the program's call graph.
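Putting the pieces together, the whole TRG-construction pass over a trace might look like the following. This is an illustrative Python sketch, not the authors' implementation; the trace, identifiers, and sizes are made up:

```python
from collections import OrderedDict

CACHE_SIZE = 8192  # bytes (assumed 8 KB instruction cache)

def build_trg(trace, sizes):
    """Return TRG edge weights: W[frozenset({p, q})] counts how often
    p and q were present in the history window Q at the same time."""
    q = OrderedDict()   # id -> size, oldest entry first
    w = {}
    for p in trace:
        # scan Q from its most-recent end, stopping at a reuse of p
        for other in reversed(q):
            if other == p:
                break
            edge = frozenset((p, other))
            w[edge] = w.get(edge, 0) + 1
        # insert p, keeping only its latest occurrence
        q.pop(p, None)
        q[p] = sizes[p]
        # evict oldest ids once the code executed since them fills the cache
        while sum(q.values()) - next(iter(q.values())) >= CACHE_SIZE:
            q.popitem(last=False)
    return w

# Made-up trace and sizes (bytes) for illustration:
sizes = {"M": 512, "S": 256, "T": 256, "Z": 256}
w = build_trg(["M", "S", "M", "T", "M", "S"], sizes)
```

On this toy trace, the M-S edge accumulates the largest weight because the execution alternates between M and S most often.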
4 Our placement algorithm
Given the discussion in Sections 2 and 3, it should be clear that we could use the TRG constructed in
Section 3 within the procedure-placement algorithm described by Pettis and Hansen [8]. We have
found however that extra temporal ordering information alone is not sufficient to guarantee lower
instruction cache miss rates. To get consistent improvements, we also make two key changes to
the way we determine where to place each procedure relative to the already-placed procedures.
The first involves the use of procedure size and cache configuration information that allows us to
make a more informed procedure-placement decision. The second involves the gathering of temporal
ordering information at a granularity finer than the procedure unit; we use this more detailed
information to overcome problems created by procedures that are larger than the cache size. For
efficiency reasons, we also consider only popular (i.e. frequently executed) procedures during the
building of a relationship graph, as was proposed by Hashemi et al. [5].
The rest of this section outlines our procedure-placement algorithm. Section 4.1 begins with a
description of the TRGs required for our algorithm and how we iterate through the procedure list
selecting the order in which procedures are processed by the main outer loop. Section 4.2 focuses on
the portion of our algorithm's main loop that places a procedure relative to the procedures already
processed using cache configuration and procedure size information. This placement decision
simply specifies a cache-relative alignment among a set of procedures. The determination of each
procedure's starting address (i.e. its placement in the linear address space) occurs only after all
popular procedures have been processed. Section 4.3 presents the details of this process.
4.1 TRGs and the main outer loop
Our algorithm uses two related TRGs. One selects the next procedure to be placed (TRG select);
and the other aids in the determination of where to place this selected procedure (TRG place). In PH,
these two graphs are initially the same. In our algorithm, the graphs differ in the granularity of the
code blocks processed during TRG build. While a code block in TRG select corresponds to a whole
procedure, a code block in TRG place corresponds to a chunk of a procedure. For our benchmarks,
we have found that "chunking" procedures into 256-byte pieces works well. TRG place therefore
contains nodes for each procedure p in a program. It is straightforward
to modify the algorithm in the previous section to generate both TRGs simultaneously.
Though we record temporal information concerning the parts of procedures, our procedure-place-
ment algorithm places only whole procedures. We use the finer-grain information only to find the
best relative alignment of the whole procedures as explained below.
Though TRG select contains more edges per node than the relationship graph built in PH (due to
the additional temporal ordering information), we process TRG select in exactly the same greedy-
merging manner as the relationship graph discussed in Section 2. Though we tried several other
methods for creating an order to select procedures for placement, we could not find a more robust
heuristic (or one that was as simple and elegant). The only other difference in our "working" relationship
graph is that TRG select contains only popular procedures. Section 4.3 discusses how we
place the remaining unpopular procedures.
4.2 Determining cache-relative alignments
In PH, the data structure for the nodes in the working graph is a linear list (or a chain) of the pro-
cedures. The building of a chain is more restrictive in terms of selecting starting addresses for
placed procedures than it needs to be however. The only constraint that we need to maintain is that
the placed procedures are mapped to addresses that result in a cache layout with a small conflict
cost. We explain exactly how we calculate the cost of a placement in a moment. So, instead of
chains, we use a data structure for nodes in TRG select that comprises a set of tuples. Each tuple
consists of a procedure identifier and an offset, in cache lines, of the beginning of this procedure
from the beginning of the cache. For a node containing only a single procedure, the offset is zero.
When two nodes, each containing a single procedure, are merged together, our algorithm modifies
the offset of the second procedure to ensure that the cost metric of the placement of these two procedures
in the cache is minimized. The algorithm in Figure 2 presents the pseudo-code for the
merging of two nodes containing any number of already-placed procedures.
Figure 2. Pseudo-code for the merging of two nodes from the temporal relationship graph TRG select. Procedure
chunks within a node are identified by unique id's. An offset for a chunk id records the cache-line index corresponding
to the beginning of that chunk. Offsets are always in units of cache lines.

    CACHE: array [#_cache_lines] of {id, ...};

    NODE merge_nodes (NODE n1, NODE n2) {
        // Initialize cache array c1 by marking each line with the
        // procedure-chunk id's from node n1 occupying that line.
        CACHE c1 = empty;
        foreach ((id, offset) pair p in n1)
            mark lines p.offset .. p.offset + lines(p.id) - 1 of c1 with p.id;

        int best_offset = 0, best_cost = INFINITY;
        for (off = 0; off < #_cache_lines; off++) {
            // Estimate the conflict cost of starting n2's layout at
            // offset off: sum the TRG_place edge weights between each
            // chunk of n2 and the chunks of n1 it would overlap.
            int cost = 0;
            foreach ((id, offset) pair p in n2)
                foreach (chunk id q in c1 overlapping p shifted by off)
                    cost += W(e(p.id, q));
            if (cost < best_cost) {   // ties keep the first offset
                best_cost = cost; best_offset = off;
            }
        }
        foreach ((id, offset) pair p in n2)
            p.offset += best_offset;
        return (union of n1 and n2);
    }

Three items are noteworthy concerning the merge_nodes routine in Figure 2. First, when we
merge two nodes, we leave the relative alignment of all the procedures within each node
unchanged. We do not backtrack and undo any previous decisions. Though the ability to rearrange
the entire set of procedures in the two nodes being merged might lead to a better layout, this flexibility
would noticeably increase the computational complexity of the algorithm. We assume that
the selection order for procedure placement has guaranteed that we have already avoided the most
expensive, potential cache conflicts. As our experimental results show, this greedy heuristic
works quite well in practice. It is an open research question if limited amounts of backtracking
could improve upon the layouts found by our current approach.
Second, merge_nodes calculates a cost metric for each potential alignment of the layout in the
first node with respect to the layout in the second node. If we fix the layout of the first node to
begin at cache line 0, we can offset the start of the second node's layout by any number between 0
and the number of lines in the cache. We evaluate each of these relative offsets using the fine-grained
temporal information in TRG place . For a given offset, we compute, for each procedure
piece in the first node, which procedure pieces in the second node overlap with it in the cache. For
each pair of overlapping procedure pieces, we compute the estimated number of cache conflicts
corresponding to this overlap by accessing the weight on the edge (if any) between these two procedure
pieces in TRG place . We then sum all of these estimates to obtain the total estimate for this
potential placement. We calculate the estimate for procedure-piece conflicts only between nodes
(and not the intra-node conflicts between procedure pieces) because we want the incremental cost
of the placement. The cost of the intra-node overlaps are fixed and will not change the ultimate
finding. The calculation of this extra cost would only increase the work done by our algorithm.
Third, if the cost-metric calculation produces several relative offsets with the same cost, our algorithm
selects the first of these offsets. In the simplest case, if we merge two nodes each containing
a single procedure (call them p and q) and the total size of these two procedures is less than the
cache size, the merging of these nodes will result in a node that is equivalent to the chain created
by PH. In other words, merge_nodes selects the first empty cache line after procedure p to begin
procedure q since that is the first zero-cost location for q.
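The offset-evaluation loop at the heart of merge_nodes can also be sketched in Python. This is our own sketch; the tuple-based node representation and edge-weight map are assumptions consistent with the description above:

```python
NUM_LINES = 256  # cache lines in the (assumed) 8 KB direct-mapped cache

def lines_of(offset, nlines):
    """Cache-line indices covered by a chunk of nlines lines at offset."""
    return {(offset + i) % NUM_LINES for i in range(nlines)}

def best_offset(n1, n2, w):
    """First offset (in cache lines) for node n2 against fixed node n1
    with the lowest estimated conflict cost. Nodes are lists of
    (chunk_id, offset, size_in_lines); w maps frozenset({a, b}) to the
    TRG_place edge weight between chunks a and b."""
    best, best_cost = 0, float("inf")
    for off in range(NUM_LINES):
        cost = 0
        for id1, o1, s1 in n1:
            l1 = lines_of(o1, s1)
            for id2, o2, s2 in n2:
                if l1 & lines_of(o2 + off, s2):   # chunks would overlap
                    cost += w.get(frozenset((id1, id2)), 0)
        if cost < best_cost:                       # ties keep first offset
            best, best_cost = off, cost
    return best
```

For two four-line chunks with a nonzero edge weight between them, the first zero-cost offset is the line just past the end of the first chunk, matching the chain-like behavior described above.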
4.3 Producing the final linear list
The merging phase of our algorithm ends when there are no more edges left in TRG select . 2 The
final step in our algorithm produces a linear arrangement of all of the program procedures given
the relative alignment decisions contained in remaining TRG select nodes. To begin, we select a
procedure p with a cache-line offset of 0. 3 This is the first procedure in our linear layout. To find
the next procedure in the linear layout, we search the nodes for a procedure q whose cache-rela-
tive offset results in the smallest positive gap in cache lines between the end of p and the start of q.
To understand the general case, assume that procedure p is the last procedure in the linear layout.
If p ends at the cache-relative offset pEndLine, we choose a procedure q which starts at cache-rel-
ative offset qStartLine as the next procedure in the linear layout if q produces the smallest positive
value for gap among all unconsidered popular procedures, where gap = qStartLine - pEndLine if qStartLine > pEndLine, and gap = qStartLine + numCacheLines - pEndLine otherwise.
Finally, whenever we produce a gap between two popular procedures, we search the unpopular
procedures for one that fits in the gap. Once we determine an address for each popular procedure
in the linear address space, we simply append any remaining un-placed, unpopular procedures to
the end of our linear list.
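The gap computation and the greedy choice of the next procedure might be sketched as follows (a Python sketch with our own names; NUM_LINES corresponds to the 8 KB, 32-byte-line cache simulated later):

```python
NUM_LINES = 256  # assumed cache size in lines

def gap(p_end_line, q_start_line):
    """Cache lines between the end of the last placed procedure p and the
    start of candidate q, wrapping around when q's alignment precedes p's."""
    if q_start_line > p_end_line:
        return q_start_line - p_end_line
    return q_start_line + NUM_LINES - p_end_line

def next_procedure(p_end_line, candidates):
    """Choose the unconsidered popular procedure with the smallest
    positive gap; `candidates` maps procedure name -> start line."""
    return min(candidates, key=lambda q: gap(p_end_line, candidates[q]))
```

Note that a candidate whose cache-relative start falls just before the end of p pays almost a full trip around the cache, which is what makes small positive gaps preferable.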
5 Experimental evaluation
In this section, we compare three different procedure-placement algorithms. In addition to PH and
our algorithm (GBSC), we present results for a recently published procedure-placement algo-
rithm, an algorithm by Hashemi, Kaeli, and Calder [5] which we refer to as HKC. Like our algo-
rithm, HKC also extends PH to use knowledge of the procedure sizes, the cache size, and the
cache organization. HKC uses a weighted call graph but not any additional temporal information.
The key advantage of HKC over PH is that HKC records the set of cache lines occupied by each
2. Unlike PH, our "working" graph, TRG select , is not necessarily reduced to a single node. TRG select contains
only popular procedures, and it is possible to have the only connection between two popular procedures be
through an unpopular procedure.
3. This assumes that the start of the text segment maps to cache-line 0. If not, it is easy to adjust the algorithm.
procedure during placement, and it tries to prevent overlap between a procedure and any of its
immediate neighbors in the call graph.
We begin in Section 5.1 with some aspects of the behavior of code placement techniques that need
to be addressed in order to make a meaningful comparison of different algorithms. In particular,
we introduce an experimental methodology based on randomization techniques. Section 5.2 outlines
our experimental methodology while Section 5.3 presents our results.
5.1 Evaluating the performance of code placement algorithms
We normally expect code optimizations to behave similarly to a continuous function: small changes
in the behavior of the optimization cause small changes in the performance of the resulting
executable. With code placement optimizations, this is often not the case: small changes in the layout
of a program can cause dramatic changes in the cache miss rate.
As an example, we simulated the instruction cache behavior of the SPECint95 perl program for
two slightly different layouts. The first layout is the output of our own code layout algorithm, and
the second layout is identical to the first except that each procedure is padded by an additional
32 bytes (one cache line) of empty space at its end. The instruction cache miss rate changed from
3.8% for the first layout to 5.4% for the second layout; this is a remarkable change for such a trivial
difference between the layouts. In fact, it is possible to introduce a large number of misses by
moving one code block by only a single cache line.
For greedy code-layout algorithms, we have the additional problem that different layouts, in fact
substantially different layouts, often result from small changes in the input profile data. At each
step, PH, HKC, and GBSC greedily choose the highest-weight edge in the working graph. If there
are two edges, say with weight 1,000,000 and 1,000,001, the (barely) larger edge will always be
chosen first, even though such a small difference is unlikely to represent a statistically significant
basis for preferring one edge over the other. Worse, ties resulting from identical edge weights are
decided arbitrarily. Decisions between two equally good alternatives, which must necessarily be
made one way or the other, affect not only the current step of the algorithm, but all future steps.
As a result, we find it difficult to draw conclusions about the relative performance of different
code layout algorithms from a small number of program traces. Ideally, we would like to have a
large enough set of different inputs for each benchmark to get an accurate impression of the distribution
of results. Unfortunately, this is very hard to do in practice since common benchmark
suites are not distributed with more than a handful of input sets for each benchmark application.
We simulate the effect of many slightly different application input sets by first running the application
with a single input, and then applying random perturbations to the resulting profile data.
For the algorithms in our comparison, we perturb all of our weighted graphs by multiplying each
edge weight by a value close to one. Specifically, the initial weight w is replaced by the perturbed
weight w' according to the equation w' = w * exp(s * X), where X is a random variable, normally
distributed with mean 0 and variance 1, and s is a scaling factor which determines the magnitude
of the random perturbations. Using multiplicative rather than additive noise is attractive for
two reasons. First, additive noise can cause weights to become negative, for which there is no
obvious interpretation. Second, the method is inherently self-scaling in the sense that reasonable
values for s are independent of the initial edge weights.
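This perturbation scheme is easy to reproduce. The following is a small Python sketch; the default value of s below is our own illustrative choice, not the value used in the experiments:

```python
import math
import random

def perturb(weights, s=0.05, rng=random):
    """Replace each edge weight w by w * exp(s * X), X ~ N(0, 1).

    Multiplicative noise keeps weights positive and is self-scaling:
    reasonable values of s do not depend on the magnitude of w."""
    return {e: w * math.exp(s * rng.gauss(0.0, 1.0))
            for e, w in weights.items()}

# Twenty independently perturbed copies of a (hypothetical) weighted graph:
graphs = [perturb({("p", "q"): 1000.0, ("q", "r"): 250.0},
                  rng=random.Random(seed)) for seed in range(20)]
```

Each perturbed graph then drives one run of the placement algorithm, yielding the distribution of layouts evaluated in Section 5.3.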
A large enough value for s will cause the layout to be effectively random, as the perturbed graphs
will bear little relationship to the profile data. Low values of s will cause only statistically insignificant
differences in edge weights, and we can then observe the range of results produced by
these small changes. We use in our experiments. Blackwell [2] shows that for several
code placement algorithms, values of s as low as 0.01 elicit most of the range of performance variation
from the system, and that values of s as high as 2.0 do not degrade the average performance
very much.
5.2 Methodology
We have implemented the PH, HKC, and GBSC procedure-placement algorithms such that they
can be integrated into one of two different environments: a simulation environment based on
ATOM [10]; and a compiler environment based on SUIF [9]. The results in Section 5.3 are based
on the ATOM environment, but we have used the SUIF environment to verify that our algorithms
produce runnable, correct code.
Table 1 lists the benchmarks used in our study. Except for ghostscript, they are all from the
SPECint95 benchmark suite. We use only five of the eight SPECint95 benchmarks because the
other three (compress, ijpeg, and xlisp) are uninteresting in that all have small instruction working
sets that do equally well under any reasonable procedure-placement algorithm. We compiled go
and perl using the SUIF compiler (version 1.1.2), while all other benchmarks were compiled
using gcc 2.7.2 with the -O2 optimization flag. We chose the input data sets to keep the traces to a
manageable size. All of the reported miss rates in this and the next section are based on the simulation
of an 8 kilobyte direct-mapped cache with a line size of 32 bytes. We use the training input
to drive the procedure-placement algorithms, and then simulate the instruction-cache performance
of the resulting optimized executable using the testing input.
5.3 Results
The graphs in Figure 3 show our experimental results for PH, HKC, and GBSC. Each graph
shows the results for a single benchmark. For each of the three algorithms, there is a curve showing
the cumulative distribution of results over a set of 20 experiments, all based on the same training
and testing traces. As described in Section 5.1, we use randomization to obtain twenty slightly
different WCGs or TRGs that result in slightly different placements. For each point along a curve,
the X-coordinate is the cache miss rate for one of the placements, and the Y-coordinate gives the
percentage of all placements that had an equal or better miss rate. Consequently, if the curve for
one algorithm is to the left of the curve for another algorithm, then the first algorithm gives better
results. We notice that our algorithm gives clearly better results than the other two for all benchmarks
except for m88ksim and perl. For these two benchmarks, the ranges of results overlap,
though GBSC yields the lowest average miss rate over all placements. In summary, these results
demonstrate the benefits of using temporal ordering information as well as an algorithm that considers
cache-relative alignments in placing code.
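Such a curve can be computed directly from the twenty per-placement miss rates; a sketch with our own function names:

```c
#include <stdlib.h>

static int cmp_double(const void *a, const void *b) {
    double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}

/* Fraction of the n placements whose miss rate is <= rate
   ("equal or better" in the text). */
double cum_fraction(const double *rates, int n, double rate) {
    int count = 0;
    for (int i = 0; i < n; i++)
        if (rates[i] <= rate) count++;
    return (double)count / n;
}

/* Sort rates in place so that plotting point i at
   (rates[i], (i+1)/n) traces a curve like those in Figure 3. */
void make_curve(double *rates, int n) {
    qsort(rates, n, sizeof(double), cmp_double);
}
```

A curve lying to the left of another then simply means its sorted miss rates are pointwise smaller.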
In Section 3, we said that a useful conflict metric should be strongly correlated with the number of
cache misses. Figure 4 examines this issue by showing the relationship between conflict-metric
values and cache miss rates. Each plot in Figure 4 contains 80 points, where each point corresponds
to a different placement of the go benchmark. These placements are based on the GBSC
algorithm; however, we varied the output of this algorithm to produce placements with a range of
different miss rates. We accomplished this by randomly selecting 0-50 procedures in the GBSC
placement and randomly changing their cache-relative offsets. The metric value plotted corresponds
to the resulting placement. Figure 4a shows that our conflict metric, based on the fine-grained
information in TRG_place, exhibits a linear relationship with the actual number of cache
misses; all the points in the graph are close to the diagonal. On the other hand, Figure 4b shows
that a metric based only on a WCG is not always a good predictor of cache misses.
Program      All procs     Popular procs  Training trace (input, length)     Testing trace (input, length)  Default    Avg. history
             size/count    size/count                                                                       miss rate  size
go           590 K / 3221  134 K / 112    11x11 board, level 4, no stones    level 6, 4 stones              -          -
ghostscript  1817 K / 372  104 K / 216    14-page presentation, 37 M         3-page paper, 38 M             2.63       8.9
perl         664 K / 271   - / -          reduced dictionary                 reduced input file             -          -
vortex       1073 K / 923  117 K / 156    persons.250, reduced iterations    reduced iterations             -          -
-            - / -         - / -          limited to 50M BBs, 50 M           limited to 50M BBs             2.92       14.3
Table 1: Details of our benchmark applications. We report sizes in bytes and trace lengths in basic blocks. A
benchmark's ``average size of procedure history'' reports the average number of procedures that were
present in our ordered set Q during the building of the TRG.
Figure 3: Instruction cache miss rates for our benchmarks. Each graph shows the distribution of miss rates
corresponding to the layouts produced by PH, HKC, and our new procedure-placement algorithm (GBSC).
Each data point in the graphs represents the result for a single placement. Cache miss rates vary along the
x-axis, and the y-axis shows the cumulative distribution of miss rates. Panels: (a) gcc, (b) ghostscript,
(c) go, (d) m88ksim, (e) perl, (f) vortex.
6 Extensions for set-associative caches
To this point, we have described a technique for collecting and using temporal information that is
specific to direct-mapped cache implementations. In other words, we have assumed that a single
occurrence of a procedure q between two occurrences of a procedure p is sufficient to displace p.
This assumption is not necessarily true for set-associative caches, especially for those that implement
an LRU policy. To use our approach for set-associative caches, we construct a slightly different
data structure that replaces TRG_place, and we slightly modify the cost-metric calculation in
merge_nodes. This section focuses on 2-way set-associative caches; the implementation of
changes for other associativities follows directly from this explanation.
Instead of a graph representation for TRG_place, it is now more convenient to think of the temporal-relationship structure as a database D that records the number of times that a code-block pair {r,s}
appears between consecutive occurrences of another code block p in a program trace. We can still
use our ordered set approach to build this database. However, when we process the temporal associations
related to the next code block p in the trace, we associate p with all possible selections of
two identifiers from the identifiers currently in Q (up to any previous occurrence of p as before).
Figure 4: Correlation between conflict metric and cache misses. Data points are 80 randomized layouts for the
go benchmark. The X-coordinate of a point is the cache miss rate for that layout, and its Y-coordinate is the
sum of the conflict metrics for the indicated method over the entire placement. Axes: cache miss rate (x),
conflict estimate in millions (y). (a) Conflict metric based on a fine-grained TRG. (b) Conflict metric based
on a WCG.
We do this because two unique references are required to guarantee no reuse. Thus, the database
simply records the frequency of each association between p and the pair {r,s}, accessed as
D(p,{r,s}). If r, s, and p all occupy the same set in a two-way set-associative cache, then we estimate
that D(p,{r,s}) of the program references to p will result in cache conflicts due to the displacement
of p by intervening references to both r and s. We access this information instead of
TRG_place edge weights during the conflict-metric calculation in merge_nodes. Clearly the inner loop
of this calculation must also change slightly so that we can check the cost of the association
between a code block in node n1 against all pairs of code blocks in n2 and vice versa.
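As a sketch of this construction (ours, not the authors' implementation): instead of maintaining the ordered set Q, the code below counts the distinct blocks that appear between consecutive occurrences of each block p and tallies every unordered pair of them, assuming a small fixed universe of block identifiers:

```c
#include <string.h>

#define NBLOCKS 8  /* assumed small universe of code-block ids */

/* D[p][r][s] (with r < s): times the pair {r,s} occurs between two
   consecutive occurrences of block p in the trace. */
static int D[NBLOCKS][NBLOCKS][NBLOCKS];

void build_pair_db(const int *trace, int len) {
    int last[NBLOCKS];
    memset(D, 0, sizeof D);
    for (int i = 0; i < NBLOCKS; i++) last[i] = -1;
    for (int t = 0; t < len; t++) {
        int p = trace[t];
        if (last[p] >= 0) {
            int seen[NBLOCKS] = {0};
            /* distinct blocks strictly between the two occurrences of p */
            for (int k = last[p] + 1; k < t; k++) seen[trace[k]] = 1;
            for (int r = 0; r < NBLOCKS; r++)
                for (int s = r + 1; s < NBLOCKS; s++)
                    if (seen[r] && seen[s]) D[p][r][s]++;
        }
        last[p] = t;
    }
}
```

For the trace p r s r p, the single pair {r,s} between the two occurrences of p is counted once; a lone intervening block contributes no pair, matching the observation that two unique references are required to guarantee displacement in a 2-way set.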
Though we change TRG_place, we do not change TRG_select. TRG_select is only a heuristic for selecting
the order of code blocks to be placed; it is not obviously affected by a cache's associativity. As
we mentioned earlier, other heuristic approaches may work better, but we have not found one.
7 Discussion and related work
Much of the prior work in the area of compile-time code placement is related to early work in
reducing the frequency of page faults in the virtual memory system and more recent work on
reducing the cost of pipeline penalties associated with control-transfer instructions. However, we
limit our discussion here to studies that directly address the issue of code placement aimed at
reducing instruction cache conflict misses. Some of the earliest work in this area was done by
Hwu and Chang [6], McFarling [7], and Pettis and Hansen [8]. Hwu and Chang use a WCG and a
proximity heuristic to address the problem of basic-block placement. Their approach is unique in
that they also perform function inline expansion during code placement to overcome the artificial
barriers imposed by procedure call boundaries.
McFarling [7] uses an interesting program representation (a DAG of procedures, loops, and condi-
tionals) to drive his code-placement algorithm, but the profile information is still summarized in
such a way that some of the temporal interleaving of blocks in the trace is lost. In fact, this paper
explicitly states that, because he is unable to collect temporal interleaving information, his algorithm
assumes and optimizes for a worst-case interleaving of blocks. Like our algorithm, McFarling
does consider the cache size and its modulo property when evaluating potential layouts, but
his cost calculation is obviously different from ours. Finally, his algorithm is unique in its ability
to determine which portions of the text segment should be excluded from the instruction cache.
Torrellas, Xia, and Daigle [11] propose a code-placement technique for kernel-intensive
applications. Their algorithm considers the cache address mapping when performing code placement.
They define an array of logical caches, equal in size and address alignment to the hardware cache.
Code placed within a single logical cache is guaranteed never to conflict with any other code in
that logical cache. Though there is a sub-area of all logical caches that is reserved for the most
frequently executed basic blocks, there is no general mechanism for calculating the placement costs
across different logical caches. Their code placement is guided by execution counts of edges
between basic blocks, and therefore does not capture temporal ordering information.
The history mechanism we use to analyze the temporal behavior of a trace is similar to the problem
of profiling paths in a procedure call graph. Ammons et al. [1] describe a way of implementing
efficient path profiling. However, the data structure generated by this technique cannot be used
in place of our TRG, because it does not capture sufficient temporal ordering information.
8 Conclusion
We have presented a method for extracting temporal ordering information from a trace. We then
described a procedure-placement algorithm that uses this information along with the knowledge
of the cache lines each procedure occupies to predict accurately which placements will result in
the least number of conflict misses. The results show that these two factors combined allow us to
obtain better instruction cache miss rates than previous procedure-placement techniques. Other
code-placement techniques, such as "fluff removal" [8] and branch alignment [12], are orthogonal
to the problem of placing whole procedures and can therefore be combined with our technique to
achieve further improvements. The success of our experiments indicates that it is worthwhile to
continue research on the temporal behavior of applications. In particular, we plan to develop similar
techniques to optimize the behavior of applications in other layers of the memory hierarchy.
9 References
[1] "Exploiting Hardware Performance Counters with Flow and Context Sensitive Profiling."
[2] "Applications of Randomness in System Performance Measurement."
[3] "The Effect of Code Expanding Optimizations on Instruction Cache Design."
[4] "Performance Issues in Correlated Branch Prediction Schemes."
[5] "Efficient Procedure Mapping Using Cache Line Coloring."
[6] "Achieving High Instruction Cache Performance with an Optimizing Compiler."
[7] "Program Optimization for Instruction Caches."
[8] "Profile Guided Code Positioning."
[9] "Extending SUIF for Machine-dependent Optimizations."
[10] "ATOM: A System for Building Customized Program Analysis Tools."
[11] "Optimizing Instruction Cache Performance for Operating System Intensive Workloads."
[12] "Near-Optimal Intraprocedural Branch Alignment."
Keywords: profiling; conflict misses; code layout
Predicting Data Cache Misses in Non-Numeric Applications through Correlation Profiling

Abstract
To maximize the benefit and minimize the overhead of software-based latency tolerance techniques, we
would like to apply them precisely to the set of dynamic references that suffer cache misses. Unfortunately,
the information provided by the state-of-the-art cache miss profiling technique (summary profiling) is
inadequate for references with intermediate miss ratios: it results in either failing to hide latency, or else
inserting unnecessary overhead. To overcome this problem, we propose and evaluate a new technique,
correlation profiling, which improves predictability by correlating the caching behavior with the associated
dynamic context. Our experimental results demonstrate that roughly half of the 22 non-numeric applications
we study can potentially enjoy significant reductions in memory stall time by exploiting at least one of the
three forms of correlation profiling we consider.

1 Introduction
As the disparity between processor and memory speeds continues to grow, memory latency is becoming an
increasingly important performance bottleneck. Cache hierarchies are an essential step toward coping with
this problem, but they are not a complete solution. To further tolerate latency, a number of promising
software-based techniques have been proposed. For example, the compiler can tolerate modest latencies by
scheduling non-blocking loads early relative to when their results are consumed [12], and can tolerate larger
latencies by inserting prefetch instructions [7, 9].
While these software-based techniques provide latency-hiding benefits, they also typically incur runtime
overheads. For example, aggressive scheduling of non-blocking loads increases register lifetimes which can
lead to spilling, and software-controlled prefetching requires additional instructions to compute prefetch
addresses and launch the prefetches themselves. While the benefit of a technique typically outweighs its
overhead whenever a miss is tolerated, the overhead hurts performance in cases where the reference would
have enjoyed a cache hit anyway. Therefore to maximize overall performance, we would like to apply a
latency-tolerance technique only to the precise set of dynamic references that would suffer misses. While
previous work has addressed this problem for numeric codes [9], this paper focuses on the more difficult but
important case of isolating dynamic miss instances in non-numeric applications.
1.1 Predicting Data Cache Misses in Non-Numeric Codes
To overcome the compiler's inability to analyze data locality in non-numeric codes, we can instead make
use of profiling information. One simple type of profiling information is the precise miss ratios of all static
memory references. Throughout the remainder of this paper, we will refer to this approach as summary
profiling, since the miss ratio of each memory reference is summarized as a single value.
If summary profiling indicates that all significant memory reference instructions (i.e. those which are
executed frequently enough to make a non-trivial contribution to execution time) have miss ratios close to
0% or 100%, then isolating dynamic misses is trivial-we simply apply the latency-tolerance technique only
to the static references which always suffer misses. In contrast, if the important references have intermediate
miss ratios (e.g., 50%), then we do not have sufficient information to distinguish which dynamic instances hit
or miss, since this information is lost in the course of summarizing the miss ratio. The current state-of-the-art
approach for dealing with intermediate miss ratios is to treat all static memory references with miss ratios
above or below a certain threshold as though they always miss or always hit, respectively [2]. However, this
all-or-nothing strategy will fail to hide latency when references are predicted to hit but actually miss, and
will induce unnecessary overhead when references are predicted to miss but actually hit. Rather than settling
for this sub-optimal performance, we would prefer to predict dynamic hits and misses more accurately.
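The two failure modes of this all-or-nothing strategy can be made concrete with a small cost model (hypothetical names; n is the load's execution count):

```c
/* Under summary profiling, a static load with miss ratio m is treated
   as always-miss when m >= threshold, else always-hit.  Predicting
   miss wastes overhead on the (1-m)*n hits; predicting hit leaves
   m*n misses unhidden. */
typedef struct {
    double unhidden_misses;   /* misses we failed to tolerate */
    double wasted_overhead;   /* hits that paid needless overhead */
} Cost;

Cost threshold_cost(double m, double n, double threshold) {
    Cost c = {0.0, 0.0};
    if (m >= threshold) c.wasted_overhead = (1.0 - m) * n;
    else                c.unhidden_misses = m * n;
    return c;
}
```

A load with m = 0.5 pays one of the two penalties on half of its executions no matter where the threshold is set, which is why intermediate miss ratios motivate correlation profiling.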
1.1.1 Correlation Profiling
By exposing caching behavior directly to the user, informing memory operations [6] enable new classes of
lightweight profiling tools which can collect more sophisticated information than simply the per-reference
miss ratios. For example, cache misses can be correlated with information such as recent control-flow paths,
whether recent memory references hit or missed in the cache, etc., to help predict dynamic cache miss
behavior. We will refer to this approach as correlation profiling.
Figure 1 illustrates how correlation profiling information might be exploited. The load instruction shown
in Figure 1 has an overall miss ratio of 50%. However, depending on the dynamic context of the load, we
may see more predictable behavior. In this example, contexts A and B result in a high likelihood of the
load missing, whereas contexts C and D do not. Hence we would like to apply a latency tolerance technique
within contexts A and B but not C or D.
The dynamic contexts shown in Figure 1 should be viewed simply as non-overlapping sets of dynamic
instances of the load which can be grouped together because they share a common distinguishable pattern. In
this paper, we consider three different types of information which can be used to distinguish these contexts.
The first is control-flow information-i.e. the sequence of N basic block numbers preceding the load. The
other two are based on sequences of cache access outcomes (i.e. hit or miss) for previous memory references:
self correlation considers the cache outcomes of the previous N dynamic instances of the given static reference,
and global correlation refers to the previous N dynamic references across the entire program. Note that
Figure 1: Example of how correlating cache misses with the dynamic context may improve predictability.
(X/Y means X misses out of Y dynamic references.)
analogous forms of all three types of correlation profiling have been explored previously in the
context of branch prediction [4, 10, 15, 16].
1.2 Objectives and Overview
The goal of this paper is to determine whether correlation profiling can predict data cache misses more
accurately in non-numeric codes than summary profiling, and if so, can we translate this into significant
performance improvements by applying software-based latency tolerance techniques with greater precision.
We focus specifically on predicting load misses in this paper because load latency is fundamentally more
difficult to tolerate (store latency can be hidden through buffering and pipelining). Although we rely on
simulation to capture our profiling information in this study, correlation profiling is a practical technique
since it could be performed with relatively little overhead using informing memory operations [6].
The remainder of this paper is organized as follows. We begin in Section 2 by discussing the three different
types of history information that we use for correlation profiling, and in Section 3 we present a qualitative
analysis of the expected performance benefits. In Section 4, we present our experimental results which
quantify the performance advantages of correlation profiling in a collection of 22 non-numeric applications.
In addition, in Section 5, we report the memory-access behaviors of individual applications which explain
when and how correlation profiling is effective. In Section 6, we compare the performance of software
prefetching guided by summary and correlation profiling on a modern superscalar processor. Finally, we
discuss related work and present conclusions in Sections 7 and 8.
2 Profiling Techniques
In this section, we propose and motivate three new correlation profiling techniques for predicting cache
outcomes: control-flow correlation, self correlation, and global correlation.
2.1 Control-Flow Correlation
Our first profiling technique correlates cache outcomes with the recent control-flow paths. To collect this
information, the profiling tool maintains the N most recent basic block numbers in a FIFO buffer, and
matches this pattern against the hit/miss outcomes for a given memory reference. Intuitively, control-flow
correlation is useful for detecting cases where either data reuse or cache displacement are likely.
If we are on a path which leads to data reuse-either temporal or spatial-then the next reference is
likely to be a cache hit. Consider the example shown in Figure 2(a)-(b), where a graph is traversed by the
recursive procedure walk(). Any cyclic paths (e.g., A->B->C->D->A or P->Q->R->S->P) will result in
temporal reuse of p->data. In this example, control-flow correlation can potentially detect that if the last
four traversal decisions lead to a cycle (e.g., right, down, left, and up), then there is a high probability that
the next p->data reference will enjoy a cache hit.
Some control-flow paths may increase the likelihood of a cache miss by displacing a data line before it is
reused. For example, if the "x > 0" condition is true in Figure 2(c), then the subsequent for loop is likely
struct node {
    int data;
    struct node *left, *right, *up, *down;
};

void walk(node* p) {
    if (go_left(p->data))        p = p->left;
    else if (go_right(p->data))  p = p->right;
    else if (go_up(p->data))     p = p->up;
    else if (go_down(p->data))   p = p->down;
    else                         p = NULL;
    if (p != NULL) walk(p);
}

(a) Code with data reuse    (b) Example graph    (c) Code with cache displacement

Figure 2: Examples of how control-flow correlation can detect data reuse and cache displacement.
(Control-flow profiled loads are underlined.)
void preorder(treeNode* p) {
    if (p != NULL) {
        ... = p->data;    /* profiled load */
        preorder(p->left);
        preorder(p->right);
    }
}

(a) Example Code    (b) Tree constructed and traversed both in preorder

Figure 3: Example of using self-correlation profiling to detect spatial locality for p->data. (Consecutively
numbered nodes are adjacent in memory.)
to displace *p from the primary cache before it can be loaded again. Note that while paths which access
large amounts of data are obvious problems, the displacement might also be due to a mapping conflict.
2.2 Self Correlation
Under self correlation, we profile a load L by correlating its cache outcome with the N previous cache
outcomes of L itself. This approach is particularly useful for detecting forms of spatial locality which are
not apparent at compile time. For example, consider the case in Figure 3 where a tree is constructed in
preorder, assuming that consecutive calls to the memory allocator return contiguous memory locations, and
that a cache line is large enough to hold exactly two treeNodes. Depending on the traversal order (and the
extent to which the tree is modified after it is created), we may experience spatial locality when the tree
is subsequently traversed. For example, if the tree is also traversed in preorder, we will expect p->data to
suffer misses on every other reference as cache line boundaries are crossed. Therefore despite the fact that
the overall miss ratio of p->data is 50% and the compiler would have difficulty recognizing this as a form of
spatial locality, self correlation profiling would accurately predict the dynamic cache outcomes for p->data.
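A toy simulation of this scenario, under the assumptions stated above (contiguous allocation, two nodes per cache line, and a cache large enough that nothing is evicted); the names are ours:

```c
/* Nodes are allocated at consecutive preorder indices, two per cache
   line.  If the traversal visits nodes in allocation order, the first
   node on each line misses and the second hits: the alternating
   pattern that self correlation detects. */
#define NODES_PER_LINE 2

void simulate_outcomes(const int *visit_order, int n, int *miss_out) {
    int seen[64] = {0};   /* seen[l] = 1 once line l is cached; assumes
                             node indices < 128 and no evictions */
    for (int i = 0; i < n; i++) {
        int line = visit_order[i] / NODES_PER_LINE;
        miss_out[i] = !seen[line];
        seen[line] = 1;
    }
}
```

Visiting nodes 0,1,2,3,... yields miss,hit,miss,hit,..., so a self-correlation table keyed on the previous outcomes predicts each access perfectly even though the summary miss ratio is 50%.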
2.3 Global Correlation
In contrast with self correlation, the idea behind global correlation is to correlate the cache outcome of a load
L with the previous N cache outcomes regardless of their positions within the program. The profiling tool
maintains this pattern using a single N-deep FIFO which is updated whenever dynamic cache accesses occur.
while (1) {
    register int i = ...;
    register listNode* curr = htab[i];
    while (curr != NULL) {
        ... = curr->data;
        curr = curr->next;
    }
}

(a) Example code    (b) Hash table accesses

Figure 4: Example of using global-correlation profiling to detect bursty cache misses for curr->data.
Note that since earlier instances of L itself may appear in this global history pattern, global correlation may
capture some of the same behavior as self correlation (particularly in extremely tight loops).
Intuitively, global correlation is particularly helpful for detecting bursty patterns of misses across multiple
references. One example of this situation is when we move to a new portion of a data structure that has not
been accessed in a long time (and hence has been displaced from the cache), in which case the fact that the
first access to an object suffers a miss is a good indication that associated references to neighboring objects
will also miss. Figure 4 illustrates such a case where a large hash table (too large to fit in the cache) is
organized as an array of linked lists. In this case, we might expect a strong correlation between whether
htab[i] (the list head pointer) misses and whether subsequent accesses to curr->data (the list elements)
also miss. Similarly, if the same entry is accessed twice within a short interval (e.g., htab[10]), the fact that
the head pointer hits is a strong indicator that the list elements (e.g., A->data and B->data) will also hit.
In summary, by correlating cache outcomes with the context in which the reference occurs-e.g., the
surrounding control flow or the cache outcomes of prior references-we can potentially predict the dynamic
caching behavior more accurately than what is possible with summarized miss ratios.
3 Qualitative Analysis of Expected Benefits
Before presenting our quantitative results in later sections, we begin in this section by providing some
intuition on how correlation profiling can improve performance. A key factor which dictates the potential
performance gain is the ratio of the latency tolerance overhead (V ) to the cache miss latency (L). In the
extreme cases where V = 0 or V ≥ L, there is no point in applying the latency tolerance technique (T)
selectively, since it either has no cost or no benefit. When 0 < V < L, applying T selectively may
be important.
Figure 5(a) illustrates how the average number of effective stall cycles per load (CPL) varies as a function
of V/L for various strategies for applying T. (Note that our CPL metric includes any overhead associated
with applying T , but does not include the single cycle for executing the load instruction itself.) If T is never
applied, then the CPL is simply mL, where m is the average miss ratio. At the other extreme, if we always
apply T , then the latency will always be hidden, but all references (even those that normally hit) will suffer
the overhead V, and hence the CPL is V. Note that when V > mL, it is better to never apply T rather than
always applying it. Figure 5(b) shows an alternative view of CPL, where it is plotted as a function of m for
a fixed V/L. Again, we observe that the choice of whether to always or never apply T depends on the value of
m relative to V/L.
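The all-or-nothing policies just described can be written out directly. This is a sketch of the cost model from the text; the particular m, V, and L values used below are illustrative assumptions, not figures from the paper.

```c
#include <assert.h>

/* Effective stall cycles per load (CPL) under the policies in the text. */
static double cpl_never(double m, double L)  { return m * L; } /* eat every miss */
static double cpl_always(double V)           { return V; }     /* pay overhead on every load */
static double cpl_ideal(double m, double V)  { return m * V; } /* overhead only on true misses */
```

For example, with m = 0.1 and L = 50, never applying T costs 5 cycles per load, so it beats always applying T exactly when V > mL = 5.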
To achieve better performance than this all-or-nothing approach, we apply the same decision-making
process (i.e., comparing the miss ratio with V/L) to more refined sets of loads. In the ideal case, we would
consider and optimize each dynamic reference individually (the resulting CPL of mV is shown in Figure 5).
However, since this is impractical for software-based techniques, we must consider aggregate collections of
references. Since summary profiling provides only a single miss ratio per static reference, the finest granularity
[Figure 5: two plots comparing the CPL of the never, always, single_action_per_load, multiple_actions_per_load, and ideal policies: (a) CPL vs. V/L, and (b) CPL vs. m.]
Figure 5: Illustration of the CPL for different approaches of applying a latency tolerance scheme (m, V, and L are the overall average load miss ratio, latency tolerance overhead, and load miss latency, respectively).
at which we can decide whether or not to apply T is once for all dynamic instances of a given static reference.
Figure 5 illustrates the potential shape of this "single action per load" curve, which is bounded by the cases
where T is never, always, and ideally applied. Since correlation profiling distinguishes different sets of
dynamic instances of a static load based on path information, it allows us to make decisions at a finer
granularity than with summary profiling. Therefore we can potentially achieve even better performance, as
illustrated by the "multiple actions per load" curve in Figure 5. (Further details on the actual CPL equations
for the summary and correlation profiling cases can be found in the Appendix.)
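The "single action per load" decision can be sketched as follows; correlation profiling makes the same decision, only applied to finer groups (the dynamic instances along one path) rather than to a whole static load. The function name is hypothetical.

```c
#include <assert.h>

/* For a group of references with miss ratio m (one static load under
 * summary profiling, or one path under correlation profiling), apply
 * T iff m > V/L, and return the resulting CPL contribution. */
static double cpl_single_action(double m, double V, double L) {
    double never  = m * L;   /* CPL if we leave the group alone */
    double always = V;       /* CPL if we tolerate every instance */
    return (m > V / L) ? always : never;
}
```

With V = 10 and L = 50 the threshold is 0.2, so a load with m = 0.1 is left alone (CPL = 5) while one with m = 0.5 is always tolerated (CPL = 10).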
4 Quantitative Evaluation of Performance Gains
In this section, we present experimental results to quantify the performance benefits offered by correlation
profiling. We begin by measuring and understanding the potential performance advantages for a generic
latency tolerance scheme. Later, in Section 6, we will focus on software-controlled prefetching as a specific
case study.
4.1 Experimental Methodology
We measured the impact of correlation profiling on the following 22 non-numeric applications: the entire
SPEC95 integer benchmark suite, the additional integer benchmarks contained in the SPEC92 suite, uniprocessor
versions of two graphics applications from SPLASH-2 [14], eight applications from Olden [11] (a suite
of pointer-intensive benchmarks), and the standard UNIX utility awk. Table 1 briefly summarizes these
applications, including the input data sets that were run to completion in each case, and Table 2 shows some
relevant dynamic statistics of these applications.
We compiled each application with -O2 optimization using the standard MIPS C compilers under
IRIX 5.3. We used the MIPS pixie utility [13] to instrument these binaries, and piped the resulting
trace into our detailed performance simulator. To increase simulation speed and to simplify our analysis,
we model a perfectly-pipelined single-issue processor (similar to the MIPS R2000) in this section. (Later, in
Section 6, we model a modern superscalar processor: the MIPS R10000).
To reduce the simulation time, our simulator performs correlation profiling only on a selected subset of
load instructions. Our criteria for profiling a load is that it must rank among the top 15 loads in terms
of total cache miss count, and its miss ratio must be between 10% and 90%. Using this criteria, we focus
only on the most significant loads which have intermediate miss ratios. We will refer to these loads as the
correlation-profiled loads. The fraction of dynamic load references in each application that is correlation
profiled is shown in Table 2.
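The selection criterion above might look as follows in code. The struct and its field names are hypothetical, standing in for the simulator's bookkeeping.

```c
#include <assert.h>

typedef struct {
    long misses;   /* total cache misses for this static load */
    long refs;     /* total dynamic references to it */
    int  rank;     /* 1 = highest total miss count */
} LoadStats;

/* Profile a load only if it is among the top 15 by miss count
 * and its miss ratio lies between 10% and 90%. */
static int should_correlation_profile(const LoadStats *s) {
    double ratio = (double)s->misses / (double)s->refs;
    return s->rank <= 15 && ratio >= 0.10 && ratio <= 0.90;
}
```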
Table 1: Benchmark characteristics.

Suite           Name       Description                                              Input Data Set                        Cache Size
SPECint95       perl       Unix script language Perl                                train (scrabbl)                       128 KB
                go         Computer game "Go"                                       train                                 8 KB
                ijpeg      Graphic compression and decompression                    train                                 8 KB
                vortex     Database program                                         train                                 8 KB
                compress   Compresses and decompresses file in memory               train
                li         LISP interpreter                                         train                                 8 KB
SPECint92       espresso   Minimization of boolean functions                        cps
                eqntott    Translation of boolean equations into truth tables       int_pri_3.eqn                         8 KB
SPLASH-2        raytrace   Ray-tracing program                                      car                                   4 KB
                radiosity  Light distribution using radiosity method                batch                                 8 KB
Olden           bh         Barnes-Hut's N-body force-calculation                    4K bodies                             16 KB
                mst        Finds the minimum spanning tree of a graph               512 nodes                             8 KB
                perimeter  Computes perimeters of regions in images                 4K x 4K image                         16 KB
                health     Simulation of the Columbian health care system           max. level = 5                        16 KB
                tsp        Traveling salesman problem                               100,000 cities                        8 KB
                bisort     Sorts and merges bitonic sequences                       250,000 integers                      8 KB
                em3d       Simulates the propagation of E.M. waves in a 3D object   2000 H-nodes, 100 E-nodes             32 KB
                voronoi    Computes the voronoi diagram of a set of points          20,000 points                         8 KB
UNIX Utilities  awk        Unix script language AWK                                 Extensive test of AWK's capabilities  32 KB
We attempt to maintain as much history information as possible for the sake of correlation. For control-flow
correlation, we typically maintained a path length of 200 basic blocks-in some cases this resulted in
such a large number of distinct paths that we were forced to measure only 50 basic blocks. For the self and
global correlation experiments, we likewise maintained as long a history of previous cache outcomes (either
self or global) as was practical.
We focus on the predictability of a single level of data cache (two levels make the analysis too complicated).
The choice of data cache size is important because if it is either too large or too small relative to the
problem size, predicting dynamic misses becomes too easy (they either always hit or always miss). Therefore
we would like to operate near the "knee" of the miss ratio curve, where predicting dynamic hits and misses
presents the greatest challenge. Although we could potentially reach this knee by altering the problem size,
we had greater flexibility in adjusting the cache size within a reasonable range. We chose the data cache size
as follows. We first used summary profiling to collect the miss ratios of all loads within the application on
different cache sizes ranging from 4KB to 128KB. We then chose the cache size which resulted in the largest
number of significant loads having intermediate miss ratios-these sizes are shown in Table 1. In all cases,
we model a two-way set-associative cache.
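The cache-size selection procedure just described reduces to picking the candidate size that maximizes the number of significant loads with intermediate miss ratios. The sketch below makes that explicit; the counts are hypothetical stand-ins for the per-size summary profiles.

```c
#include <assert.h>

/* Given candidate cache sizes and, for each size, the number of
 * significant loads whose summary-profiled miss ratios are
 * intermediate, return the size that maximizes that count. */
static int pick_cache_size_kb(const int sizes_kb[],
                              const int intermediate_loads[], int n) {
    int best = 0;
    for (int i = 1; i < n; i++)
        if (intermediate_loads[i] > intermediate_loads[best])
            best = i;
    return sizes_kb[best];
}
```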
4.2 Improvements in Prediction Accuracy and Performance
Figure 6 shows how the three correlation profiling schemes, control-flow (C), self (S), and global (G),
improve the prediction accuracy of correlation-profiled loads. Each bar is normalized with respect to the
number of mispredicted references in summary profiling (P), and is broken down into two categories. The
top section ("Predict HIT / Actual MISS") represents a lost opportunity where we predict that a reference
hits (and thus do not attempt to tolerate its latency), but it actually misses. The "Predict MISS / Actual
HIT" section accounts for wasted overhead where we apply latency tolerance to a reference that actually
hits.
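The four prediction/outcome combinations used as bar segments in Figures 6 and 7 can be made explicit as follows (a small sketch; the enum and function names are ours, not the simulator's):

```c
#include <assert.h>

typedef enum {
    PRED_MISS_ACT_MISS,  /* useful overhead: latency tolerated */
    PRED_MISS_ACT_HIT,   /* wasted overhead */
    PRED_HIT_ACT_MISS,   /* lost opportunity: untolerated miss */
    PRED_HIT_ACT_HIT     /* correctly left alone */
} Outcome;

/* Classify one dynamic reference by predicted vs. actual behavior. */
static Outcome classify(int predicted_miss, int actual_miss) {
    if (predicted_miss)
        return actual_miss ? PRED_MISS_ACT_MISS : PRED_MISS_ACT_HIT;
    return actual_miss ? PRED_HIT_ACT_MISS : PRED_HIT_ACT_HIT;
}
```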
As discussed earlier in Section 3, our threshold for deciding whether to apply latency tolerance to a
reference is that its miss ratio must exceed V/L, where V is the latency tolerance overhead and L is the miss
latency. For summary profiling, this threshold is applied to the overall miss ratio of an instruction; for
correlation profiling, it is applied to groups of dynamic references along individual paths. Figure 6 shows
results with two values of V/L. With the smaller threshold, summary profiling tends to apply latency tolerance
aggressively, thus resulting in a noticeable amount of wasted overhead. In contrast, with the larger threshold,
summary profiling tends to be more conservative, thus resulting in many untolerated misses. Overall, correlation
Table 2: Dynamic benchmark statistics ("Insts" is the number of dynamic instructions; "Loads" is the
number of dynamic loads, with its percentage of "Insts" in parentheses; "Load Miss Rate" is the data-cache
miss rate of loads; "CP Load Refs" is the fraction of dynamic loads that are correlation profiled; and
"CP Load Misses" is the fraction of load misses that are correlation profiled).

Suite           Name       Insts   Loads       Load Miss Rate  CP Load Refs  CP Load Misses
SPECint95       perl       79M     15M (18%)   12.3%           21%           95%
                go         568M    121M (21%)  7.1%            10%           23%
                ijpeg      1438M   266M (18%)  2.7%            2%            17%
                vortex     2838M   830M (29%)  3.3%            7%            48%
                compress   39M     8M (20%)    3.9%            6%            87%
                gcc        282M    61M (22%)   1.4%            2%            40%
                li         228M    54M (24%)   4.0%            8%            73%
SPECint92       espresso   560M    112M (20%)  2.2%            6%            70%
SPLASH-2        raytrace   2105M   588M (28%)  4.8%            10%           53%
                radiosity  996M    236M (24%)  0.4%            1%            32%
Olden           bh         2326M   667M (29%)  1.0%            3%            82%
                mst        90M     14M (16%)   6.9%            17%           91%
                perimeter  123M    17M (14%)   2.3%            5%            88%
                health     8M      2M (25%)    9.0%            20%           84%
                tsp        825M    239M (29%)  1.0%            1%            37%
                bisort     732M    132M (18%)  2.5%            6%            74%
                em3d       420M    73M (17%)   1.4%            4%            98%
                voronoi    263M    87M (16%)   1.3%            4%            57%
UNIX Utilities  awk        70M     9M (7%)     7.6%            16%           90%
profiling can significantly reduce both types of misprediction.
To quantify the performance impact of this increased prediction accuracy, Figure 7 shows the resulting
execution time of the four profiling schemes, assuming a cache miss latency of 50 cycles. Each bar is
normalized to the execution time without latency tolerance, and is broken down into four categories. The
bottom section is the busy time. The section above it ("Predict MISS / Actual MISS") is the useful overhead
paid for tolerating references that normally miss. The top two sections represent the misprediction penalty,
including wasted overhead ("Predict MISS / Actual HIT") and untolerated miss latency ("Predict HIT /
Actual MISS").
The degree to which improved prediction accuracy translates into reduced execution time 1 depends not
only on the relative importance of load stalls but also the fraction of loads that are correlation profiled. When
both factors are favorable (e.g., eqntott), we see large performance improvements; when either factor is
small (e.g., perimeter and tsp), the performance gains are modest despite large improvements in prediction
accuracy.
5 Case Studies
To develop a deeper understanding of when and why correlation profiling succeeds, we now examine a
number of the applications in greater detail. In addition to discussing the memory access patterns for these
applications, we also show the impact of the correlation-profiled loads on three performance metrics: the
miss ratio distribution, the stall cycles per load (CPL) due to correlation-profiled loads only, and the overall
CPI. While CPL and CPI measure the impacts on execution time, the miss ratio distribution gives us
insight into how effectively correlation profiling has isolated the dynamic hit and miss instances of static
load instructions.
1 Since failing to hide a miss is more expensive than wasting overhead, it is possible to improve performance by replacing more
expensive with less expensive mispredictions, even if the total misprediction count increases (e.g., raytrace with control-flow
correlation).
[Figure 6 contains one bar chart per benchmark; each bar is broken into "Predict HIT / Actual MISS" and "Predict MISS / Actual HIT" components. The maximum path lengths used in control-flow correlation were 200 basic blocks for most benchmarks, 100 for ijpeg and li, and 50 for eqntott, go, health, and raytrace.]
Figure 6: Number of mispredicted correlation-profiled loads, normalized to summary profiling (P = summary profiling, C = control-flow correlation, S = self correlation, G = global correlation). Maximum path lengths used in control-flow correlation are indicated next to the benchmark names.
5.1 li
Over half of the total load misses are caused by two pointer dereferences: this->n_flags in mark(), and
p->n_flags in sweep(), as illustrated by the pseudo-code in Figure 8.
The access patterns behave as follows. The procedure mark() traverses a binary tree through the three
while loops shown in Figure 8(a). Starting at a particular node, the first inner while loop continues
descending the tree-choosing either the left or right child as it goes-until it reaches either a marked node
or a leaf node. At this point, we then back up to a node where we can continue descending through a search
[Figure 7 contains one bar chart per benchmark showing normalized execution time under the four profiling schemes; each bar is broken into Busy, "Predict MISS / Actual MISS" (useful overhead), "Predict MISS / Actual HIT" (wasted overhead), and "Predict HIT / Actual MISS" (untolerated miss latency) components.]
Figure 7: Impact of the profiling schemes on execution time, assuming a 50 cycle miss latency (L) (P = summary profiling, C = control-flow correlation, S = self correlation, G = global correlation).
performed by the second inner while loop. The tree is allocated in preorder, similar to the one shown
in Figure 3, except much larger. Therefore we enjoy spatial locality as long as we continue following left
branches in the tree, but spatial locality is disrupted whenever we backup in the second inner while loop,
as illustrated by Figure 8(c).
All three types of correlation profiling provide better cache outcome predictions than summary profiling
for the this->n_flags reference in mark() for li. Self correlation detects this form of spatial locality
effectively. Global correlation is more accurate than summary profiling but less accurate than self correlation
in this case because the cache outcomes of other references (which do not help to predict this reference)
consume wasted space in the global history pattern. Control-flow correlation also performs well because it
void mark(NODE *ptr) {
    while (TRUE) {                       /* outer while loop */
        while (TRUE) {                   /* 1st inner while loop */
            if (this->n_flags & MARK)
                break;                   /* a marked node */
            else if (livecar(this)) {
                ...                      /* descend left */
            } else if (livecdr(this)) {
                ...                      /* descend right */
            } else break;                /* a leaf node */
        }                                /* ends 1st inner while */
        while (TRUE) {                   /* 2nd inner while loop */
            /* backup to a point where we
               can continue descending */
            ...
        }                                /* ends 2nd inner while */
    }                                    /* ends outer while */
}
(a) Procedure mark()

void sweep() {
    for (...) {
        for (...) {
            if (!(p->n_flags & MARK)) {
                ...
            }
        }
    }
}
(b) Procedure sweep()

(c) Tree traversal order in mark(): [diagram omitted]

Figure 8: Procedures mark() and sweep() in li, and the memory access patterns of mark(). (Note:
consecutively numbered nodes in part (c) correspond to adjacent addresses in memory.)
observes that this->n_flags is more likely to suffer a miss if we begin iterating in the first inner while loop
immediately following a backup performed in the second inner while loop (in the preceding outer while
loop iteration).
Finally, the reference p->n_flags in sweep() (shown in Figure 8(b)) is in fact an array reference written
in pointer form. Both self correlation and global correlation detect the spatial locality caused by accessing
consecutive elements within the array. (Although the compiler could potentially recognize this spatial locality
through static analysis if it can recognize that p->n_flags is effectively an array reference, this is not always
possible for all such cases.)
Figure 9 shows the detailed performance results for li. The miss ratio distribution in Figure 9(a) has
ten ranges of miss ratios, each of which contains four bars corresponding to the fraction of total dynamic
correlation-profiled load references that fall within this range. The bars for summary profiling represent
the inherent miss ratios of these load instructions, and the other three cases represent the degree to which
correlation profiling can effectively group together dynamic instances of the loads into separate paths with
similar cache outcome behavior. For a correlation scheme to be effective, we would like to see a "U-shaped"
distribution where references have been isolated such that they always have very high or very low miss
ratios-we refer to such a case as being strongly biased. In contrast, if most of the references are clustered
around the middle of the distribution, we say that this is weakly biased. Correlation profiling can outperform
summary profiling by increasing the degree of bias, which we do observe in Figure 9(a). With summary
profiling, 80% of the loads that we profile 2 have miss ratios in the range of 30-50% (these include the
this->n_flags and p->n_flags references shown earlier in Figure 8). In contrast, with self correlation
2 Recall that we only profile loads with miss ratios between 10% and 90% among the top 15 ranked loads in terms of their
contributions to total misses. Therefore the summary profiling case will never have loads outside of this miss ratio range.
[Figure 9: (a) miss ratio distribution of correlation-profiled load references, (b) CPL due to correlation-profiled loads, and (c) overall CPI, each shown for summary, control-flow, global, and self correlation profiling (plus an ideal case).]
Figure 9: Detailed performance results for li.
profiling only 27% of the isolated loads have miss ratios in the 30-50% range, and over 45% are either below
10% or above 90%. All three correlation schemes increase the degree of bias in this case.
This increased degree of bias of correlation-profiled loads translates into a reduction in CPL, as shown in
Figure 9(b), where the CPL due to correlation-profiled loads is plotted over a range of overhead-to-latency
ratios (V/L), assuming a miss latency of 50 cycles. As we have discussed in Section 3, correlation profiling
partially closes the gap between summary profiling and ideal prediction. The overall CPI is also shown in
Figure 9(c).
5.2 eqntott
Figure 10 shows detailed performance results for eqntott, where we see that all three forms of correlation
profiling successfully increase the degree of bias and reduce CPL (and hence CPI). We now focus on
the memory access behavior. Most of the load misses are caused by the four loads in cmppt() shown in
Figure 11(a), two of which are array references (a_ptand[i] and b_ptand[i]). Clearly the spatial locality
enjoyed by these two array references can be detected through self correlation (and hence global correlation).
However, the access patterns of the other two loads (a[0]->ptand and b[0]->ptand) are more complicated.
The procedure cmppt() has multiple call sites, and two of them, say S1 and S2, invoke it very frequently.
Whenever cmppt() is called at S1, a[0] will very likely be unchanged but b[0] will have a new value. In
contrast, whenever cmppt() is called at S2, b[0] will very likely be unchanged but a[0] will have a new
value. Moreover, both S1 and S2 repeatedly call cmppt(). This call-site dependent behavior results in the
streams of cache outcomes illustrated in Figure 11(b). Self correlation captures this streaming behavior,
and control-flow correlation also predicts the cache outcomes accurately by distinguishing the two call sites
of cmppt().
The cache outcomes of a[0]->ptand also help predict those of a_ptand[i]: if a[0]->ptand is a hit, it
implies that the array a_ptand[] has been loaded recently, and therefore the a_ptand[i] references are likely
to also hit. (Similar correlation also exists between b[0]->ptand and b_ptand[i].) Hence global correlation
is quite effective in this case. Control-flow correlation also predicts the cache outcomes of a_ptand[i] and
[Figure 10: (a) miss ratio distribution of correlation-profiled load references, (b) CPL due to correlation-profiled loads, and (c) overall CPI for summary, control-flow, global, and self correlation profiling (plus an ideal case).]
Figure 10: Detailed performance results for eqntott.
extern int ninputs, noutputs;

int cmppt (a, b)
    PTERM *a[], *b[];
{
    register int i, aa, bb;
    register int *a_ptand, *b_ptand;

    a_ptand = a[0]->ptand;
    b_ptand = b[0]->ptand;
    for (i = 0; i < ninputs; i++) {
        aa = a_ptand[i];
        bb = b_ptand[i];
        /* the famous correlated branches */
        ...
    }
    return (0);
}
(a) Procedure cmppt(), which causes most load misses

(b) Call-site dependent cache outcome patterns of a[0]->ptand and b[0]->ptand: [diagram omitted]

Figure 11: The memory access behavior in eqntott. To make all loads explicit, we rewrite the two expressions
a[0]->ptand[i] and b[0]->ptand[i] in the original cmppt() into the four loads (i.e., a[0]->ptand,
a_ptand[i], b[0]->ptand, and b_ptand[i]) shown in (a).
b_ptand[i] in an indirect fashion, by virtue of predicting those of a[0]->ptand and b[0]->ptand.
[Figure 12: (a) miss ratio distribution of correlation-profiled load references, (b) CPL due to correlation-profiled loads, and (c) overall CPI for summary, control-flow, global, and self correlation profiling (plus an ideal case).]
Figure 12: Detailed performance results for perimeter.
[Figure 13(a): a quadtree allocated in preorder; each node has left, middle_left, middle_right, and right children, and the annotation notes that more spatial locality is found at the bottom of the tree.]

void middle_first(quadTree *p) {
    if (p == NULL)
        return;
    middle_first(p->middle_left);
    middle_first(p->middle_right);
    middle_first(p->left);
    middle_first(p->right);
}
(b) Code for traversing the quadtree in (a)

Figure 13: Example of a case where more spatial locality is found at the bottom of a tree. This example
assumes that one cache line can hold three tree nodes and the tree is allocated in preorder. Nodes having
consecutive numbers are adjacent in memory.
5.3 perimeter and bisort
Figure 12 shows the detailed performance results for perimeter. The main data structures used in both
perimeter and bisort are trees: quadtrees in perimeter, and binary trees in bisort. These trees are
allocated in preorder, but the orders in which they are traversed are rather arbitrary. As a result, we do
not see very regular cache outcome patterns (such as the one illustrated in Figure 3) for these applications.
[Figure 14: (a) miss ratio distribution of correlation-profiled load references, (b) CPL due to correlation-profiled loads, and (c) overall CPI for summary, control-flow, global, and self correlation profiling (plus an ideal case).]
Figure 14: Detailed performance results for mst.
Nevertheless, there is still a considerable amount of spatial locality among consecutively accessed nodes while
we are traversing around the bottom of a tree that has been allocated in preorder. For example, if we traverse
a quadtree using the procedure middle first() shown in Figure 13, we will only miss twice upon accessing
nodes 156 through 160 at the tree's bottom, assuming that nodes 156 through 158 are in one cache line
and nodes 159 through 161 are in another. In contrast, there is relatively little spatial locality while we are
traversing the middle of the tree. Self correlation (and hence global correlation) can discover whether we
are currently in a region of the tree that enjoys spatial locality. Control-flow correlation can also potentially
detect whether we are close to the bottom of the tree by noticing the number of levels of recursive descent.
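The spatial locality at the tree's bottom follows directly from the preorder numbering. A tiny illustration, assuming (as in Figure 13) three nodes per cache line, with nodes numbered in allocation order:

```c
#include <assert.h>

#define NODES_PER_LINE 3  /* assumption taken from Figure 13 */

/* In a preorder-allocated tree, node number k falls in cache line
 * k / NODES_PER_LINE, so consecutively numbered nodes (which cluster
 * at the bottom of the tree) often share a line. */
static int cache_line_of(int node_number) {
    return node_number / NODES_PER_LINE;
}
```

Nodes 156 through 158 land in one line and nodes 159 through 161 in the next, so visiting all six costs only two misses.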
5.4 mst
Most of the misses in mst (see the detailed performance results in Figure 14) are caused by loads in
HashLookup() and the tmp->edgehash load in BlueRule(), as illustrated in Figure 15. The mst application
consists of two phases: a creation phase and a computation phase. Both phases invoke HashLookup(), but
the creation phase causes most of the misses when it calls HashLookup() to check whether a key already
exists in the hash table before allocating a new entry for it. During the computation phase, much of the data
has already been brought into the cache, and hence there are relatively few misses. Both self correlation and
global correlation accurately predict the cache outcomes of these two distinct phases, since they appear as
repeated streams of either hits or misses. Control-flow correlation is also effective since it can distinguish
the call chains which invoke HashLookup().
The load of tmp->edgehash in BlueRule() accesses a linked list whose nodes are in fact allocated at
contiguous memory locations. Consequently, self correlation detects this spatial locality accurately, but
control-flow correlation is not helpful.
void *HashLookup(int key, Hash hash) {
    int j;
    HashEntry ent;
    ...
    for (ent = hash->array[j]; ent && ent->key != key; ent = ent->next)
        ;
    if (ent) return ent->entry;
    return NULL;
}

static BlueReturn BlueRule(...) {
    ...
    for (tmp = vlist->next; tmp; prev = tmp, tmp = tmp->next) {
        ...   /* reads tmp->edgehash */
    }
    ...
}

Figure 15: Pseudo-code drawn from mst.
[Figure 16: (a) miss ratio distribution of correlation-profiled load references, (b) CPL due to correlation-profiled loads, and (c) overall CPI for summary, control-flow, global, and self correlation profiling (plus an ideal case).]
Figure 16: Detailed performance results for raytrace.
5.5 raytrace and tsp
In raytrace (refer to Figure 16 for its performance results), over 30% of load misses are caused by the
pointer dereference of tmp->bv in prims_in_box2() (see Figure 17). In subdiv_bintree(), the two calls to
prims_in_box2() copy part of the array pe of the current node btn to the arrays btn1->pe and btn2->pe,
where btn1 and btn2 are the children of btn. This process of copying pe is performed recursively on the
whole tree by create_bintree(). As a result, when prims_in_box2() is called upon a node n, we may have
used all values in the array pe (referred to as pepa in prims_in_box2()) of n before at some ancestor of
n, and hence most of the data loaded by tmp->bv is hopefully already in the cache. In this case, most references
of tmp->bv will hit in the cache. In contrast, if the values in pepa are new, all tmp->bv references will
miss. Hence self correlation captures these streams of hits and streams of misses. In theory, control-flow
correlation could also achieve good predictions by observing whether any copying occurred in the parent
node; unfortunately, the profiling tool cannot record enough state across the many control-flow changes in
subdiv_bintree() and prims_in_box2() to know what decisions were made in the parent node.
ELEMENT *prims_in_box2(pepa, ...)
{
    ...
    /* computes ovlap */
    /* no change in pepa[j] */
    if (ovlap == 1) {
        ...
    }
    ...
    return (npepa);
}

VOID subdiv_bintree(BTNODE *btn, ...)
{
    /* btn1 and btn2 are btn's children */
    ...
    prims_in_box2(btn->pe, ...);
    prims_in_box2(btn->pe, ...);
    ...
}

VOID create_bintree(BTNODE *root, ...)
{
    if (...) {
        subdiv_bintree(root, ...);
        create_bintree(root->btn[0], ...);
        create_bintree(root->btn[1], ...);
    }
}

Figure 17: Pseudo-code drawn from raytrace.
Tree tsp(Tree t, int sz, ...) {
    if (t->size <= sz) return conquer(t);
    leftval = tsp(t->left, sz, ...);
    rightval = tsp(t->right, sz, ...);
    return merge(leftval, rightval, t, ...);
}

static Tree conquer(Tree t) {
    l = makelist(t);
    ...
    for (; l; l = donext) {
        ...   /* reads l->data */
    }
    ...
}

Figure 18: Pseudo-code drawn from tsp. Procedure makelist(Tree t) slings t into a list consisting of
all nodes of t.
Similar to raytrace, tsp also traverses a binary tree recursively, and some data which is read at the current
node will be read again by its descendants. As illustrated in Figure 18, the procedure tsp() recursively
traverses the tree t and calls conquer(t) if the size of t is not greater than sz. The procedure conquer(t)
uses makelist(t) to sling every node of t into a list which is then traversed by the for loop. Therefore,
since all descendants of t are brought into the cache whenever conquer(t) is called, subsequent recursion
down t->left and t->right within tsp() results in many cache hits. Hence the l->data references either
mainly hit or mainly miss for a given node t. Self correlation captures this pattern effectively. Control-flow
correlation is also quite effective because it can observe the number of times conquer() has been called in
a given recursive descent; most misses occur the first time it is invoked.
5.6 voronoi and compress
Control-flow correlation offers the best prediction accuracy in both of these applications. Most of the
misses in voronoi are caused by loading b->next in splice(), which is called from three different places in
do_merge(), as illustrated in Figure 20(a). When splice() is called from call site 1, b->next will hit since
ldi->next loaded this same data into the cache just prior to the call. When splice() is called from the
other two call sites, b->next is more likely to miss. Hence control-flow correlation distinguishes the behavior
of these different call sites accurately. Self correlation is less effective since b->next does not have regular
cache outcome patterns.
In compress (see Figure 19 for its performance results), roughly half of the misses are caused by the hash
[Figure 19: (a) miss ratio distribution of correlation-profiled load references, (b) CPL due to correlation-profiled loads, and (c) overall CPI for summary, global, self, and control-flow correlation profiling (plus an ideal case).]
Figure 19: Detailed performance results for compress.
table access htabof[i] in the procedure compress() (see Figure 20(b)). The index i to the hash table htab
is a function of the combination of the prefix code ent and the new character c. If this combination has
been seen before, the hash probe test (htab[i] == fcode) will be true; if it has been seen recently, the
load of htab[i] is likely to hit in the cache. Since the input file we use (provided by SPEC) is generated
from a frequency distribution of common English texts, some strings will appear more often than others.
Because of this, we expect that the condition (htab[i] == fcode) should be true quite frequently once
many common strings have been entered into htab. If the last few tests of (htab[i] == fcode) are false,
the probability that the next one is true will be high, which also implies that the next reference of htab[i]
is more likely a hit. Therefore, control-flow correlation can make accurate predictions by examining the last
several outcomes of this branch.
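As an illustration of this kind of correlation, the sketch below predicts the next outcome of a two-valued event (such as the (htab[i] == fcode) test, or a load's hit/miss outcome) from its last few outcomes. This is a toy majority-vote history predictor written for illustration, not the paper's profiling infrastructure:

```python
from collections import defaultdict

class HistoryPredictor:
    """Predict the next binary outcome from the last `depth` outcomes,
    by majority vote over what followed each history pattern so far."""
    def __init__(self, depth=4):
        self.depth = depth
        self.history = ()
        # pattern -> [count of False outcomes, count of True outcomes]
        self.counts = defaultdict(lambda: [0, 0])

    def predict(self):
        seen = self.counts[self.history]
        return seen[1] >= seen[0]  # default to True on ties

    def update(self, outcome):
        # Record what actually followed the current history, then shift it.
        self.counts[self.history][int(outcome)] += 1
        self.history = (self.history + (outcome,))[-self.depth:]
```

On a repeating outcome pattern such as hit-hit-hit-miss, the predictor becomes exact as soon as each four-outcome history has been seen once, which mirrors the observation that a few previous outcomes suffice for correlation-based prediction.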
5.1.6 espresso, vortex, m88ksim, and go
For these four applications, correlation profiling mainly improves the cache outcome predictions for array
references. In espresso (see Figure 21 for its detailed performance results), many load misses are due to
array references, written in pointer form, with variable strides. Figure 22(a) shows one such example. Inside
the for loop, p is incremented by BB->wsize, whose value depends on the call chain of setup_BB_CC() and
ranges from 4 to 24 bytes. Different values result in different degrees of spatial locality, but all can be
captured by self correlation (and hence global correlation). Control-flow correlation can also make enhanced
predictions by exploiting the call-chain information.
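The effect of stride on spatial locality can be illustrated with a toy model that assumes a cold cache, no interference, and reuse only between consecutive accesses to the same cache line (the function and its parameters are hypothetical, chosen to match the 32-byte lines used later in the paper):

```python
def cache_outcomes(start, stride, n, line_size=32):
    """Hit/miss sequence (True = hit) for a strided access stream:
    an access hits only when it falls in the same cache line as the
    immediately preceding access."""
    outcomes, last_line = [], None
    for k in range(n):
        line = (start + k * stride) // line_size
        outcomes.append(line == last_line)
        last_line = line
    return outcomes
```

With a 4-byte stride the stream misses once per 8 accesses (one miss per line followed by seven hits), while a 24-byte stride misses on almost every access: different stride values yield very different, but individually regular, outcome patterns of exactly the kind self correlation can capture.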
In vortex, m88ksim, and go, many load misses are caused by array references located inside procedures,
where array indices are passed as procedure parameters. See Figure 22(b) for an example drawn from
vortex. Each of these procedures has multiple call sites, and the cache outcomes of those array references
are mainly call-site dependent. This explains why control-flow correlation offers the highest cache outcome
prediction accuracy for these three benchmarks. In vortex, the array index parameter values at a given
call are very close or even identical most of the time, but values passed at different call sites are quite
different. Consequently, references made through the same call sites will enjoy temporal and/or spatial
Figure 20: Pseudo codes drawn from (a) voronoi and (b) compress. In (a), EDGE_PAIR do_merge(...) calls
splice(QUAD_EDGE a, QUAD_EDGE b) from three call sites; ldi->next is dereferenced before call site 1, while
ldj and ldk are not dereferenced before call sites 2 and 3. In (b), compress() tests if (htab[i] == fcode)
inside a while loop and, in the else branch, stores fcode into htab.
locality, but those made through different call sites will not. Since a procedure is usually invoked multiple
times by the same call site before being invoked by another call site, this results in a streaming pattern of a
miss followed by several hits; hence self correlation also performs well in vortex by capturing these cache
outcome patterns.
5.2 Lessons Learned from All Case Studies
Although global correlation makes excellent predictions in some cases by correlating behavior across different
load instructions (e.g., eqntott), in most cases it essentially assimilates self correlation, but does not perform
quite as well since it records less history for a given load. Self correlation is often successful since it recognizes
forms of spatial locality which are not recognizable at compile time (e.g., li, perimeter, bisort, and mst),
and also long runs of either all hits or all misses (e.g., eqntott, mst, tsp, and raytrace). We often find that
as few as four previous cache outcomes per reference are sufficient to achieve good predictability with self
correlation. By capturing call chain information, control-flow correlation can distinguish behavior based on
call sites (e.g., eqntott, espresso, vortex, m88ksim, go, mst and voronoi) and the depth of the recursion
while traversing a tree (e.g., perimeter, bisort, and tsp).
Roughly half of the applications enjoy significant improvements from both control-flow and self correlation,
and in many of these cases we observe that the same load references can be successfully predicted by both
forms of correlation. This is good news, since control-flow correlation profiling is the easiest case to exploit
in practice by using procedure cloning [5] to distinguish call-chain dependent behavior.
6 Applying Correlation Profiling to Prefetching
To demonstrate the practicality of correlation profiling, we used both summary and correlation profiling to
guide the manual insertion of prefetch instructions into three applications (eqntott, tsp, and raytrace).
In the case of correlation profiling, we used procedure cloning [5] to isolate different dynamic instances of a
static reference, and adapted the prefetching strategy accordingly with respect to the call sites. We assumed
that V and L take fixed average values when
deciding whether to insert prefetches,3 and we performed fully-detailed simulations of a
processor similar to the MIPS R10000 [8] (details of the memory hierarchy are shown in Figure 23(a)).
3 We assume an average prefetch overhead (V ) of two cycles, and an average miss latency (L) of 20 cycles.
Figure 21: Detailed performance results for espresso: (a) miss ratio distribution of correlation-profiled load
references for summary, control-flow, global, and self correlation; (b) CPL due to correlation-profiled loads;
(c) overall CPI.
Figure 22: Pseudo codes drawn from (a) espresso and (b) vortex. In (a), setup_BB_CC(pcover BB, pcover CC)
computes last = p + BB->count * BB->wsize; in (b), boolean ChkGetChunk(numtype ChunkNum, ...) accesses
arrays through its index parameters.
Figure 23(b) shows the resulting execution times, normalized to the case without prefetching. For these
applications, summary-profiling directed prefetching actually hurts performance due to the overheads of
unnecessary prefetches. In contrast, correlation profiling provides measurable performance improvements by
isolating dynamic hits and misses more effectively, thereby achieving similar benefits with significantly less
overhead. We would also like to point out that these numbers do not represent the limit of what correlation
can achieve. For example, with an 8KB primary data cache, correlation profiling offers a 10% speedup over
summary profiling in the case of eqntott.
7 Related Work
Abraham et al. [2] investigated using summary profiling to associate a single latency tolerance strategy (i.e.
either attempt to tolerate the latency or not) with each profiled load. They used this approach to reduce
Memory parameters for the MIPS R10000 simulator:
  Primary instruction and data caches: 32KB, 2-way set-associative
  Unified secondary cache: 2MB, 2-way set-associative
  Line size: 32B
  Primary-to-secondary miss latency: 12 cycles
  Primary-to-memory miss latency:
  Data cache miss handlers (MSHRs): 8
  Data cache banks: 2
  Data cache fill time (requires exclusive access): 4 cycles
  Main memory bandwidth: 1 access per 20 cycles

Figure 23: Impact of correlation profiling on prefetching performance: (a) memory parameters; (b) execution
time of eqntott, tsp, and raytrace, normalized to the case of no prefetching, for prefetching directed by
summary profiling and prefetching directed by correlation profiling, broken into busy time and load, store,
and instruction stall time.
the cache miss ratios of nine SPEC89 benchmarks, including both integer and floating-point programs. In
a follow-up study [1], they also report the improvement in effective cache miss ratio. In contrast with this
earlier work, our study has focused on correlation profiling, which is a novel technique that provides superior
prediction accuracy relative to summary profiling.
Ammons et al. [3] used path profiling techniques to observe that a large fraction of primary data cache
misses in the SPEC95 benchmarks occur along a relatively small number of frequently executed paths.
The three forms of correlation explored in this study (control-flow, self, and global) were inspired by earlier
work on using correlation to enhance branch prediction accuracies [4, 10, 15, 16]. While branch outcomes
and cache access outcomes are quite different, it is interesting to observe that correlation-based prediction
works well in both cases.
8 Conclusions
To achieve the full potential of software-based latency tolerance techniques, we have proposed correlation
profiling, which is a technique for isolating which dynamic instances of a static memory reference are likely to
suffer cache misses. We have evaluated the potential performance benefits of three different forms of correlation
profiling on a wide variety of non-numeric applications. Our experiments demonstrate that correlation
profiling techniques always outperform summary profiling by increasing the degree of bias in the miss ratio
distribution, and this improved prediction accuracy can translate into significant reductions in the memory
stall time for roughly half of the applications we study. Detailed case studies of individual applications show
that self correlation works well because the cache outcome patterns of individual references often repeat in
predictable ways, and that control-flow correlation works mainly because many cache outcomes are call-chain
dependent. Although global correlation offers superior performance in some cases, for the most part it mainly
assimilates self correlation. Finally, we observe that correlation profiling offers superior performance over
summary profiling when prefetching on a superscalar processor. We believe that these promising results may
lead to further innovations in optimizing the memory performance of non-numeric applications.
Appendix
Derivation of the Stall Cycles Per Load (CPL) under Five Latency-Tolerance Schemes

Denote the CPL under a particular tolerance scheme S by CPL_S. Let CPL_S^i be the CPL_S of load i in the
program and f_i be the fraction of references made by load i out of the total references of all loads. Then:

  CPL_S = sum_i CPL_S^i × f_i    (1)

Let L be the cycles stalled upon a load miss, V be the overhead of applying the latency-tolerance technique
T to a load reference, m_i the miss ratio of load i, and m the overall miss ratio of all loads.

CPL with no tolerance: a load reference is stalled only when it is a cache miss, so:

  CPL^i = m_i × L    (2)

CPL when T is always applied: T fully tolerates the latencies of all load references but always incurs the
overhead, so:

  CPL^i = V    (3)

CPL single action per load: the miss ratio m_i decides whether T should be applied to load i:

  CPL_single action per load^i = m_i × L if m_i ≤ V/L (i.e. not apply T), and V otherwise (i.e. apply T)    (4)

  CPL_single action per load = sum_{i in A} V × f_i + sum_{i in NA} m_i × L × f_i    (5)

where A is the set of loads with miss ratios > V/L and NA is the set of loads with miss ratios ≤ V/L.

CPL multiple actions per load: T is only applied to references of load i that belong to contexts with miss
ratios > V/L. The formula for CPL_multiple actions per load^i can be obtained by simply adding an extra level
to Equation (5) to capture the notion of contexts within load i. That is:

  CPL_multiple actions per load^i = sum_{j in A_i} V × f_{i,j} + sum_{j in NA_i} m_{i,j} × L × f_{i,j}    (6)

where A_i is the set of contexts of load i with miss ratios > V/L, NA_i is the set of contexts of load i with
miss ratios ≤ V/L, m_{i,j} is the miss ratio of context j of load i, and f_{i,j} is the fraction of references
of load i that are on context j. CPL_multiple actions per load is obtained by substituting
CPL_multiple actions per load^i into Equation (1).

CPL ideal: load-miss latencies are fully tolerated and the overhead is only incurred on miss references:

  CPL^i = m_i × V    (7)
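These five CPL formulas can be put in executable form as a sketch; the function names below are descriptive labels (only "single action per load" and "multiple actions per load" are the derivation's own names), and loads are supplied as (m_i, f_i) pairs:

```python
def cpl_no_tolerance(loads, L):
    # Equation (2) weighted by (1): a reference stalls only on a miss.
    return sum(m * L * f for m, f in loads)

def cpl_always(loads, V):
    # Equation (3): latency always tolerated, overhead always paid.
    return sum(V * f for m, f in loads)

def cpl_single_action(loads, L, V):
    # Equations (4)-(5): apply T to load i only if m_i > V/L.
    return sum((V if m > V / L else m * L) * f for m, f in loads)

def cpl_multiple_actions(loads_ctx, L, V):
    # Equation (6): the same threshold, applied per context within a load.
    total = 0.0
    for f_i, contexts in loads_ctx:  # contexts: (m_ij, f_ij within load i)
        per_load = sum((V if m > V / L else m * L) * f for m, f in contexts)
        total += per_load * f_i
    return total

def cpl_ideal(loads, V):
    # Equation (7): overhead paid only on actual misses.
    return sum(m * V * f for m, f in loads)
```

For example, a load with aggregate miss ratio 0.5 that is really one always-miss context and one always-hit context pays V on only half its references under the per-context scheme, but on all of them under the per-load scheme, which is exactly the benefit correlation profiling exposes.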
--R
Predicting load latencies using cache profiling.
Predictability of load/store instruction latencies.
Exploiting hardware performance counters with flow and context sensitive profiling.
Branch classification: a new mechanism for improving branch predictor performance.
A methodology for procedure cloning.
Informing memory operations: Providing memory performance feedback in modern processors.
MIPS Technologies Inc.
Design and evaluation of a compiler algorithm for prefetching.
Improving the accuracy of dynamic branch prediction using branch correlation.
Supporting dynamic data structures on distributed memory machines.
Software support for speculative loads.
Tracing with pixie.
The SPLASH-2 programs: Characterization and methodological considerations
A comparison of dynamic branch predictors that use two levels of branch history.
Improving the accuracy of static branch prediction using branch correlation.
--CTR
Aleksandar Milenkovic, Achieving High Performance in Bus-Based Shared-Memory Multiprocessors, IEEE Concurrency, v.8 n.3, p.36-44, July 2000
Craig Zilles , Gurindar Sohi, Execution-based prediction using speculative slices, ACM SIGARCH Computer Architecture News, v.29 n.2, p.2-13, May 2001
T. K. Tan , A. K. Raghunathan , G. Lakishminarayana , N. K. Jha, High-level software energy macro-modeling, Proceedings of the 38th conference on Design automation, p.605-610, June 2001, Las Vegas, Nevada, United States
Young , Michael D. Smith, Better global scheduling using path profiles, Proceedings of the 31st annual ACM/IEEE international symposium on Microarchitecture, p.115-123, November 1998, Dallas, Texas, United States
Jeffrey Dean , James E. Hicks , Carl A. Waldspurger , William E. Weihl , George Chrysos,
ProfileMe
Craig B. Zilles , Gurindar S. Sohi, Understanding the backward slices of performance degrading instructions, ACM SIGARCH Computer Architecture News, v.28 n.2, p.172-181, May 2000
Adi Yoaz , Mattan Erez , Ronny Ronen , Stephan Jourdan, Speculation techniques for improving load related instruction scheduling, ACM SIGARCH Computer Architecture News, v.27 n.2, p.42-53, May 1999
Chi-Hung Chi , Jun-Li Yuan , Chin-Ming Cheung, Cyclic dependence based data reference prediction, Proceedings of the 13th international conference on Supercomputing, p.127-134, June 20-25, 1999, Rhodes, Greece
Abhinav Das , Jiwei Lu , Howard Chen , Jinpyo Kim , Pen-Chung Yew , Wei-Chung Hsu , Dong-Yuan Chen, Performance of Runtime Optimization on BLAST, Proceedings of the international symposium on Code generation and optimization, p.86-96, March 20-23, 2005
Young , Michael D. Smith, Static correlated branch prediction, ACM Transactions on Programming Languages and Systems (TOPLAS), v.21 n.5, p.1028-1075, Sept. 1999
Martin Burtscher , Amer Diwan , Matthias Hauswirth, Static load classification for improving the value predictability of data-cache misses, ACM SIGPLAN Notices, v.37 n.5, May 2002
Jaydeep Marathe , Frank Mueller , Tushar Mohan , Bronis R. de Supinski , Sally A. McKee , Andy Yoo, METRIC: tracking down inefficiencies in the memory hierarchy via binary rewriting, Proceedings of the international symposium on Code generation and optimization: feedback-directed and runtime optimization, March 23-26, 2003, San Francisco, California
Chi-Keung Luk, Tolerating memory latency through software-controlled pre-execution in simultaneous multithreading processors, ACM SIGARCH Computer Architecture News, v.29 n.2, p.40-51, May 2001
Tor M. Aamodt , Paul Chow, Optimization of data prefetch helper threads with path-expression based statistical modeling, Proceedings of the 21st annual international conference on Supercomputing, June 17-21, 2007, Seattle, Washington
Jiwei Lu , Howard Chen , Rao Fu , Wei-Chung Hsu , Bobbie Othmer , Pen-Chung Yew , Dong-Yuan Chen, The Performance of Runtime Data Cache Prefetching in a Dynamic Optimization System, Proceedings of the 36th annual IEEE/ACM International Symposium on Microarchitecture, p.180, December 03-05,
Chi-Keung Luk , Robert Muth , Harish Patil , Richard Weiss , P. Geoffrey Lowney , Robert Cohn, Profile-guided post-link stride prefetching, Proceedings of the 16th international conference on Supercomputing, June 22-26, 2002, New York, New York, USA
Jaydeep Marathe , Frank Mueller , Tushar Mohan , Sally A. Mckee , Bronis R. De Supinski , Andy Yoo, METRIC: Memory tracing via dynamic binary rewriting to identify cache inefficiencies, ACM Transactions on Programming Languages and Systems (TOPLAS), v.29 n.2, p.12-es, April 2007
Characterizing the memory behavior of Java workloads: a structured view and opportunities for optimizations, ACM SIGMETRICS Performance Evaluation Review, v.29 n.1, p.194-205, June 2001
Mark Horowitz , Margaret Martonosi , Todd C. Mowry , Michael D. Smith, Informing memory operations: memory performance feedback mechanisms and their applications, ACM Transactions on Computer Systems (TOCS), v.16 n.2, p.170-205, May 1998 | non-numeric applications;profiling;latency tolerance;correlation;cache miss prediction |
266833 | Cache sensitive modulo scheduling. | This paper focuses on the interaction between software prefetching (both binding and nonbinding) and software pipelining for VLIW machines. First, it is shown that evaluating software pipelined schedules without considering memory effects can be rather inaccurate due to stalls caused by dependences with memory instructions (even if a lockup-free cache is considered). It is also shown that the penalty of the stalls is in general higher than the effect of spill code. Second, we show that in general binding schemes are more powerful than nonbinding ones for software pipelined schedules. Finally, the main contribution of this paper is an heuristic scheme that schedules some memory operations according to the locality estimated at compile time and other attributes of the dependence graph. The proposed scheme is shown to outperform other heuristic approaches since it achieves a better trade-off between compute and stall time than the others. | Introduction
Software pipelining is a well-known loop scheduling technique that tries to exploit instruction
level parallelism by overlapping several consecutive iterations of the loop and executing them in
parallel ([14]).
Different algorithms can be found in the literature for generating software pipelined sched-
ules, but the most popular scheme is called modulo scheduling. The main idea of this scheme is to
find a fixed pattern of operations (called kernel or steady state) that consists of operations from
distinct iterations. Finding the optimal scheduling for a resource constrained scenario is an NP-complete
problem, so practical proposals are based on different heuristic strategies. The key goal
of these schemes has been to achieve a high throughput (e.g. [14][11][20][18]), to minimize register
pressure (e.g. [9][6]) or both (e.g. [10][15][7][16]), but none of them has evaluated the effect
of memory. These schemes assume a fixed latency for all memory operations, which usually corresponds
to the cache-hit latency.
Lockup-free caches allow the processor not to stall on a cache miss. However, in a VLIW
architecture the processor often stalls afterwards due to true dependences with previous memory
operations. The alternative of scheduling all loads using the cache-miss latency requires considerable
instruction level parallelism and increases register pressure ([1]).
Software prefetching is an effective technique to tolerate memory latency ([4]). Software
prefetching can be performed through two alternative schemes: binding and nonbinding prefetch-
ing. The first alternative, also known as early scheduling of memory operations, moves memory
instructions away from those instructions that depend on them. The second alternative introduces
in the code special instructions, which are called prefetch instructions. These are nonfaulting
instructions that perform a cache lookup but do not modify any register.
These alternative prefetching schemes have different drawbacks:
. The binding scheme increases the register pressure because the lifetime of the value produced
by the memory operation is stretched. It may also increase the initiation interval due
to memory operations that belong to recurrences.
. The nonbinding scheme increases the memory pressure since it increases the number of
memory requests, which may produce an increase in the initiation interval. Besides it may
produce an increase in the register pressure since the lifetime of the value used to compute
the effective address is stretched. A higher register pressure may require additional spill
code, which results in additional memory pressure.
In this paper we investigate the interaction between software prefetching and software
pipelining in a VLIW machine. First we show that previous schemes that do not consider the
effect of memory penalties produce schedules that are far from the optimal when they are evaluated
taking into account a realistic cache memory. We evaluate several heuristics to schedule
memory operations and to insert prefetch instructions in a software pipelined schedule. The contributions
of stalls and spill code is quantified for each case, showing that stall penalties have a
much higher impact on performance than spill code. We then propose an heuristic that tries to
trade off both initiation interval and stall time in order to minimize the execution time of a software
pipelined loop. Finally, we show that schemes based on binding prefetch are more effective
than those based on nonbinding prefetch for software pipelined schedules.
The use of binding and nonbinding prefetching has been previously studied in [12][1] and
[4][8][13][17][3] respectively among others. However, to our knowledge there is no previous
work analyzing the interactions of these prefetching schemes with software pipelining techniques.
The selective scheduling ([1]) schedules some operations with cache-hit latency and others with
cache-miss latency, like the scheme proposed in this paper. However the selective scheduling is
based on profiling information whereas our method is based on a static analysis performed at
compile-time. In addition, the selective scheduling does not consider the interactions with software
pipelining.
The rest of this paper is organized as follows. Section 2 motivates the impact that memory
latency may have in a software pipelined loop. Section 3 evaluates the performance of simple
schemes for scheduling load and stores instructions. Section 4 describes the new algorithm proposed
in this paper. Section 5 explains the experimental methodology and presents some performance
results. Finally, the main conclusions are summarized in section 6.
2. Motivation
A software pipelined loop via modulo scheduling is characterized basically by two terms: the initiation
interval (II) and the stage counter (SC). The former indicates the number of cycles
between the initiation of successive iterations. The latter shows how many iterations are
overlapped. In this way, the execution time of the loop can be calculated as:

  t_exec = (NITER + SC - 1) × II + t_stall

where NITER is the number of iterations of the loop.
For a given architecture and a given scheduler, the first term of the sum (called compute
time in the rest of the paper) is fixed and it is determined at compile time. The stall time is mainly
due to dependences with previous memory instructions and it depends on the run-time behavior of
the program (e.g. miss ratio, outstanding misses, etc.).
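This decomposition can be written as a minimal executable sketch, assuming the compute time is (NITER + SC - 1) × II as above; the stall term must come from simulation and is passed in:

```python
def loop_execution_time(niter, ii, sc, stall_time=0.0):
    """Execution time of a modulo-scheduled loop: the compile-time
    compute term (NITER + SC - 1) * II plus the run-time stall term."""
    compute_time = (niter + sc - 1) * ii
    return compute_time + stall_time
```

The first argument group is fixed once the schedule is chosen; only stall_time varies with the memory behavior of a particular run.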
In order to minimize the execution time, classical methods have tried to minimize the initiation
interval with the goal of reducing the fixed part of t_exec. The minimum initiation interval (MII) is
bounded by resources and recurrences:

  MII = max(II_res, II_rec)

II_res is the lower bound due to resource constraints of the architecture and, assuming
that all functional units are pipelined, it is calculated as:

  II_res = max over op in ARCH of ceil(NOPS_op / NFUS_op)

where NOPS_op indicates the number of operations of type op in the loop body, and NFUS_op
indicates the number of functional units of type op in the architecture.

II_rec is the lower bound due to recurrences in the graph and it is computed as:

  II_rec = max over rec in GRAPH of ceil(LAT_rec / DIST_rec)

where LAT_rec represents the sum of all node latencies in the recurrence rec, and DIST_rec represents
the sum of all edge distances in the recurrence rec.
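The two lower bounds can be sketched in executable form; operation counts, unit counts, and recurrence latency/distance pairs are supplied as plain dictionaries and tuples (a sketch of the standard bounds, not the paper's scheduler):

```python
from math import ceil

def ii_res(ops_per_type, fus_per_type):
    """Resource-constrained lower bound: max over operation types of
    ceil(NOPS_op / NFUS_op), assuming fully pipelined functional units."""
    return max(ceil(ops_per_type[t] / fus_per_type[t]) for t in ops_per_type)

def ii_rec(recurrences):
    """Recurrence-constrained lower bound: max over recurrences of
    ceil(LAT_rec / DIST_rec); 0 if the graph has no recurrences."""
    return max((ceil(lat / dist) for lat, dist in recurrences), default=0)

def mii(ops_per_type, fus_per_type, recurrences):
    return max(ii_res(ops_per_type, fus_per_type), ii_rec(recurrences))
```

Note how increasing the latency assigned to a memory operation inside a recurrence raises LAT_rec, and hence can raise the MII; this is exactly the cost of early scheduling discussed later.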
For a particular data flow dependence graph and a given architecture, the resulting II is
dependent on the latency that the scheduler assigns to each operation. The latency of operations is
usually known by the compiler except for memory operations, which have a variable latency. The
II also depends on NOPS_op, which is affected by the spill code introduced by the scheduler.
The other parameters, NFUS_op and DIST_rec, are fixed.
Conventional modulo scheduling proposals use a fixed latency (usually the cache-hit time)
to schedule memory instructions. Scheduling instructions with their minimum latency minimizes the
register pressure, and thus reduces the spill code. On the other hand, this minimum latency scheduling
can increase the stall time because of data dependences. In particular, if an operation needs
data that has been loaded by a previous instruction but the memory access has not finished yet, the
processor stalls until the data is available.
Figure
1 shows a sample scheduling for a data dependence graph and a given architecture.
In this case, memory instructions are scheduled with cache-hit latency. If the stall time is ignored,
as is usual in studies dealing with software pipelining techniques, the expected optimistic execution
time will be (supposing NITER is huge):

  t_exec ≈ NITER × II

Obviously this is an optimistic estimation of the actual execution time, which can be rather
inaccurate. For instance, suppose that the miss ratio of the N1 load operation is 0.25 (e.g. it has
stride 1 and there are 4 elements per cache line). On every cache miss the processor stalls for some
cycles (called the penalty). The penalty for a particular memory instruction depends on the hit
latency, the miss latency and the distance in the scheduling between the memory operation and the
first instruction that uses the data produced by the memory instruction. For the dependence
between N1 and N2 the penalty is 9 cycles, so the stall time, assuming that the remaining dependences
do not produce any penalty, is:

  t_stall = 0.25 × 9 × NITER = 2.25 × NITER

and therefore t_exec = (NITER + SC - 1) × II + 2.25 × NITER.
In this case, the actual execution time is near twice the optimistic execution time. If we
assume a miss ratio of 1 instead of 0.25, the discrepancy between the optimistic and the actual
execution time is even higher. In this case, the stall time is:

  t_stall = 1 × 9 × NITER = 9 × NITER

and therefore t_exec = (NITER + SC - 1) × II + 9 × NITER.
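The arithmetic of this example can be checked with a small sketch; the penalty model max(0, miss latency - scheduling distance) is an assumption consistent with the description above, where the first use is scheduled at least the hit latency after the load:

```python
def stall_penalty(miss_latency, use_distance):
    """Cycles stalled when a load misses and its first consumer is
    scheduled use_distance cycles after it (an assumed model: the
    penalty shrinks as the use is scheduled further away)."""
    return max(0, miss_latency - use_distance)

def stall_time_per_iter(miss_ratio, penalty):
    """Average stall cycles contributed per loop iteration."""
    return miss_ratio * penalty
```

With these hypothetical numbers, a 9-cycle penalty corresponds, for example, to an 11-cycle miss latency and a use scheduled 2 cycles (the hit latency) after the load.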
Figure 1. A sample scheduling: (a) the original loop (DO ... ENDDO); (b) its data flow dependence graph
(load N1, mult, load, add, store) and the code scheduling on the ALU and MEM units; (c) the resulting kernel.
The instruction latencies (load/store, etc.) used by the example are also listed.
If all memory references were considered, the effect of the stall time could be greater, and
the discrepancy between the optimistic estimation usually utilized to evaluate the performance of
software pipelined schedulers and the actual performance could be much higher. We can also conclude
that scheduling schemes that try to minimize the stall time may provide a significant advantage
In this paper, the proposed scheduler is evaluated and compared with others using the t exec
metric. This requires to consider the run-time behavior of individual memory references, which
requires the simulation of the memory system.
3. Basic schemes to schedule memory operations
In this section we evaluate the performance of basic schemes to schedule memory operations and
point out their drawbacks, which motivates the new approach proposed in the next section.
We have already mentioned in the previous section that modulo scheduling schemes usually
schedule memory operations using the cache-hit latency. This scheme will be called cache-hit
latency (CHL). This scheme is expected to produce a significant amount of processor stalls as
suggested in the previous section.
An approach to reduce the processor stall is to insert a prefetch instruction for every memory
operation. Such instructions are scheduled at a distance equal to the cache-miss latency from
the actual memory references. This scheme will be called insert prefetch always (IPA). However,
this scheme may result in an increase in the number of operations (due to prefetch instructions but
also to some additional spill code) and therefore, it may require an II higher than the previous
approaches.
Finally, an alternative approach is to schedule all memory operations using the cache-miss
latency. This scheme will be called early scheduling always (ESA). This scheme prefetches data
without requiring additional instructions but it may result in an increase in the II when memory
instructions are in recurrences. Besides, it may also require additional spill code.
Figure
2 compares the performance of the above three schemes for some SPECfp95 benchmarks
and two different architectures (details about the evaluation methodology and the architecture
are given in section 5). Each column is split into compute and stall time. In this figure it is
also shown a lower bound on the execution time (OPT). This lower bound corresponds to the execution
of programs when memory operations are scheduled using the cache-hit latency (which
minimizes the spill code) but assuming that they always hit in cache (which results in null stall
time). This lower bound was defined as the optimistic execution time in section 2.
The main conclusion that can be drawn from Figure 2 is that the performance of the three
realistic schemes is far away from the lower bound in general. The CHL scheme results in a significant
percentage of stall time (for the aggressive architecture the stall time represents more than
50% of the execution time for most programs). The IPA scheme reduces significantly the stall
time but not completely. This is due to the fact that some programs (especially tomcatv and swim)
have cache interfering instructions at a very short distance and therefore, the prefetches are not
always effective because they may collide and replace some data before being used. Besides, the
IPA scheme results in a significant increase in the compute time for some programs (e.g., hydro2d
and turb3d among others). The ESA scheme practically eliminates all the stall time. The remaining
stall time is basically due to the lack of entries in the outstanding miss table that is used to
implement a lockup-free cache. However, this scheme increases significantly the compute time
for some programs like the turb3d (by a factor of 3 in the aggressive architecture), mgrid and
hydro2d. This is due to the memory references in recurrences that limit the II.
4. The CSMS algorithm
In this section we propose a new algorithm, which is called cache sensitive modulo scheduling
(CSMS), that tries to minimize both the compute time and the stall time. These terms are not independent
and reducing one of them may result in an increase in the other, as we have just shown in
the previous section. The proposed algorithm tries to find the best trade-off between the two
terms.
The CSMS algorithm is based on early scheduling of some selectively chosen memory
operations. Scheduling a memory operation using the cache-miss latency can hide almost all
memory latency as we have shown in the previous section without increasing much the number of
instructions (as opposed to the use of prefetch instructions). However, it can increase the execution
time in three ways:
. It may increase the register pressure, and therefore, it may increase the II due to spill code
if the performance of the loop is bounded by memory operations.
. It may increase II_rec because the latency of memory operations is augmented.
. It may increase the SC because the length of individual loop iterations may be increased.
This augments the cost of the prolog and the epilog.
Figure 2. Basic schemes performance: normalized loop execution time (split into compute and stall time)
of CHL, IPA, ESA and OPT for the SPECfp95 benchmarks tomcatv, swim, su2cor, hydro2d, mgrid and turb3d,
on (a) the simple architecture and (b) the aggressive architecture.
II
II rec
Two of the main issues addressed by the CSMS algorithm are the reduction of the impact of recurrences
on the II and the minimization of the stall time. The problem of the cost of the prolog and epilog
is handled by computing two alternative schedules. Both focus on minimizing the stall time and
the II. However, one of them reduces the impact of the prolog and the epilog at the expense of an
increase in the stall time, whereas the other does not care about the prolog and epilog cost. Then,
depending on the number of iterations of the loop, the most effective one is chosen.
The core of the CSMS algorithm is shown in Figure 3. The algorithm makes use of a static
locality analysis, in addition to other information, to determine the latency to be considered
when scheduling each individual instruction.
The locality analysis is based on the analysis presented in [19]. It is divided into three steps:
. Reuse analysis: computes the intrinsic reuse property of each memory instruction as proposed
in [21]. The goal is to determine the kind of reuse that is exploited by a reference in
each loop. Five types of reuse can be determined: none, self-temporal, self-spatial, group-temporal
and group-spatial.
. Interference analysis: using the initial address of each reference and the previous reuse
analysis, it determines whether two static instructions always conflict in the cache.
Besides, self-interferences are also taken into account by considering the stride exhibited
by each static instruction. References that interfere with themselves or with other references
are considered not to have any type of locality even if they exhibit some type of reuse.
. Volume analysis: determines which references cannot exploit their reuse because the data
has been displaced from the cache. It is based on computing the amount of data that is used
by each reference in each loop.
The analysis concludes that a reference is expected to exhibit locality if it has reuse, it does
not interfere with any other reference (including itself) and the volume of data between two
consecutive reuses is lower than the cache size.

Figure 3. CSMS algorithm

function CSMS(InnerLoop IL) return Scheduling is
  grph1 := BuildGraph(IL, latencies according to the locality analysis)
  grph2 := BuildGraph(IL, cache-miss latencies)
  if (RecurrencesInGraph) then
    sch1 := ComputeSchedMinRecEffect(grph1)
    sch2 := ComputeSchedMinRecEffect(grph2)
  else
    sch1 := ComputeScheduling(grph1)
    sch2 := ComputeScheduling(grph2)
  endif
  UpperBound := MinIterationsFavoringSch2(sch1, sch2)
  if (NITER < UpperBound) then
    return (sch1)
  else
    return (sch2)
  endif
endfunction

Figure 4. Scheduling a loop with recurrences

function ComputeSchedMinRecEffect(Graph G) return Scheduling is
  II_res := ResourceConstrainedII(G)
  foreach (Recurrence R in G) do
    if (II_rec(R) > II_res) then
      MinimizeRecurrenceEffect(R, II_res)
    endif
  endforeach
  return ComputeScheduling(G)
endfunction

function MinimizeRecurrenceEffect(Rec R, int II) return integer is
  OrderInstructionsByLocality(R)
  while (II_rec(R) > II) do
    SetCacheHitLatency(NextInstructionInLocalityOrder(R))
  endwhile
  return II_rec(R)
endfunction
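The final locality test can be sketched as follows (a minimal model of our own; the cache parameters match the evaluated 8Kb/32-byte cache, and the stride-based classifier only covers self-reuse):

```python
CACHE_SIZE = 8 * 1024   # bytes, first-level cache of the modeled machines
LINE_SIZE = 32          # bytes per cache line

def classify_self_reuse(stride):
    """Toy self-reuse classifier: a zero stride reuses the same datum
    (temporal); a stride smaller than a line reuses the same line (spatial)."""
    if stride == 0:
        return "self-temporal"
    if abs(stride) < LINE_SIZE:
        return "self-spatial"
    return "none"

def expected_to_hit(reuse, interferes, volume_between_reuses):
    """A reference is expected to exhibit locality if it has some reuse,
    does not interfere (with itself or any other reference) and the data
    touched between two consecutive reuses fits in the cache."""
    return (reuse != "none" and not interferes
            and volume_between_reuses < CACHE_SIZE)
```

Group reuse and the actual interference test require the initial addresses of the references and are omitted here.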
Initially, two data dependence graphs with the same nodes and edges are generated. The difference
is just the latency assigned to each node. In grph1, each memory node is tagged according
to the locality analysis: it is tagged with the cache-hit latency if it exhibits any type of locality or
with the cache-miss latency otherwise. In grph2, all memory nodes are tagged with the cache-miss
latency. Then, a schedule that minimizes the impact of recurrences on the II is computed for
each graph using the function ComputeSchedMinRecEffect that is shown in Figure 4. The first
step of this function is to change the latency of those memory operations inside recurrences that
limit the II from cache-miss to cache-hit until the II is limited by resources or by a more constraining
recurrence. Nodes to be modified are chosen according to a locality priority order, starting
from the ones that exhibit most locality. Then, the second step is to compute the actual scheduling
using the modified graph. This step can be performed through any of the software pipelined
schedulers proposed in the literature.
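The latency-lowering step can be sketched as follows (our own toy model: a recurrence is summarized by the latencies around its cycle and its loop-carried dependence distance, and II_rec is the usual ceiling of their ratio):

```python
from math import ceil

HIT_LAT = 1   # illustrative cache-hit latency

def ii_rec(latencies, distance):
    # Minimum II a recurrence imposes: total latency around the cycle
    # divided by the sum of loop-carried dependence distances.
    return ceil(sum(latencies) / distance)

def minimize_recurrence_effect(mem_lats, other_lats, distance, ii_res):
    """Demote memory-operation latencies to the cache-hit latency,
    most-local operations first, until the recurrence no longer
    constrains the resource-bound II (ii_res).
    mem_lats: (locality_rank, latency) pairs; lower rank = more locality."""
    lats = [lat for _, lat in sorted(mem_lats)] + list(other_lats)
    i = 0
    while ii_rec(lats, distance) > ii_res and i < len(mem_lats):
        lats[i] = HIT_LAT   # schedule this op with the hit latency
        i += 1
    return ii_rec(lats, distance)
```

If the recurrence is already below the resource-bound II, no latency is demoted and no stall risk is introduced.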
Finally, the minimum number of iterations (UpperBound) that ensures that sch2 is better
than sch1 is computed. A main difference between these two schedules is the cost of the prolog
and epilog parts, which is lower for the sch1. This bound depends on the computed schedules and
the results of the locality analysis and it is calculated through an estimation of the execution time
of each schedule. The sch1 is chosen if t_est(sch1) < t_est(sch2). The execution time of a given
schedule is estimated as:

    t_est = t_compute + t_stall_est

The stall time is estimated as:

    t_stall_est = NITER * sum over op in MEM of penalty(op) * missratio(op)

where penalty is calculated as explained in section 2:

    penalty(op) = LatMiss - (CycleUse(op) - CycleProd(op))

and the missratio is estimated by the locality analysis. In this way, sch1 is preferred to sch2 if:

    t_compute(sch1) + t_stall_est(sch1) < t_compute(sch2) + t_stall_est(sch2)

We use a schedule built according to the locality analysis and not the CHL (which achieves the minimum
SC) in order to take into account the possible poor locality of some loops.
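Under this model, the choice between sch1 and sch2 reduces to comparing estimated times (a sketch of our own; the schedule tuples and example numbers are illustrative, not taken from the paper):

```python
def t_est(sched, niter):
    """Estimated time of a schedule (ii, sc, mem_ops): modulo-scheduled
    compute time plus expected stalls, where mem_ops lists
    (penalty, missratio) pairs for ops scheduled with the hit latency."""
    ii, sc, mem_ops = sched
    t_compute = (niter + sc - 1) * ii
    t_stall = niter * sum(p * m for p, m in mem_ops)
    return t_compute + t_stall

def choose_schedule(sch1, sch2, niter):
    # sch1 has the smaller stage count (cheaper prolog/epilog) but some
    # residual stall risk; sch2 trades a longer prolog/epilog for no stalls.
    return "sch1" if t_est(sch1, niter) < t_est(sch2, niter) else "sch2"

sch1 = (2, 3, [(18, 0.05)])   # expected 0.9 stall cycles per iteration
sch2 = (2, 8, [])             # same II, larger SC, no expected stalls
```

With these numbers the break-even trip count (UpperBound) is about 12 iterations: below it sch1 wins, above it sch2 wins.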
5. Performance evaluation of the CSMS
In this section we present a performance evaluation of the CSMS algorithm. We compare its performance
to that of the basic schemes evaluated in section 3. It is also compared with some alternative
binding (early scheduling) and nonbinding (inserting prefetch instructions) prefetch
schemes.
5.1. Architecture model
A VLIW machine has been considered to evaluate the performance of the different scheduling
algorithms. We have modeled two architectures in order to evaluate different aspects of the produced
schedules, such as execution time, stall time, spill code, etc.
The first architecture is called simple and it is composed of four functional units: integer,
floating point, branch and memory. The cache-miss latency for the first level cache is 10 cycles.
The second architecture is called aggressive and it has two functional units of each type and the
cache-miss latency is 20 cycles. All functional units are fully pipelined except divide and square
root operations. In both models the first memory level corresponds to an 8Kb lockup-free, direct-mapped
cache with 32-byte lines and 8 outstanding misses. Other features of the modeled
architectures are depicted in Table 1.
In the modeled architectures there are two reasons for the processor to stall: (a) when an
instruction requires an operand that is not available yet (e.g., it is being read from the second level
cache), and (b) when a memory instruction produces a cache miss and there are already 8 outstanding
misses.
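Stall source (b) can be sketched with a toy model of the outstanding-miss table (our own illustration; every reference fed to it is assumed to miss):

```python
import heapq

def miss_table_stalls(issue_cycles, miss_latency=20, entries=8):
    """Cycles stalled because a miss is issued while `entries` misses
    are already outstanding. `issue_cycles` are the cycles at which the
    missing references would issue in the absence of such stalls."""
    inflight, stall = [], 0          # min-heap of miss completion cycles
    for t in issue_cycles:
        t += stall                   # earlier stalls push later issues back
        while inflight and inflight[0] <= t:
            heapq.heappop(inflight)  # retire completed misses
        if len(inflight) == entries:
            wait = heapq.heappop(inflight) - t
            stall += wait            # wait for the oldest miss to finish
            t += wait
        heapq.heappush(inflight, t + miss_latency)
    return stall
```

Eight back-to-back misses fit in the table, but a ninth issued before any completes stalls the processor until the oldest miss returns.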
5.2. Experimental framework
The locality analysis and scheduling task has been performed using the ICTINEO toolset [2].
ICTINEO is a source to source translator that produces a code in which each sentence has semantics
similar to that of current machine instructions. After translating the code to such low-level
representation and applying classical optimizations, the dependence graph of each innermost loop
is constructed according to the particular prefetching approach. Then, instructions are scheduled
Machine model               Simple     Aggressive
Integer FUs                 1          2
Branch FUs                  1          2
Floating Point FUs          1          2
Memory FUs                  1          2
Cache size                  8 Kb       8 Kb
Line size                   32 bytes   32 bytes
Outstanding misses          8          8
Memory latency (hit/miss)   1/10       1/20
Number of registers         32         32

Other instructions          Latency
DIV or SQRT or POW          12
BRANCH                      2
CALL or RETURN              4

Table 1. Modeled architectures
using any software pipelining algorithm. The particular software pipelining algorithm used in the
experiments reported here is HRMS [15], which has been shown to be very effective at minimizing
both the II and the register pressure.
The resulting code is instrumented to generate a trace that feeds a simulator of the architecture.
Each program was run for the first 100 million memory references. The performance figures
shown in this section refer to the innermost loops contained in this part of the program. We
have measured that memory references inside innermost loops represent about 95% of all the
memory instructions considered for each benchmark, so the statistics for innermost loops are
quite representative of the whole section of the program.
The different prefetching algorithms have been evaluated for the following SPECfp95
benchmarks: tomcatv, swim, su2cor, hydro2d, mgrid and turb3d. We have restricted the evaluation
to Fortran programs since currently the ICTINEO tool can only process Fortran codes.
5.3. Early scheduling
In this section we compare the CSMS algorithm with other schemes based on early scheduling of
memory operations. These schemes are: (i) always use the cache-hit latency (CHL), (ii) always use
the cache-miss latency (ESA), and (iii) schedule instructions that have some type of locality using the
cache-hit latency and schedule the remaining ones using the cache-miss latency. This latter scheme
will be called early scheduling according to locality (ESL).
The different algorithms have been evaluated in terms of execution time, which is split into
compute and stall time. The stall time is due to dependences or to the lack of entries in the outstanding
miss table. In Figure 5 we can see the results for both the simple and the aggressive
architectures. For each benchmark all columns are normalized to the CHL execution time. It can
be seen that the CSMS algorithm achieves a compute time very close to the CHL scheme whereas
Figure 5. CSMS algorithm compared with early scheduling: normalized loop execution time (relative to CHL) of the CHL, ESA, ESL and CSMS schemes for tomcatv, swim, su2cor, hydro2d, mgrid and turb3d. a) Simple architecture; b) Aggressive architecture.
it has a stall time very close to the ESA scheme. That is, it results in the best trade-off between
compute and stall time. In programs where recurrences limit the initiation interval, and therefore
the ESA scheme increases the compute time (for instance in the hydro2d and turb3d benchmarks), the
CSMS method minimizes this effect at the expense of a slight increase in the stall time.
Table 2 shows the relative speed-up of the different schedulers with respect to the CHL
scheme. On average all alternative schedulers outperform the CHL scheme (which is usually the
one used by software pipelining schedulers). However, for some programs (mainly for turb3d) the
ESA and ESL schedulers perform worse than the CHL due to the increase in the II caused by
recurrences. The CSMS algorithm achieves the best performance for all benchmarks. For the simple
architecture the average speed-up is 1.61, and for the aggressive architecture it is 2.47.
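For reference, the averages in Table 2 are geometric means, the appropriate average for ratios such as speed-ups (a minimal helper of our own; the example values are illustrative, not taken from the table):

```python
from math import prod

def geometric_mean(ratios):
    # For speed-ups, the geometric mean is preferred over the arithmetic
    # mean because it treats a 2x gain and a 0.5x loss as cancelling out.
    return prod(ratios) ** (1.0 / len(ratios))
```

For example, a 2x speed-up on one benchmark and an 8x speed-up on another average to 4x, not 5x.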
Table 3 compares the CSMS algorithm with an optimistic execution time (OPT), as defined
in section 3, that is used as a lower bound of the execution time. It also shows the percentage of the
execution time that the processor is stalled. It can be seen that for the simple architecture the
CSMS algorithm is close to the optimistic bound and causes almost no stalls. For the
aggressive architecture, the performance of the CSMS is worse than that of OPT and the stall time
represents about 10% of the total execution time. Notice, however, that the optimistic bound could
be quite below the actual minimum execution time.
Table 4 compares the different schemes using the CHL algorithm as a reference point. For
each scheme it shows the increase in compute time and the decrease in stall time. As we have
seen before, scheduling memory operations using the cache-miss latency can affect the initiation
interval and the stage counter, which results in an increase in the compute time. The column
denoted as DCompute represents the increment in compute time compared with the CHL scheduling.
For any scheme s, it is calculated as:

    DCompute(s) = t_compute(s) / t_compute(CHL)
The stall time due to dependences can be eliminated by scheduling memory instructions
using the cache-miss latency. By default, spill code is scheduled using the cache-hit latency and
therefore it may cause some stalls, although this is unlikely because the spill code usually is a store
followed by a load to the same address. Since they are usually not close (otherwise the spill code
would hardly reduce the register pressure), the load will cause a stall only if it interferes with a memory
                 Simple architecture       Aggressive architecture
                 ESA     ESL     CSMS      ESA     ESL     CSMS
tomcatv          2.34    2.28    2.57      3.92    3.41    5.56
su2cor
hydro2d          1.13    1.00    1.45      1.13    1.00    2.78
mgrid            1.15    1.00    1.17      1.12    1.00    1.19
turb3d           0.62    0.73    1.18      0.27    0.33    1.42
GEOMETRIC MEAN   1.36    1.22    1.61      1.48    1.15    2.47

Table 2. Relative speed-up
reference in between the store and itself. The column denoted as -Stall represents the percentage of
the stall time caused by the CHL algorithm that is avoided. For any scheme s, it is calculated as:

    -Stall(s) = 100 * (t_stall(CHL) - t_stall(s)) / t_stall(CHL)
We can see in Table 4 that the CSMS algorithm achieves the best trade-off between compute
time and stall time, which is the reason for outperforming the others. The ESA scheme is the
best one to reduce the stall time but at the expense of a large increment in compute time.
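The two metrics can be written out directly (the -Stall formula is our reconstruction from the column description, so treat it as an assumption):

```python
def d_compute(t_compute_s, t_compute_chl):
    # DCompute > 1 means scheme s increased the compute time over CHL.
    return t_compute_s / t_compute_chl

def stall_reduction_pct(t_stall_s, t_stall_chl):
    # -Stall: percentage of CHL's stall time that scheme s avoids;
    # negative values mean the stall time actually grew.
    return 100.0 * (t_stall_chl - t_stall_s) / t_stall_chl
```

A scheme that removes all of CHL's stalls scores -Stall = 100; one that adds stalls (as IPL does for su2cor in Table 6) scores a negative value.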
5.4. Inserting prefetch instructions
In order to reduce the penalties caused by memory operations, an alternative to early scheduling
of memory instructions is inserting prefetch instructions, which are provided by many current
instruction set architectures (e.g. the touch instruction of the PowerPC [5]). This new scheme can
introduce additional spill code since it increases the register pressure. In particular, the lifetime of
values that are used to compute the effective address is increased since they are used by both the
                 Simple architecture      Aggressive architecture
                 OPT/CSMS   %Stall        OPT/CSMS   %Stall
su2cor           0.972      1.92          0.873      11.17
hydro2d          0.978      0.18          0.962      1.84
mgrid            0.998
turb3d           0.951      2.54          0.709      19.54

Table 3. CSMS compared with OPT scheduling

         Simple architecture                            Aggressive architecture
         ESA             ESL             CSMS           ESA             ESL             CSMS
         DComp  -Stall   DComp  -Stall   DComp  -Stall  DComp  -Stall   DComp  -Stall   DComp  -Stall
su2cor   1.048  100.00   1.015  2.54     1.009  95.90   1.215  97.67    1.060  4.09     1.018  93.27
hydro2d  1.308  99.99    1.008  3.79     1.021  99.62   2.532  99.85    1.067  4.84     1.020  98.98
mgrid    1.023  99.89    0.999  3.59     1.001  99.68   1.469  87.58    1.030  5.19     1.337  87.57
turb3d   1.184  94.35    1.055  85.75    1.184  94.35   7.222  98.21    5.948  87.59    1.134  72.48

Table 4. Increment of compute time (DComp) and decrement of stall time (-Stall, %) in relation to the CHL
prefetch and ordinary memory instructions. It can also increase the initiation interval due to additional
memory instructions.
We have evaluated three alternative schemes to introduce prefetch instructions: (i) insert
prefetch always (IPA), (ii) insert prefetch for those references without temporal locality even if
they exhibit spatial locality, according to the static locality analysis (IPT), and (iii) insert prefetch
for those instructions without any type of locality (IPL). The first scheme is expected to result in
very few stalls but it requires many additional instructions, which may increase the II. The IPT
scheme is more selective when adding prefetch instructions. However, it adds unnecessary
prefetch instructions for some references with just spatial locality: such instructions cause a
cache miss only when a new cache line is accessed and it is not in cache. The IPL
scheme is the most conservative in the sense that it adds the smallest number of prefetch instructions.
Figure 6 compares the total execution time of the CSMS scheduling with the
above-mentioned prefetching schemes. The figures are normalized to the CHL scheduling. The
CSMS scheme always performs better than the schemes based on inserting prefetch instructions
except for the mgrid benchmark in the aggressive architecture. In this latter case, the IPA scheme
is the best one but the performance of the CSMS is very close to it.
Among the schemes that insert prefetch instructions, none outperforms the others in
general; the best one depends on the particular program and architecture. The prefetch schemes
generally outperform the CHL scheme (i.e. the performance figures in Figure 6 are in general
lower than 1), but in some cases they may be even worse than the CHL, which is in general worse
than the schemes that are based on early scheduling.
Figure 6. CSMS algorithm compared with inserting prefetch instructions: normalized loop execution time (relative to CHL) of the IPA, IPT, IPL and CSMS schemes for tomcatv, swim, su2cor, hydro2d, mgrid and turb3d. a) Simple architecture; b) Aggressive architecture.

Comparing binding (Figure 5) with nonbinding (Figure 6) schemes, it can be observed that
binding prefetch is always better for the first three benchmarks. Both schemes have similar
performance for the next two benchmarks and only for the last one does nonbinding prefetch outperform
the binding schemes.
To understand the reasons for the behavior of the prefetch schemes, we present below some
additional statistics for the aggressive architecture. Table 5 shows the percentage of additional
memory instructions that are executed for the CSMS algorithm and for the schemes based on
inserting prefetch instructions. In the CSMS algorithm, additional instructions are only due to
spill code, whereas in the other schemes they are due to spill code and prefetch instructions. We
can see in this table that, except for the IPL scheme for the mgrid benchmark, the prefetch
schemes require a much higher number of additional memory instructions. As expected, the
increase in the number of memory instructions is highest for the IPA scheme, followed by IPT,
then IPL and finally the CSMS.
Table
6 shows the increase in compute time and the decrease in stall time of the schemes
based on inserting prefetch instructions in relation to the CHL scheme. Negative numbers indicate
that the stall time is increased instead of decreased.
We can see in Table 6 that the compute time is increased by prefetching schemes since the
large number of additional instructions may imply a significant increase in the II for those loops
that are memory bound. The stall time is in general reduced but the reduction is less than that of
the CSMS scheme (see Table 4). The program mgrid is the only one for which there is a prefetch
based scheme (IPA) that outperforms the CSMS algorithm. However, the difference is very slight
and for the remaining programs the performance of the CSMS scheme is overwhelmingly better
than that of the IPA scheme.
Table
7 shows the miss ratio of the different prefetching schemes compared with the miss
ratio of a nonprefetching scheme (CHL). We can see that in general the schemes that insert most
memory prefetches produce the highest reductions in miss ratio. However, inserting prefetch
instructions does not remove all cache misses, even for the scheme that inserts a prefetch for every
memory instruction (IPA). This is due to cache interferences between prefetch instructions before
           CSMS    IPA     IPT     IPL
su2cor
hydro2d    2.12    55.49   39.94   2.85
mgrid      49.90   59.26   56.57   7.50

(1) There is spill code, but not in the simulated part of the program.

Table 5. Percentage of additional memory references
the prefetched data is used. This is quite common in the programs tomcatv and swim. For
instance, if two memory references that interfere in the cache are very close in the code, it is likely
that the two prefetches corresponding to them are scheduled before both memory references. In
this case, at least one of the two memory references will miss in spite of the prefetch. Besides, if
the prefetches and memory instructions are scheduled in reverse order (i.e., instruction A is
scheduled before B but the prefetch of B is scheduled before the prefetch of A), both memory
instructions will miss.
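This short-distance interference can be reproduced with a toy direct-mapped cache (our own model, using the 8Kb/32-byte geometry of the evaluated cache; prefetches fill lines just like loads do):

```python
LINE = 32                          # bytes per line
LINES = (8 * 1024) // LINE         # 256 lines in the modeled cache

def misses(refs):
    """refs: list of ('pf' | 'ld', address). Count load misses in a
    direct-mapped cache where prefetches also allocate lines."""
    tags, n = {}, 0
    for kind, addr in refs:
        ln = addr // LINE          # line number of the reference
        idx = ln % LINES           # direct-mapped set index
        if kind == 'ld' and tags.get(idx) != ln:
            n += 1                 # the load misses despite any prefetch
        tags[idx] = ln             # fill on both loads and prefetches
    return n

A, B = 0, 8 * 1024                 # two addresses mapping to the same line
```

Prefetching A and then B before either use makes the two prefetches evict each other, so both subsequent loads miss; a single prefetch followed immediately by its use hits as intended.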
To summarize, there are two main reasons for the poor performance of the schemes based on
inserting prefetch instructions when compared with the CSMS scheme:
. They increase the compute time due to the additional prefetch instructions and spill code.
. They are not always effective in removing stalls caused by cache misses due to interferences
between the prefetch instructions.
           IPA                  IPT                  IPL
           DCompute  -Stall(%)  DCompute  -Stall(%)  DCompute  -Stall(%)
tomcatv    1.396     23.28      1.060     67.14      1.073     19.42
su2cor     1.454     74.48      1.269     82.20      1.090     -3.25
hydro2d    1.608     84.87      1.086     86.81      1.003     4.16
mgrid      1.311     88.32      1.267     35.44      1.030     5.37
turb3d     1.874     68.91      1.787     73.90      1.497     82.60

Table 6. Increment of compute time and decrement of stall time for
schemes based on inserting prefetch instructions
           CHL      IPA     IPT     IPL
su2cor     25.43    2.35    5.68    21.55
hydro2d    19.57    1.33    5.04    18.80
mgrid      6.46     0.57    2.91    5.35
turb3d     10.68    2.11    2.39    2.64

Table 7. Miss ratio (%) for the CHL and the different prefetching schemes
6. Conclusions
The interaction between software prefetching and software pipelining techniques for VLIW architectures
has been studied. We have shown that modulo scheduling schemes using cache-hit
latency produce many stalls due to dependences with memory instructions. For a simple architecture
the stall time represents about 32% of the execution time, and 63% for an aggressive architecture.
Thus, ignoring memory effects when evaluating a software pipelined scheduler may be
rather inaccurate.
We have compared the performance of different prefetching approaches based on either
early scheduling of memory instructions (binding prefetch) or inserting prefetch instructions
(nonbinding prefetch). We have seen that both provide a significant improvement in general.
However, methods based on early scheduling outperform those based on inserting prefetches. The
main reasons for the worse performance of the latter methods are the increase in memory pressure
due to prefetch instructions and additional spill code, and their limited ability to remove short-distance
conflict misses.
We have proposed a heuristic scheduling algorithm (CSMS), which is based on early
scheduling and tries to minimize both the compute and the stall time. The algorithm makes use of
a static locality analysis to schedule instructions in recurrences. We have shown that it outperforms
the rest of the strategies. For instance, when compared with the approach based on scheduling
memory instructions using the cache-hit latency, the produced code is 1.6 times faster for a simple
architecture, and 2.5 times faster for an aggressive architecture. In the former case, we have also
shown that the execution time is very close to an optimistic lower bound.
References
Software pipelining: an effective scheduling technique for VLIW machines
Software prefetching
A data locality optimizing algorithm
Circular scheduling
An architecture for software-controlled data prefetching
Design and evaluation of a compiler algorithm for prefetching
Lifetime-sensitive modulo scheduling
Balanced scheduling
Evolution of the PowerPC Architecture
Iterative modulo scheduling
Minimizing register requirements under resource-constrained rate-optimal software pipelining
Optimum modulo schedules for minimum register requirements
Compiler techniques for data prefetching on the PowerPC
Stage scheduling
Hypernode reduction modulo scheduling
Predictability of load/store instruction latencies
Decomposed Software Pipelining
Static Locality Analysis for Cache Management
Swing Modulo Scheduling
266836 | Resource-sensitive profile-directed data flow analysis for code optimization. | Instruction schedulers employ code motion as a means of instruction reordering to enable scheduling of instructions at points where the resources required for their execution are available. In addition, driven by the profiling data, schedulers take advantage of predication and speculation for aggressive code motion across conditional branches. Optimization algorithms for partial dead code elimination (PDE) and partial redundancy elimination (PRE) employ code sinking and hoisting to enable optimization. However, unlike instruction scheduling, these optimization algorithms are unaware of resource availability and are incapable of exploiting profiling information, speculation, and predication. In this paper we develop data flow algorithms for performing the above optimizations with the following characteristics: (i) opportunities for PRE and PDE enabled by hoisting and sinking are exploited; (ii) hoisting and sinking of a code statement is driven by availability of functional unit resources; (iii) predication and speculation is incorporated to allow aggressive hoisting and sinking; and (iv) path profile information guides predication and speculation to enable optimization. | Introduction
Data flow analysis provides us with facts about
a program by statically analyzing the program. Algorithms
for partial dead code elimination (PDE)
Copyright 1997 IEEE. Published in the Proceedings of
Micro-30, December 1-3, 1997 in Research Triangle Park, North
Carolina. Personal use of this material is permitted. However,
permission to reprint/republish this material for advertising or
promotional purposes or for creating new collective works for
resale or redistribution to servers or lists, or to reuse any copyrighted
component of this work in other works, must be obtained
from the IEEE. Contact: Manager, Copyrights and Permissions
908-562-3966.
y Supported in part by NSF PYI Award CCR-9157371, NSF
grant CCR-9402226, Intel Corporation, and Hewlett Packard.
[16, 6, 4, 20] and partial redundancy elimination
(PRE) [17] solve a series of data flow problems to carry
out sinking of assignments and hoisting of expression
evaluations. The sinking of an assignment eliminates
executions of the assignment that compute values that
are dead, i.e., values that are never used. The hoisting
of an expression eliminates evaluations of the expression
if a prior evaluation of the same expression
was performed using the same operands. The existing
algorithms for above optimizations suffer from the following
drawbacks that limit their usefulness in a realistic
compiling environment in which the optimization
phase precedes the instruction scheduling phase.
ffl The optimization algorithms are insensitive to resource
information. Thus, it is possible that instructions
that require the same functional unit
resource for execution are moved next to each
other. Such code motion is not beneficial since the
instruction scheduler in separating these instructions
from each other may undo the optimization.
ffl The data flow analyses used by the algorithms
are incapable of exploiting profiling information
to drive code sinking and hoisting. Instruction
schedulers on the other hand use profiling information
to drive code hoisting and sinking.
ffl The data flow analyses do not incorporate speculation
and predication to enable code hoisting and
sinking. Instruction schedulers for modern processor
architectures exploit speculation [18] and
predication [13, 14] during hoisting and sinking.
In this paper we present solutions to the above
problems by developing data flow analysis techniques
for PRE and PDE that incorporate both resource
availability and path profiling [1] information. Fur-
thermore, the formulations of hoisting and sinking are
generalized to incorporate speculation based hoisting
and predication enabled sinking. Our approach performs
code motion for PRE and PDE optimizations
such that code motion is restricted to situations in
which the resulting code placement points are those at
which the required functional unit resources are avail-
able. Moreover we are able to perform code motion
more freely than existing PDE and PRE algorithms
[16, 17, 21, 5, 15] through speculative hoisting and
predication enabled sinking. Finally, path profiling
[1] information is used to guide speculation and pred-
ication. In particular, speculation and predication is
applied only if their overall benefit in form of increased
code optimization along frequently executed program
paths is greater than their cost in terms of introducing
additional instructions along infrequently executed
program paths.
(a)8 9614 5
(b)
(c)
Figure
1: Resource Sensitive Code Sinking.
The example in Figure 1 illustrates our approach
for code sinking. The flow graph in Figure 1a contains
statement node 8 which is partially
dead since the value of x computed by the statement
is not used along paths 10-8-7-3-1 and 10-8-7-6-4-2-1.
Through sinking of the statement to node 5, as shown
in Figure 1b, we can completely eliminate the deadness
of the statement. Note that in order to enable sinking
past node 7 it is necessary to predicate the statement.
Furthermore this sinking should only be performed if
the paths 10-8-7-3-1 and 10-8-7-6-4-2-1 along which
dead code is eliminated are executed more frequently
than the path 10-9-7-6-5-2-1 along which an additional
instruction has been introduced.
If the functional unit for the multiply operation is
expected to be busy at node 5 and idle at node 6, then
resource sensitive sinking will place the statement at
node 6 as shown in Figure 1c. As before, the predication
of the statement is required to perform sinking
past node 7. However, the sinking past 7 is only performed
if the frequency with which the path 10-8-7-3-1
(along which dead code is eliminated) is executed is
greater than the sum of the frequencies with which
paths 10-9-7-6-4-2-1 and 10-9-7-6-5-2-1 (along which
an additional instruction is introduced) are executed.
The above solution essentially places the statement
at a point where the required resource is available and
eliminates as much deadness as possible in the process.
It also performs predication enabled sinking whenever
it is useful.
The example in Figure 2 illustrates our approach
for code hoisting. The flow graph in Figure 2a contains
partially redundant evaluation of expression x+y
in node 8 since the value of x + y is already available
at node 8 along paths 1-3-7-8-10 and 1-2-4-6-7-8-
10. Through hoisting of the expression to node 5, as
shown in Figure 2b, we can eliminate the redundancy.
Note that in order to enable hoisting above node 7, it
is necessary to perform speculative motion of x + y.
Furthermore this hoisting should only be performed if
the paths 1-3-7-8-10 and 1-2-4-6-7-8-10 along which redundancy
is eliminated are executed more frequently
than the path 1-2-5-6-7-9-10 along which an additional
instruction is introduced.
Figure 2: Resource Sensitive Code Hoisting.
If the functional unit for the add operation is expected
to be busy at node 5 and idle at node 6, then
resource sensitive hoisting will place the statement at
node 6 as shown in Figure 2c. As before, speculative
execution of the statement is required to perform
hoisting above node 7. However, the hoisting past 7
is only performed if the frequency with which path
1-3-7-8-10 (along which redundancy is eliminated) is
executed is greater than the frequency with which the
paths 1-2-4-6-7-9-10 and 1-2-5-6-7-9-10 (along which
an additional instruction is introduced) are executed.
The above solution places the expression at a point
where the required resource is available and eliminates
as much redundancy as possible in the process. It also
performs speculative code hoisting whenever it is useful.
The remainder of this paper is organized as follows.
In section 2 we first present extensions to the PDE
and PRE solutions developed by Knoop et al. [16, 17] to
achieve resource sensitive PDE and PRE. In section 3
we present extensions to resource sensitive PDE and
PRE algorithms that incorporate path profiling information
to drive speculation and predication. Concluding
remarks are given in section 4.
2 Resource-Sensitive Code Motion
and Optimization
The basic approach used by our algorithms is to
first perform analysis that determines whether or not
resources required by a code statement involved in
sinking(hoisting) will be available along paths origi-
nating(terminating) at a point. This information is
used by the algorithms for PDE(PRE) to inhibit sink-
ing(hoisting) along paths where the required resource
is not free. Therefore, optimization opportunities are
exploited only if they are permitted by resource usage
characteristics of a program.
In all the algorithms presented in this paper we
assume that the program is represented using a flow
graph in which each intermediate code statement appears
in a distinct node and nodes have been introduced
along critical edges to allow code placement
along the critical edges. The first assumption simplifies
the discussion of our data flow equations and is not
essential for our techniques to work. Given this repre-
sentation, we next describe how to determine whether
a resource is locally free at the exit or entry of a node.
This local information will be propagated through the
flow graph to determine global resource availability in-
formation. In the subsequent sections we present resource
analysis which is applicable to acyclic graphs.
The extension to loops is straightforward and is based
upon the following observation. A resource free within
a loop is not available for hoisting/sinking of instructions
from outside the loop since instructions are
never propagated from outside the loop to inside the
loop.
A functional unit resource FU to which
an instruction may be issued every c cycles is free at
node n if the instructions issued during the c - 1 cycles
prior to n and following n are not issued to FU .
The above definition guarantees that if an instruction
is placed at n that uses FU , then its issuing will
not be blocked due to unavailability of FU . Notice
that the above definition can be easily extended to
consider situations in which more than one copy of
the resource FU is available in the architecture.
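The free-resource test above can be sketched as a simple check over an issue schedule. This is a hypothetical representation, not the paper's implementation: `issued_at` maps a cycle number to the set of functional units receiving an issue in that cycle.

```python
# Sketch of the free-resource definition, under an assumed schedule
# representation: a unit fu with initiation interval c is free at a node
# issuing in cycle n_cycle if no issue to fu occurs during the c-1 cycles
# before or after it.
def fu_free_at(n_cycle, fu, c, issued_at):
    for cycle in range(n_cycle - (c - 1), n_cycle + c):
        if cycle != n_cycle and fu in issued_at.get(cycle, set()):
            return False
    return True
```

Extending the check to k copies of FU amounts to counting issues per cycle window instead of testing membership.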
2.1 Code Sinking and PDE
Partial dead code elimination is performed by sinking
partially dead assignments. Through sinking of a
partially dead assignment s we migrate s to program
points where resources required by s are available and
at the same time we remove s from some paths along
which it is dead, that is, the value computed by s is not
used. In order to ensure that sinking of s is only performed
if it can be guaranteed that placement points
for s after sinking will be ones at which the resource
required for its execution is available, we first develop
resource anticipatability analysis. The results of this
analysis guide the sinking in a subsequent phase that
performs resource sensitive code sinking and PDE.
Resource Anticipatability Analysis
The resource anticipatability data flow analysis determines
the nodes past which the sinking of an assignment
s will not be inhibited by the lack of resources.
A functional unit resource required to
execute an assignment statement s is anticipatable
at the entry of a node n if for each path p from n's
entry to the terminating node in the flow graph one
of the following conditions is true:
• the value of the variable defined by s at n's entry
is dead along path p, that is, the resource will not
be needed along this path since the assignment
can be removed from it; or
• there is a node b along p in which the required
resource is locally free and statement s is sinkable
to b, that is, the required resource will be available
after sinking.
To perform resource anticipatability analysis for an
assignment s, we associate the following data flow variables
with each node:
PRES_s(n) is 1 if, given that the resource required by
s is anticipatable at n's exit, it is also anticipatable at
n's entry; otherwise it is 0. In particular, PRES_s(n)
is 1 if the statement in n does not define a variable
referenced (defined or used) by s.
DEAD_s(n) is 1 if the variable defined by s is dead at
n's entry (i.e., the value of the variable at n's entry is
never used). If the variable is not dead, then the value
of DEAD_s(n) is 0.
FREE_s(n) is 1 if the resource required by s is free for
its use when s is moved to n through sinking; otherwise
it is 0.
X-RANT_s(n) (N-RANT_s(n)) is 1 if the resource
required by s is anticipatable at n's exit (entry); otherwise
it is 0.
In order to compute resource anticipatability we
perform backward data flow analysis with the and confluence
operator as shown in the data flow equations
given below. The resource used by s is anticipatable
at n's exit if it is anticipatable at the entries of all successors
of n. The resource is anticipatable at n's entry
if the resource is free for use by s in n, or the variable
defined by s is dead at n's entry, or the resource is
anticipatable at n's exit and preserved through n.
X-RANT_s(n) = ∧_{m ∈ succ(n)} N-RANT_s(m)
N-RANT_s(n) = FREE_s(n) ∨ DEAD_s(n) ∨ (X-RANT_s(n) ∧ PRES_s(n))
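The backward anticipatability analysis described above can be sketched as an iterative bit computation. The graph representation (`succ` mapping each node to its successor list) and the fixpoint loop are assumptions of this sketch; `FREE`, `DEAD`, and `PRES` are the local predicates just defined.

```python
# Backward data flow analysis with "and" confluence over successors, as a
# sketch: at the exit node X-RANT is taken to be 0. Values only grow from
# False to True, so the loop terminates.
def resource_anticipatability(nodes, succ, FREE, DEAD, PRES):
    N_RANT = {n: False for n in nodes}
    X_RANT = {n: False for n in nodes}
    changed = True
    while changed:
        changed = False
        for n in nodes:
            x = all(N_RANT[m] for m in succ[n]) if succ[n] else False
            nv = FREE[n] or DEAD[n] or (x and PRES[n])
            if (x, nv) != (X_RANT[n], N_RANT[n]):
                X_RANT[n], N_RANT[n] = x, nv
                changed = True
    return N_RANT, X_RANT
```

For the acyclic graphs considered in the paper a single reverse topological pass would already reach the fixed point.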
Assignment Sinking
The assignment sinking and PDE framework that
we use next is an extension of the framework developed
by Knoop et al. [16]. PDE is performed in the following
steps: assignment sinking followed by assignment
elimination. The first step is modified to incorporate
resource anticipatability information while the second
step remains unchanged. Assignment sinking consists
of delayability analysis followed by identification of insertion
points for the statement being moved. The
data flow equations for delayability analysis and the
computation of insertion points are specified below.
In this analysis X-DLY_s(n) (N-DLY_s(n)) is 1 if assignment
s can be delayed up to the exit (entry) of node
n. BLOCK_s(n) is 1 for a node that blocks sinking
(n) is 1 for a node that blocks sinking
of s due to data dependences; otherwise it is 0. As
we can see delayability analysis only allows sinking of
s to the entry of node n if the required resource is
anticipatable at n's entry and it allows sinking from
entry to exit of n if s is not blocked by n. The assignment
is removed from its original position and inserted
at points that are determined as follows. The assignment
s is inserted at n's entry if it is delayed to n's
entry but not its exit and s is inserted at n's exit if
it is delayed to n's exit but not to the entries of all
of n's successors. Assignment deletion eliminates the
inserted assignments that are fully dead.
N-DLY_s(n) = N-RANT_s(n) ∧ (1 if s originates in n; ∧_{m ∈ pred(n)} X-DLY_s(m) otherwise)
X-DLY_s(n) = N-DLY_s(n) ∧ ¬BLOCK_s(n)
N-INSERT_s(n) = N-DLY_s(n) ∧ ¬X-DLY_s(n)
X-INSERT_s(n) = X-DLY_s(n) ∧ ∨_{m ∈ succ(n)} ¬N-DLY_s(m)
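The delayability pass and the insertion-point computation can be sketched together for an acyclic graph. The representation (nodes in topological order, `pred`/`succ` maps, `origin` naming the node holding the original occurrence of s) is an assumption of this sketch; `N_RANT` comes from the anticipatability analysis and `BLOCK` marks data-dependence blockers.

```python
# Sketch: forward delayability with "and" confluence, then the insertion
# predicates described in the text -- insert at n's entry if delayed to the
# entry but not the exit, and at n's exit if delayed to the exit but not to
# the entries of all successors.
def sinking_insertion_points(nodes_topo, pred, succ, origin, N_RANT, BLOCK):
    N_DLY, X_DLY = {}, {}
    for n in nodes_topo:
        reach = (n == origin) or (bool(pred[n]) and all(X_DLY[m] for m in pred[n]))
        N_DLY[n] = N_RANT[n] and reach
        X_DLY[n] = N_DLY[n] and not BLOCK[n]
    ins_entry = {n for n in nodes_topo if N_DLY[n] and not X_DLY[n]}
    ins_exit = {n for n in nodes_topo
                if X_DLY[n] and any(not N_DLY[m] for m in succ[n])}
    return ins_entry, ins_exit
```

Assignment deletion then removes inserted copies that are fully dead, which this sketch does not model.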
The example in Figure 3 illustrates our algorithms
by considering the sinking of assignment x = a*b in node 1. In
Figure 3a a control flow graph and the results of resource
anticipatability analysis are shown. The node
7 is partially shaded to indicate that the resource is
anticipatable at the node's exit but not its entry. Figure
3b shows the outcome of delayability analysis. Notice
that the sinking of assignment past node 3 is inhibited
since the resource is not anticipatable at 3's suc-
cessors. Insertion point computation identifies three
insertion points, the exit of 3, entry of 7 and exit of
8. Of these points the assignment is dead at 7's entry
and is therefore deleted. Deadness of assignment along
path 1-2-4-7-10-11 is removed while along path 1-2-3-
5-9-11 it is not removed. Notice that finally x = a*b
is placed at nodes 3 and 8 where the resource is locally
free.
2.2 Code Hoisting and PRE
Partial redundancy elimination is performed by
hoisting expression evaluations. Through hoisting of a
partially redundant expression e we migrate e to program
points where resources required by e are available
and at the same time we remove e evaluations
from some paths along which e is computed multiple
times making the later evaluations of e along the path
redundant. In order to ensure that hoisting of e is
only performed if it can be guaranteed that placement
points for e after hoisting will be ones at which the
resource required for e's execution is free, we perform
resource availability analysis. Its results are used to
guide hoisting in a subsequent phase that performs resource
sensitive code hoisting and PRE.
Resource Availability Analysis
The resource availability data flow analysis determines
the nodes above which the hoisting of statement
s will not be inhibited by the lack of resources.
A functional unit resource needed to execute
the operation in an expression e is available at
the entry of node n if for each path p from the start
node to n's entry one of the following conditions is true:
• there is a node b in which the resource is locally free
and along the path from b to node n's entry, the
variables whose values are used in e are not rede-
fined. This condition ensures that upon hoisting
the expression along the path a point would be
found where the resource is free.
• there is a node b which computes e and after its
computation the variables whose values are used
in e are not redefined. This condition essentially
Figure 3: An Example of Resource-Sensitive PDE.
(a) Resource Anticipatability Analysis; (b) Delayability
Analysis; (c) Insertion Point Selection and Assignment
Deletion.
implies that if an earlier evaluation of the expression
exists along a path then no additional use
of the resource is required during hoisting along
that path since the expression being hoisted will
be eliminated along that path.
To perform resource availability analysis for an expression
e, we associate the following data flow variables
with each node:
PRES_e(n) is 1 if, given that the resource required by
e is available at n's entry, it is also available at n's
exit; otherwise it is 0. In particular, PRES_e(n) is 1
if the statement in n does not define a variable used
by e.
USED_e(n) is 1 if the statement in n evaluates the
expression e and this evaluation of e is available at n's
exit, that is, the variables used by e are not redefined
in n after e's evaluation.
FREE_e(n) is 1 if the required resource is free for use
by e in n when e is moved to n through hoisting; otherwise
it is 0.
N-RAVL_e(n) (X-RAVL_e(n)) is 1 if the resource
required by e is available at n's entry (exit); otherwise
it is 0.
In order to compute resource availability we perform
forward data flow analysis with the and confluence
operator. The resource used by expression e is
available at n's entry if it is available at the exits of all
predecessors of n. The resource is available at n's exit
if it is available at n's entry and preserved by n, the
expression is computed by n and available at n's exit,
or resource is locally free at n.
N-RAVL_e(n) = ∧_{m ∈ pred(n)} X-RAVL_e(m)
X-RAVL_e(n) = (N-RAVL_e(n) ∧ PRES_e(n)) ∨ USED_e(n) ∨ FREE_e(n)
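The forward availability analysis can be sketched in the same style as the backward one. The representation (nodes in topological order, `pred` mapping each node to its predecessors) is assumed; `PRES`, `USED`, and `FREE` are the local predicates just defined, and the start node gets entry availability 0.

```python
# Sketch: forward data flow analysis with "and" confluence over
# predecessors, in one topological pass over an acyclic graph.
def resource_availability(nodes_topo, pred, PRES, USED, FREE):
    N_RAVL, X_RAVL = {}, {}
    for n in nodes_topo:
        N_RAVL[n] = bool(pred[n]) and all(X_RAVL[m] for m in pred[n])
        X_RAVL[n] = (N_RAVL[n] and PRES[n]) or USED[n] or FREE[n]
    return N_RAVL, X_RAVL
```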
Expression Hoisting
The expression hoisting and PRE framework that
we use next is a modification of the code motion framework
developed by Knoop et al. [17]. PRE is performed
in two steps: down-safety analysis which determines
the points to which expression evaluations
can be hoisted and earliestness analysis which locates
the earliest points at which expression evaluations are
actually placed to achieve PRE. These steps are modified
to incorporate resource availability information.
The modified equations for down-safety and earliest-
ness analysis are given below. In these equations
N-DSAFE_e(n) (X-DSAFE_e(n)) is 1 if the expression can
be hoisted to the entry (exit) of node n; otherwise it is 0.
An expression is down-safe at a node as long as it
is anticipatable along all paths leading from the node
and the required resource is available along all paths
leading to the node. The earliestness analysis sets
N-ERLY_e(n) (X-ERLY_e(n)) to 1 up to and including
the first down-safe point at which the required resource
is free or an expression evaluation exists. Along
each path an expression evaluation is placed at the
earliest down-safe point. These points are identified
by the boolean predicates N-DSafeEarliest_e(n) and
X-DSafeEarliest_e(n).
Figure 4: An Example of Resource-Sensitive PRE.
(a) Resource Availability Analysis; (b) Down Safety
Analysis; (c) Earliestness Analysis and PRE transformation.
N-ERLY_e(n) = N-DSAFE_e(n) ∧ ∨_{m ∈ pred(n)} (¬(FREE_e(m) ∨ USED_e(m)) ∧ X-ERLY_e(m))
The example in Figure 4 illustrates the above al-
gorithms. In Figure 4a the results of resource availability
analysis are shown. Since the resource is available
at node 11, the down-safety analysis will propagate
it backwards as shown in Figure 4b. Node 6 is
not down-safe because resource is not available at that
node. On the other hand node 8 is down-safe because
the resource is available at that node. The earliest-
ness analysis identifies nodes 5, 9, 7, and 8 as the first
nodes which are down-safe and where either the resource
is locally free (nodes 9 and 8) or an expression evaluation exists
(nodes 5 and 7). The final placement of the expression
is shown in Figure 4c. A traditional approach would
have hoisted the expression above node 9 to node 6.
3 Profile-Directed Resource-Sensitive
Code Motion and Optimization
In this section we show that additional opportunities
for PRE and PDE optimizations can be exploited
by enabling more aggressive code hoisting and sink-
ing. Speculative code hoisting can be performed to enable
additional opportunities for PRE while predication
based code sinking can be employed to enable additional
opportunities for PDE. However, while speculative
hoisting and predication based sinking result in
a greater degree of optimization along some program
paths, they result in introduction of additional instructions
along other program paths. In other words
a greater degree of code optimization is achieved for
some program paths at the expense of introduction of
additional instructions along other program paths.
While generating code for VLIW and superscalar
architectures, speculation and predication are routinely
exploited to generate faster schedules along frequently
executed paths at the expense of slower schedules
along infrequently executed paths [7, 12, 11].
However, the optimization frameworks today are unable
to exploit the same principle. In this section we
show how to perform PRE and PDE optimizations
by using speculation and predication. Frequently executed
paths are optimized to a greater degree at the
expense of infrequently executed paths. Path profiling
information is used to evaluate the benefits and costs
of speculation and predication to program paths.
In the subsequent sections we describe code hoisting
and sinking frameworks which use path profiling
[1] information to enable speculation and predication
based hoisting and sinking while inhibiting hoisting
and sinking using resource availability and anticipata-
bility information. This results in optimization algorithms
that are more aggressive than traditional algorithms
[16, 17] while at the same time more appropriate
for VLIW and superscalar environment as
they are resource sensitive and can trade-off the quality
of code for frequently executed paths with that
of infrequently executed paths. Although the techniques
we describe are based upon path profiling information
they can also be adapted for edge profiles
since estimates of path profiles can be computed from
edge profiles [19]. Furthermore we present versions of
our algorithms that apply to acyclic graphs. However,
the extensions required to handle loops are straight-forward
and can be found in [10, 9].
Our algorithms are based upon the following analysis
steps. First resource availability and anticipatabil-
ity analysis is performed. Next we determine the cost
and benefit of enabling speculation and predication at
various split points and merge points in a flow graph
respectively. The benefit is an estimation of increased
optimization of some program paths while the cost is
an estimate of increase in the number of instructions
along other program paths. By selectively enabling
hoisting and sinking at program points based upon
cost-benefit analysis, we exploit optimization opportunities
that the traditional algorithms such as those
by Knoop et al. [16, 17] do not exploit while inhibiting
optimization opportunities that result in movement of
code to program points at which the resource required
for an instruction's execution is not free.
3.1 Path Profile Directed PDE
As mentioned earlier, the resource anticipatability
analysis described in section 2.1 remains unchanged
and must be performed first. Next cost-benefit analysis
that incorporates resource anticipatability information
and uses path profiles is performed. The results
of this analysis are used to enable predication enabled
sinking at selected join points in the next phase. Finally
an extension of the sinking framework presented
in section 2.1 is used to perform resource-sensitive,
profile-guided PDE.
The cost-benefit analysis consists of three steps: (a)
Availability analysis identifies paths leading to a node
along which a statement is available for sinking (i.e.,
sinking is not blocked by data dependences) at various
program points; (b) Optimizability analysis identifies
paths originating at a node along which a statement
can be optimized because the value computed by it
is not live and the sinking required for its removal is
not inhibited by the lack of a free resource or presence
of data dependences; and (c) Cost-benefit computation
identifies the paths through a join point along which
additional optimization is achieved or an additional instruction
is introduced when predication based sinking
is enabled at the join point. By summing the frequencies
of the respective paths, provided by path profiles,
the values of cost and benefit are obtained.
The set of paths identified during availability and
optimizability analysis are represented by a bit vector
in which each bit corresponds to a unique path from
the entry to the exit of the acyclic flow graph. To facilitate
the computation of sets of paths, with each node
n in the flow graph, we associate a bit vector OnP s(n)
where each bit corresponds to a unique path and is set
to 1 if the node belongs to that path; otherwise it is
set to 0.
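The OnPs bit vectors can be sketched by enumerating the acyclic paths once and or-ing a per-path bit into every node on the path. The DAG representation with a unique entry and exit is an assumption of this sketch.

```python
# Sketch: enumerate entry-to-exit paths of an acyclic flow graph and build
# OnPs, where bit i of OnPs[n] is set iff node n lies on path i. Bit masks
# are plain Python integers.
def build_onps(entry, exit_, succ):
    paths = []
    def walk(n, trail):
        trail = trail + [n]
        if n == exit_:
            paths.append(trail)
        for m in succ.get(n, []):
            walk(m, trail)
    walk(entry, [])
    onps = {}
    for i, p in enumerate(paths):
        for n in p:
            onps[n] = onps.get(n, 0) | (1 << i)
    return paths, onps
```

Path-set intersection with OnPs(n), used throughout the equations, is then a bitwise "and" of two integers.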
The steps of the analysis are described next.
Availability Analysis
In the data flow equations for availability analysis
given below, N-AVL_a(n) (X-AVL_a(n)) is a one
bit variable which is 1 if there is a path through n
along which a is available for sinking at n's entry (exit);
otherwise its value is 0. N-APS_a(n) (X-APS_a(n))
is a bit vector which holds the set of paths along which
the value of N-AVL_a (X-AVL_a) is 1 at n's
entry (exit).
Forward data flow analysis with the or confluence
operation is used to compute these values. At the
entry point of the flow graph the availability value
is set to 0, it is changed to 1 when statement a is
encountered, and it is set to 0 if a statement that
blocks the sinking of a is encountered. In the equations
PRES_a(n) is a one bit variable which is 1 (0) if n
preserves a, that is, n is not data (anti, output or
flow) dependent upon a. At the entry to a node n for
which N-AVL_a(n) is 0, the set of paths is set to null,
that is, to ~0. Otherwise the paths in N-APS_a(n) are
are
computed by unioning the sets of paths along which a
is available at the exit of one of n's predecessors (i.e.,
unioning X-APS_a(p), where p is a predecessor of n).
In order to ensure that only paths that pass through n
are considered, the result is intersected with OnP s(n).
The value of X-APS_a(n) is OnPs(n) if n contains
a and N-APS_a(n) if n does not block a.
N-APS_a(n) = OnPs(n) ∧ ∨_{m ∈ pred(n), X-AVL_a(m)=1} X-APS_a(m)
Optimizability Analysis
N-OPT_a(n) (X-OPT_a(n)) is a one bit variable
associated with n's entry (exit) which is 1 if there is a
path through n along which a is dead and any sinking
of a that may be required to remove this deadness is
feasible (i.e., it is not inhibited by lack of resources
or presence of data dependences); otherwise its value
is 0. Backward data flow analysis with the or confluence
operation is used to compute these values. In
order to ensure that the sinking of a is feasible, the
results of a's availability analysis and resource antici-
patability analysis are used. For example, if variable v
computed by a is dead at n's exit, then X-OPT_a(n)
is set to true only if X-AVL_a(n) is true because
the deadness will only be eliminated if sinking of a to
n's exit is not blocked by data dependences. If v is
not dead then among other conditions we also check
that X-RANT_a(n) is true because the sinking of a will
only be allowed if the resource required for a's execution
along paths where v is not dead is free. In the
data flow equations, N-DEAD_v(n) (X-DEAD_v(n))
is a one bit variable which is 1 if variable v is fully dead
at n's entry (exit), that is, there is no path starting at
n along which the current value of v is used; otherwise its
value is 0.
N-OPS_a(n) (X-OPS_a(n)) is a bit vector which
holds the set of paths along which the value of
N-OPT_a (X-OPT_a) is 1 at n's entry (exit). At the
entry (exit) of a node n for which N-DEAD_v(n)
(X-DEAD_v(n)) and N-AVL_a(n) (X-AVL_a(n)) are 1,
N-OPS_a(n) (X-OPS_a(n)) is set to OnPs(n). Otherwise
the paths in X-OPS_a(n) are computed by
unioning the sets of paths along which a is partially
dead and removable at the entry of one of n's successors
(i.e., by unioning N-OPS_a(p), where p is a
successor of n). In order to ensure that only paths that
pass through n are considered, the result is intersected
with OnP s(n).
let v be the variable defined by a; then:
X-OPS_a(n) = OnPs(n) ∧ ∨_{m ∈ succ(n), N-OPT_a(m)=1} N-OPS_a(m)
Cost-Benefit Computation
The cost of enabling predication of a partially dead
statement a to allow its movement below a merge
point n is determined by identifying paths through
the merge point along which in the unoptimized program
a is not executed and in the optimized program
a predicated version of a is executed. Furthermore
resource anticipatability analysis indicates that along
paths where the predicated version of a is placed, the resource
needed by a is available.
frequencies of the above paths, as indicated by
path profiles, is the cost.
The benefit of enabling predication of a partially
dead statement a to allow its movement below a merge
point n is determined by identifying paths through the
merge point along which in the unoptimized program
a is executed while in the optimized program a is not
executed. Furthermore the resource anticipatability
analysis indicates that the sinking of a required to
achieve the above benefit is not inhibited by lack of
resources. The sum of the execution frequencies of
the above paths, as indicated by path profiles, is the
benefit.
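Once the gaining and losing path sets are known, the cost and benefit reduce to frequency sums. This sketch assumes the path-set representation used above: a bit mask per set, with `freq[i]` being the profiled frequency of path i.

```python
# Sketch: sum path frequencies over the paths that gain from (benefit) and
# that are penalized by (cost) predication-enabled sinking at a join point.
def cost_benefit(freq, gain_mask, lose_mask):
    benefit = sum(f for i, f in enumerate(freq) if gain_mask >> i & 1)
    cost = sum(f for i, f in enumerate(freq) if lose_mask >> i & 1)
    return benefit, cost
```

Sinking is then enabled at the join point whenever the returned benefit exceeds the returned cost.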
Code Sinking Framework
The results of the cost-benefit analysis are incorporated
into a code sinking framework in which predication
of a code statement is enabled with respect to
the merge points only if resources are available and the
benefit of predication enabled sinking is determined to
be greater than the cost of predication enabled sink-
ing. This framework is an extension of the code sinking
framework presented in section 2.1.
The data flow equations for enabling predication
are presented next. Predication enabled sinking is allowed
at join nodes at which the cost of sinking is less
than the benefit derived from sinking. In addition,
sinking is also enabled at a join node if it has been
enabled at an earlier join node. This is to ensure that
the benefits of sinking computed for the earlier join
node can be fully realized.
EPRED_a(n) = (Benefit_a(n) > Cost_a(n)) ∨ ∨_{m ∈ pred(n)} EPRED_a(m),
if n is a join point; 0 otherwise
The delayability analysis of section 2.1 is modified
to incorporate the results of enabling predication as
shown below. At a join point if predication based
sinking is enabled then as long as the assignment is
available along some path (as opposed to all paths in
section 2.1), it is allowed to propagate below the join
node.
N-DLY_a(n) = N-RANT_a(n) ∧ ∨_{m ∈ pred(n)} X-DLY_a(m),
if n is a join point with EPRED_a(n) = 1 (otherwise as in section 2.1)
X-DLY_a(n) = N-DLY_a(n) ∧ PRES_a(n)
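The predication-enabling rule can be sketched as a forward pass over join points. The representation is hypothetical: joins are listed in topological order, and `reaching_joins` maps a join to the earlier joins from which the statement may sink to it.

```python
# Sketch: enable predication where benefit exceeds cost, or where an
# earlier reaching join already enabled it (so that the benefit computed
# for the earlier join can be fully realized).
def enable_predication(joins_topo, reaching_joins, benefit, cost):
    epred = {}
    for n in joins_topo:
        inherited = any(epred[m] for m in reaching_joins[n])
        epred[n] = benefit[n] > cost[n] or inherited
    return epred
```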
Consider the paths that contribute to cost and benefit
of sinking assignment x = a*b in node 8 past the
join node 7 in the flow graph of Figure 1a. Availability
analysis will determine that the paths that initially
contain the subpath 10-8-7 are the ones along which
statement x = a*b is available for sinking at join
node 7. Optimizability analysis will determine that
the paths that end with subpath 7-3-1 are optimizable
while the paths that end with 7-6-4-2-1 and 7-6-5-2-1
are unoptimizable. Although x = a*b is dead along
the subpath 7-6-4-2-1, the lack of resources inhibits
sinking necessary to eliminate this deadness. To eliminate
deadness along this path x = a*b must be sunk
past node 6 to make it fully dead which is prevented
by lack of free resource. Based upon the above analysis
the path that benefits from sinking past node 7 is
10-8-7-3-1 while the paths along which cost of an additional
instruction is introduced are 10-9-7-6-4-2-1 and
10-9-7-6-5-2-1. Let us assume that the execution frequency
of path that benefits is greater than the sum of
the frequencies of the two paths that experience additional
cost. In this case predication based sinking will
be enabled at node 7. The modified sinking framework
will allow sinking past node 7 resulting in the
code placement shown in Figure 1c.
3.2 Path Profile Directed PRE
The resource availability analysis described in section
2.2 remains unchanged and must be performed
first. Next cost-benefit analysis that incorporates resource
availability information and uses path profiles
is performed. The results of this analysis are used
to enable speculation based hoisting at selected split
points in the next phase. Finally an extension of the
hoisting framework presented in section 2.2 is used to
perform resource-sensitive, profile-guided PRE.
The cost-benefit analysis consists of three steps: (a)
Anticipatability analysis identifies paths originating at
a node along which an expression is anticipatable and
thus can be hoisted, i.e., its hoisting is not blocked by
data dependences or lack of resources needed to execute
the expression; (b) Optimizability analysis identifies
paths leading to a node along which an expression
can be optimized because a prior evaluation of
the expression exists along these paths and the values
of the variables used by the expression have not
been modified since the computation of the expres-
sion; and (c) Cost-benefit computation identifies the
paths through a split point along which additional optimization
is achieved or an additional instruction is
introduced when speculation based hoisting is enabled
at the split point. By summing the frequencies of the
respective paths, provided by path profiles, the values
of cost and benefit are obtained.
Due to space limitations we omit the detailed
data flow equations of the first two steps of cost-benefit
analysis which compute the corresponding sets of paths
(such as OPS_e(n)). However, the principles used in their computation
are analogous to those used in section 3.1.
The cost of enabling speculation of a partially redundant
expression e to allow its movement above a
conditional (split point) n is determined by identifying
paths through the conditional along which e is
executed in the optimized program but not executed
in the unoptimized program. Furthermore, the resource
availability analysis indicates that the required
resource is available to allow the placement of e along
the path. The sum of the execution frequencies of
the above paths, as indicated by path profiles, is the
cost. The benefit of enabling speculation of a partially
redundant expression e to allow its movement above
a conditional (split point) n is determined by identifying
paths through the conditional along which a
redundant execution of e is eliminated. Furthermore
the hoisting required to remove the redundant execution
of e from these paths is not inhibited due to lack
of resources. The sum of the execution frequencies of
the above paths, as indicated by path profiles, is the
benefit.
The incorporation of speculation in the partial redundancy
framework of section 2.2 is carried out as
follows. The results of cost-benefit analysis are incorporated
into a code hoisting framework in which
speculation of an expression is enabled with respect
to the conditionals only if resources are available and
the benefit of speculation enabled hoisting is determined
to be greater than the cost of speculation enabled
hoisting. The equations for enabling speculation
are quite similar to those for enabling predication.
The modification of the down-safety analysis of section 2.2
is as follows. At a split point if the speculative hoisting
of an expression is enabled then as long as the expression
is anticipatable along some path (as opposed
to all paths in section 2.2), it is allowed to propagate
above the split point.
Consider the paths that contribute to cost and benefit
of hoisting expression x+y in node 8 past the split
node 7 in the flow graph of Figure 2a. Anticipatabil-
ity analysis will determine that the paths ending with
the subpath 7-8-10 are the ones along which expression
x+y is anticipatable for hoisting at split node 7. Opti-
mizability analysis will determine that the paths that
start with the subpath 1-3-7 are optimizable while the
paths that start with 1-2-4-6-7 and 1-2-5-6-7 are unop-
timizable. Although x + y is evaluated along the sub-path
1-2-4-6-7, the lack of resources inhibits hoisting
necessary to take advantage of this evaluation in eliminating
redundancy. To eliminate redundancy along
this path x + y must be hoisted above node 6 to make
it fully redundant which is prevented by lack of free
resource. Based upon the above analysis the path that
benefits from hoisting above node 7 is 1-3-7-8-10 while
the paths along which cost of an additional instruction
is introduced are 1-2-4-6-7-9-10 and 1-2-5-6-7-9-
10. Let us assume that the execution frequency of
path that benefits is greater than the sum of the frequencies
of the two paths that experience additional
cost. In this case speculation based hoisting will be
enabled at node 7. The modified hoisting framework
will allow hoisting above node 7 resulting in the code
placement shown in Figure 2c.
3.3 Cost of Profile Guided Optimization
An important component of the cost of the analysis
described in the preceding sections depends upon the
number of paths which are being considered during
cost-benefit analysis. In general the number of static
paths through a program can be in the millions. How-
ever, in practice the number of paths that need to be
considered by the cost-benefit analysis is quite small.
This is because, first, only the paths with non-zero execution
counts need to be considered. Second, only the
paths through a given function are considered at any
one time.
In Figure 5 the characteristics of path profiles for
the SPEC95 integer benchmarks are shown. The bar
graph shows that in 65% of the functions that were executed
no more than 5 paths with non-zero frequency
were found and only 1.4% of functions had over 100
paths. Moreover, no function had greater than 1000
paths with non-zero execution count. One approach
for reducing the number being considered in the analysis
is to include enough paths with non-zero frequency
such that these paths account for the majority of the
execution time of the program. The first table in Figure
5 shows how the number of functions that contain
up to 5, 10, 50, 100, and 1000 paths with non-zero
frequency changes as we consider enough paths to account
for 100%, 95% and 80% of the program execution
time. As we can see the number of functions
that require at most 5 paths increases substantially
(from 1694 to 2304) while the number of functions
that require over hundred paths reduces significantly
(from 35 to 1). The second table shows the maximum
number of paths considered among all the functions.
Again this maximum value reduces sharply (from 1000
to 103) as the paths considered account for less than
100% of the program execution time. In [10, 9] we illustrate
how the solution to cost-benefit analysis that
we described earlier can be easily adapted to the situation
in which only a subset of the paths with non-zero
frequency is considered.
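The selection of "enough paths with non-zero frequency to account for the majority of the execution time" can be realized as a greedy cut over the path profile. The sketch below (function and variable names are assumptions, not taken from the paper's implementation) keeps the hottest paths until the requested fraction of the total execution count is covered.

```python
def paths_for_coverage(freqs, fraction):
    """Return the smallest set of hottest paths whose combined frequency
    accounts for at least `fraction` of the total execution count.
    `freqs` maps path names to execution frequencies."""
    total = sum(freqs.values())
    chosen, acc = [], 0
    for path, f in sorted(freqs.items(), key=lambda kv: -kv[1]):
        if acc >= fraction * total:
            break
        chosen.append(path)
        acc += f
    return chosen

print(paths_for_coverage({"p1": 80, "p2": 15, "p3": 4, "p4": 1}, 0.95))  # ['p1', 'p2']
```

This mirrors the effect visible in the tables of Figure 5: lowering the coverage fraction from 100% to 95% or 80% sharply reduces the number of paths any one function contributes to the cost-benefit analysis.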
Concluding Remarks
In this paper we presented a strategy for PRE
and PDE code optimizations that results in synergy
between code placements found during optimization
and instruction scheduling by considering the availability
of resources during selection of code placement
points. In addition, the optimization driven by code
hoisting and sinking also takes advantage of speculation
and predication, which until now has only been
performed during instruction scheduling. Finally, our
data flow algorithms drive the application of speculation
and predication based upon path profiling information.
This allows us to trade off the quality of
code in favor of frequently executed paths at the cost
of sacrificing the code quality along infrequently executed
paths. The techniques we have described can
also be adapted for application to other optimizations
such as elimination of partially redundant loads and
partially dead stores from loops [3, 8]. We are extending
our algorithms to consider register pressure during
optimization.
Figure 5: Characteristics of Path Profiles for SPEC95
Integer Benchmarks. (The figure contains a bar graph of the
number of functions by number of paths with non-zero
execution frequency, with bars of 64.9%, 8.6%, 4%, and 1.4%,
and two tables: the number of functions with 1-5 such paths
when the paths considered account for 100%, 95%, and 80%
of total execution time is 1694, 2022, and 2304 respectively,
and the maximum number of paths over all functions.)
--R
"Efficient Path Profiling,"
"Partial Dead Code Elimination using Slicing Transformations,"
"Array Data-Flow Analysis for Load-Store Optimizations in Superscalar Architectures,"
"Using Profile Information to Assist Classic Code Optimization,"
"Practical Adaptation of Global Optimization Algorithm of Morel and Renvoise,"
"VLIW Compilation Techniques in a Superscalar Environment,"
"Trace Scheduling: A Technique for Global Microcode Compaction,"
"Code Optimization as a Side Effect of Instruction Scheduling,"
"Path Profile Guided Partial Dead Code Elimination Using Predication,"
"Path Profile Guided Partial Redundancy Elimination Using Speculation,"
"Region Scheduling: An Approach for Detecting and Redistributing Parallelism,"
"The Superblock: An Effective Technique for VLIW and Superscalar Compilation,"
"Highly Concurrent Scalar Processing,"
"HPL PlayDoh Architecture Specification: Version 1.0,"
"Global Optimization by Suppression of Partial Redundancies,"
"Partial Dead Code Elimination,"
"Lazy Code Motion,"
"Sentinel Scheduling for VLIW and Superscalar Processors,"
"Data Flow Frequency Analysis,"
"Critical Path Reduction for Scalar Processors,"
"Data Flow Analysis as Model Checking,"
--TR
Highly concurrent scalar processing
Region Scheduling
Using profile information to assist classic code optimizations
Lazy code motion
Sentinel scheduling
The superblock
VLIW compilation techniques in a superscalar environment
Partial dead code elimination
Practical adaption of the global optimization algorithm of Morel and Renvoise
Critical path reduction for scalar programs
Data flow frequency analysis
Efficient path profiling
Array data flow analysis for load-store optimizations in fine-grain architectures
Partial dead code elimination using slicing transformations
Global optimization by suppression of partial redundancies
Data Flow Analysis as Model Checking
Path Profile Guided Partial Dead Code Elimination Using Predication
Code Optimization as a Side Effect of Instruction Scheduling
--CTR
J. Adam Butts , Guri Sohi, Dynamic dead-instruction detection and elimination, ACM SIGOPS Operating Systems Review, v.36 n.5, December 2002
Sriraman Tallam , Xiangyu Zhang , Rajiv Gupta, Extending Path Profiling across Loop Backedges and Procedure Boundaries, Proceedings of the international symposium on Code generation and optimization: feedback-directed and runtime optimization, p.251, March 20-24, 2004, Palo Alto, California
Max Hailperin, Cost-optimal code motion, ACM Transactions on Programming Languages and Systems (TOPLAS), v.20 n.6, p.1297-1322, Nov. 1998
Youtao Zhang , Rajiv Gupta, Timestamped whole program path representation and its applications, ACM SIGPLAN Notices, v.36 n.5, p.180-190, May 2001
Raymond Lo , Fred Chow , Robert Kennedy , Shin-Ming Liu , Peng Tu, Register promotion by sparse partial redundancy elimination of loads and stores, ACM SIGPLAN Notices, v.33 n.5, p.26-37, May 1998
Vikki Tang , Joran Siu , Alexander Vasilevskiy , Marcel Mitran, A framework for reducing instruction scheduling overhead in dynamic compilers, Proceedings of the 2006 conference of the Center for Advanced Studies on Collaborative research, October 16-19, 2006, Toronto, Ontario, Canada
Mary Lou Soffa, Complete removal of redundant expressions, ACM SIGPLAN Notices, v.33 n.5, p.1-14, May 1998
John Whaley, Partial method compilation using dynamic profile information, ACM SIGPLAN Notices, v.36 n.11, p.166-179, 11/01/2001
Mary Lou Soffa, Load-reuse analysis: design and evaluation, ACM SIGPLAN Notices, v.34 n.5, p.64-76, May 1999
Mary Lou Soffa, Complete removal of redundant expressions, ACM SIGPLAN Notices, v.39 n.4, April 2004 | code optimization;functional unit resources;aggressive code motion;instruction schedulers;data flow algorithms;optimization;data flow analysis;partial dead code elimination;instruction reordering;resource availability;partial redundancy elimination;resource-sensitive profile-directed data flow analysis |
267960 | An approach for exploring code improving transformations. | Although code transformations are routinely applied to improve the performance of programs for both scalar and parallel machines, the properties of code-improving transformations are not well understood. In this article we present a framework that enables the exploration, both analytically and experimentally, of properties of code-improving transformations. The major component of the framework is a specification language, Gospel, for expressing the conditions needed to safely apply a transformation and the actions required to change the code to implement the transformation. The framework includes a technique that facilitates an analytical investigation of code-improving transformations using the Gospel specifications. It also contains a tool, Genesis, that automatically produces a transformer that implements the transformations specified in Gospel. We demonstrate the usefulness of the framework by exploring the enabling and disabling properties of transformations. We first present analytical results on the enabling and disabling properties of a set of code transformations, including both traditional and parallelizing transformations, and then describe experimental results showing the types of transformations and the enabling and disabling interactions actually found in a set of programs. | Introduction
Although code improving transformations have been applied by compilers for many years,
the properties of these transformations are not well understood. It is widely recognized that the
place in the program code where a transformation is applied, the order of applying code
transformations, and the selection of the particular code transformation to apply can have an impact
on the quality of code produced. Although concentrated research efforts have been devoted to the
development of particular code improving transformations, the properties of the transformations
have not been adequately identified or studied. This is due in part to the informal methods used to
describe code improving transformations. Because of the lack of common formal language or
notation, it is difficult to identify properties of code transformations, to compare transformations
and to determine how transformations interact with one another.
By identifying various properties of code improving transformations, such as their
interactions, costs, expected benefits, and application frequencies, informed decisions can be made
as to what transformation to apply, where to apply them, and in which order to apply them. The
order of application is important to the quality of code as transformations can interact with one
another by creating or destroying the potential for further code improving transformations. For
* This work was partially supported by NSF under grant CCR-9407061 to Slippery Rock University and by CCR-
9109089 to the University of Pittsburgh.
* Dr. Whitfield's address is Department of Computer Science, Slippery Rock University, Slippery Rock, PA 16057
example, the quality of code produced would be negatively affected if the potential for applying a
beneficial transformation was destroyed by the application of a less beneficial transformation.
Certain types of transformations may be beneficial for one architecture but not for another. The
benefits of a transformation can also be dependent on the type of scheduler (dynamic or static) that
is used. 15
One approach that can be taken to determine the most appropriate transformations and the
order of application for a set of programs is to implement a code transformer program (optimizer)
that includes a number of code improving transformations, apply the transformations to the
programs, and then evaluate the performance of the transformed code. However, actually
implementing such a code transforming tool can be a time consuming process, especially when the
detection of complex conditions and global control and data dependency information is required.
Also, because of the ad hoc manner in which such code transformers are usually developed, the
addition of other transformations or even the deletion of transformations may necessitate a
substantial effort to change the transformer. Another approach is to modify an existing optimizer.
However, optimizing compilers are often quite large (e.g., SUIF 12 is about 300,000 lines of C++
code and the GNU C compiler 8 is over 200,000 lines of code) and complex, making it difficult to
use them in experiments that take into account the various factors influencing the performance of
the transformed code.
In this paper, we present a framework for exploring properties of code improving
transformations. The major component of the framework is a code transformation specification
language, Gospel. The framework includes a technique that utilizes the specifications to
analytically investigate the properties of transformations. Gospel is also used in the design of
Genesis, a tool that automatically produces a code transformer program from the specifications,
enabling experimentation. A specification for a transformation consists of expressing the conditions
in the program code that must exist before the transformation can be safely applied and the actions
needed to actually implement the transformation in the program code. The specification uses a
variant of first order logic and includes the expression of code patterns and global data and control
dependencies required before applying the transformation. The actions are expressed using
primitive operations that modify the code. The code improving transformations that can be
expressed in Gospel are those that do not require a fix-point computation. This class includes many
of the traditional and parallelizing code improving transformations.
We demonstrate how the framework can be used to study the phase ordering problem of
transformations by exploring the enabling and disabling properties of transformations. Using
Gospel, we first show that enabling and disabling properties can be established analytically. We
also demonstrate through the use of Genesis that these properties can be studied experimentally.
Using Genesis, code transformers were automatically produced for a set of transformations
specified in Gospel, and then executed to transform a test suite of programs. We present results on
experiments that explored the kinds of transformations found in the test suite and the types and
numbers of transformation interactions that were found.
A number of benefits accrue from such a framework. Guidelines suggesting an application
order for a set of code improving transformations can be derived from both the analytical and
experimental exploration of the interactions. Also, a new transformation can be specified in Gospel
and its relationship to other transformations analytically and experimentally investigated. From the
specifications, a transformer can be generated by Genesis and using sample source programs, the
user can experimentally investigate transformations on the system under consideration. The
decision as to which transformations to include for a particular architecture and the order in which
these transformations should be applied can be easily explored. New transformations that are
particularly tailored to an architecture can be specified and used to generate a transformer. The
effectiveness of the transformations can be experimentally determined using the architecture.
Transformations that are not effective can be removed from consideration, and a new
transformation can be added by simply changing the specifications and rerunning Genesis,
producing a program (transformer) that implements the new transformation. Transformations that
can safely be combined could also be investigated analytically and the need to combine them can
be explored experimentally. Another use of Gospel and Genesis is as a teaching tool. Students can
write specifications of existing transformations, their own transformations, or can modify and tune
transformations. Implementations of these transformations can be generated by Genesis, enabling
experimentation with the transformations.
Prior research has been reported on tools that assist in the implementation of code improving
transformations, including the analysis needed. Research has been performed on automatic code
generation useful in the development of peephole transformers. 4,6,7,9 In these works, the
transformations considered are localized and require no global data flow information. A number of
tools have been designed that can generate analyses. Sharlit 13 and PAG 1 use lattice-based
specifications to generate global data-flow analyses. SPARE is another tool that facilitates the
development of program analysis algorithms 14 . This tool supports a high level specification
language through which analysis algorithms are expressed. The denotational nature of the
specifications enables automatic implementation as well as verification of the algorithms. A
software architecture useful for the rapid prototyping of data flow analyzers has also recently been
presented. 5
Only a few approaches have been developed that integrate analysis and code transformations,
which our approach does. A technique to combine specific transformations by creating a
transformation template that fully describes the combined operations was developed as part of the
framework for iteration-reordering loop transformations. 11 New transformations may be added to
the framework by specifying new rules. This work applies only to reordering the execution
order of loop iterations in a perfect (tight) loop nest and does not provide a technique to specify
or characterize transformations in general.
The next section of this paper discusses the framework developed to specify transformations.
Section 3 presents details of the Gospel language. Section 4 shows how Gospel can be used in the
analytical investigation of the enabling and disabling conditions of transformations, and in the
automatic generation of transformers. Section 5 demonstrates the utility of the specification
technique using Genesis and presents experimental results. Conclusions are presented in Section 6.
2. Overview of the Transformation Framework
The code improving transformation framework, shown in Figure 1, has three components:
Gospel, a code transformation specification language; an analytical technique that uses Gospel
specifications to facilitate formal proofs of transformation properties; and Genesis, a tool that uses
the Gospel specifications to produce a program that implements the application of transformations.
These three components are used to explore transformations and their properties. In this paper, we
use the framework to explore disabling and enabling properties.
A Gospel specification consists of the preconditions needed in program code in order for a
transformation to be applicable, and the code modifications that implement the transformation. Part
of the precondition specification is the textual code pattern needed for a transformation. An
example includes the existence of a statement that assigns a variable to a constant or the existence
of a nested loop. Thus, the code patterns operate on program objects, such as loops, statements,
expressions, operators and operands.
In order to determine whether it is safe to apply a transformation, certain data and control
dependencies may also be needed. Program objects are also used to express these dependence
relationships. In describing transformations, Gospel uses dependencies expressed in terms of flow,
anti, output, and control dependencies. 21 These dependencies are quantified and combined using
logical operators to produce complex data and control conditions. A flow dependence (S i δ S j ) is a
dependence between a statement S i that defines a variable and a statement S j that uses the definition
made in S i . An anti-dependence (S i δ⁻ S j ) exists between a statement S i that uses a variable that is then
defined in statement S j . An output dependence (S i δ° S j ) is a dependence between a statement
S i that defines (or writes) a variable that is later defined (or written) by S j . A control dependence
exists between a control statement S i and all of the statements S j under its control. The
concept of data direction vectors for both forward and backward loop-carried dependencies of array
elements is also needed in transformations for parallelization. 10 Each element of the data
dependence vector consists of either a forward, backward, or equivalent direction represented by <,
>, or =, respectively. These directions can be combined into >=, <=, and *, with * meaning any
direction. The number of elements in the direction vector corresponds to the loop nesting level of
the statements involved in the dependence.
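For straight-line code, the three data dependence kinds can be computed directly from per-statement def and use sets. The sketch below is a deliberate simplification that ignores intervening kills, control flow, and array subscript analysis, all of which a real dependence analyzer must handle; it is offered only to make the definitions above concrete.

```python
def data_dependences(stmts):
    """stmts: list of (defs, uses) variable sets, in program order.
    Returns (i, j, kind) triples for i < j, ignoring intervening kills."""
    deps = []
    for i, (di, ui) in enumerate(stmts):
        for j in range(i + 1, len(stmts)):
            dj, uj = stmts[j]
            if di & uj:
                deps.append((i, j, "flow"))    # S_i defines, S_j uses
            if ui & dj:
                deps.append((i, j, "anti"))    # S_i uses, S_j defines
            if di & dj:
                deps.append((i, j, "output"))  # both define the variable
    return deps

# S0: x = 1;  S1: y = x;  S2: x = 2
print(data_dependences([({"x"}, set()), ({"y"}, {"x"}), ({"x"}, {"y"})]))
# [(0, 1, 'flow'), (0, 2, 'output'), (1, 2, 'flow'), (1, 2, 'anti')]
```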
In some cases, code improving transformations have been traditionally expressed using global
data flow information. This information can either be expressed as a combination of the data and
control dependencies 21 or can be introduced in Gospel as a relationship that needs to be computed
and checked. The underlying assumption of Gospel is that any algorithm needed to compute the
data flow or data dependency information is available. Thus, Gospel uses basic control and data
dependency information with the possibility of extensions to other types of data flow information.
It should be noted that in the more than twenty transformations studied in this research, all data flow
information was expressed in terms of combinations of data and control dependencies. 16, 17 A
sample of transformation specifications is given in Appendix B.
Gospel also includes the specification of the code modifications needed to implement a
transformation. Although code improving transformations can produce complex code
modifications, the code changes are expressed in Gospel by primitive operations that can be applied
in combinations to specify complex actions. These operations are applied to code objects such as
statements, expressions, operands and operations. Using primitive operations to express code
modifications provides the flexibility to specify a wide range of code modifications easily.
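A handful of such primitive operations can be sketched over a list of statement tuples. The names below echo the modify and move operations that appear in the paper's specifications, but the signatures and tuple layout are assumptions for illustration, not the Genesis implementation.

```python
def modify(stmts, idx, pos, value):
    """Overwrite the field at position `pos` of statement `idx`."""
    s = list(stmts[idx])
    s[pos] = value
    stmts[idx] = tuple(s)

def move(stmts, src, dst):
    """Remove the statement at index `src` and reinsert it at `dst`."""
    stmts.insert(dst, stmts.pop(src))

def delete(stmts, idx):
    """Remove the statement at index `idx`."""
    del stmts[idx]

# Statements as (dest, opcode, opr2, opr3) tuples.
code = [("x", "=", 5, None), ("y", "+", "x", 1)]
modify(code, 1, 2, 5)   # replace the use of x with the constant 5
print(code)             # [('x', '=', 5, None), ('y', '+', 5, 1)]
```

Complex transformations are then expressed as sequences of these primitives, which is exactly the flexibility the ACTION section relies on.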
Another component of the framework is an analytical technique useful for proving properties
of transformations. The technique uses the specification from Gospel to provide a clear, concise
description of a transformation useful in analysis. We show how this component was used in
establishing the enabling and disabling properties of a set of transformations.
The last component of the framework is Genesis, a tool that generates a program that
implements transformations from the Gospel specification of those transformations. Thus, the
generated program contains code that will check that conditions needed for the safe application of
a transformation are satisfied and also contains code that will perform the code modifications as
expressed in the Gospel specification. A program to be transformed is then input into the program
generated by Genesis and the output produced is the program transformed by the specified
transformations. A run-time interface is provided that either permits the user to select the type and
place of application of a transformation, or it automatically finds all applicable transformations at
all points. We demonstrate the utility of Genesis in determining the kinds and frequencies of
transformations occurring in a number of programs, and the types and frequencies of enabling and
disabling interactions.
Figure 1 presents the code improving framework and uses of the framework. The three
components of the framework are shown in the box and some applications of the framework are
shown in ovals. Solid lines connect the framework with the applications that are described in this
paper. A solid line connects the framework to the interaction prover used to establish enabling and
disabling properties of transformations. There is another solid line between the framework and the
experimental studies of enabling and disabling properties. The dotted line connecting the
framework and the combining transformations represents a potential use of the framework yet to be
fully explored.
Code Improving Transformation Framework Uses
Figure 1. Components and Utilization of the Transformation Framework
________________________________________________________________________
3. Description of the Gospel Language
Gospel is a declarative specification language capable of specifying a class of transformations that
can be performed without using fix-point computation. We have specified over twenty
transformations using Gospel, including specifications for invariant code motion, loop fusion,
induction variable elimination, constant propagation, copy propagation and loop unrolling.
Transformations that do require fix-point computation such as partial dead code elimination and
partial redundancy elimination cannot be specified. Likewise, although Gospel can be used to
specify a type of constant propagation and folding, it cannot be used, for example, to specify
constant propagation transformations requiring fixed point computation. However, studies have
shown that code seldom contains the types of optimizations needing iteration. 3 A BNF grammar for
a section of Gospel appears in Appendix A. The grammar is used to construct well-formed
specifications and also used in the implementation of the Genesis transformer.
In this paper, we assume the general form of statements in a program to be transformed is
three address code extended to include loop headers and array references. However, Gospel and
Genesis can be adapted to handle other representations including source level representation. We
assume that a basic three address code statement has the form:
opr 1 = opr 2 opcode opr 3
The three address code retains the loop headers and array references from the source program,
which enables the user to specify loop level transformations and array transformations.
The template for a specification of a transformation consists of a Name that is used to identify
the particular code improving transformation followed by three major specification sections
identified by keywords: DECLARATION, PRECONDITION and ACTION. The
PRECONDITION section is decomposed into two sections, Code_Pattern and Depend. The overall
design of a Gospel specification follows.
DECLARATION
PRECONDITION
Code_Pattern
Depend
ACTION
The DECLARATION section is used to declare variables whose values are code objects of
interest (e.g., loop, statement). Code objects have attributes as appropriate such as a head for a loop
and position for an operand. The PRECONDITION section contains a description of the code
pattern and data and control dependence conditions, and the ACTION section consists of
combinations of primitive operations to perform the transformation.
Figure 2 presents a Gospel specification of a Constant Propagation (CTP) transformation (see
Section 3.2 for details). The specification uses three variables S i , S j and S l whose values are
statements. The Code_Pattern section specifies the code pattern, consisting of any statement
that defines a constant: type (S i .opr 2 ) == const. S i will have as its value such
a statement if it exists. In the Depend section, S j is used to determine which statement uses the
constant. The pos attribute records the operand position (first, second or third) of the flow
dependence between S i and S j . The second statement with S l ensures that there are no other
definitions of the constant assignment that might reach S j . Again, the pos attribute records the
position of the flow dependence between S j and S l . The S j != S l specification indicates that the two
statements are not the same statement and the operand (S j , pos) != operand (S l , pos) specification
ensures that the dependence position recorded in S j does not involve the same variables as the
dependence found in S l .
_______________________________________________________________________
DECLARATION
PRECONDITION
Code_Pattern Find a constant definition
any S i : type (S i .opr 2 ) == const;
Depend Use of S i with no other definitions
any (S j , pos): flow_dep (S i , S j , (=));
no (S l , pos): flow_dep (S l , S j , (=)) AND S j != S l
AND operand (S j , pos) != operand (S l , pos);
ACTION Change use in S j to be constant
modify (operand (S j , pos), S i .opr 2 );
Figure 2. Gospel Specification of Constant Propagation
If a S j is found that meets the requirements and no S l 's are found that meet the specified
requirements, then the operation expressed in the ACTION section is performed. The action is to
modify the use at S j to be the constant found as the second operand of S i .
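The semantics of the CTP specification can be mimicked for straight-line three-address code as below. This is an illustrative sketch (the tuple layout and function name are assumptions), not the transformer Genesis would generate: it ignores control flow, and its "no other definition" test is the straight-line analogue of the Depend section's no S l condition.

```python
def constant_propagation(stmts):
    """stmts: list of (dest, opcode, src1, src2) tuples in straight-line
    order. A constant definition is (v, '=', c, None) with c an int."""
    for i, (dest, op, a, b) in enumerate(stmts):
        if op != "=" or not isinstance(a, int):
            continue                              # Code_Pattern: S_i defines a constant
        if any(t[0] == dest for k, t in enumerate(stmts) if k != i):
            continue                              # Depend: some other S_l defines it
        for j in range(i + 1, len(stmts)):        # Depend: uses S_j reached by S_i
            d2, op2, a2, b2 = stmts[j]
            stmts[j] = (d2, op2,
                        a if a2 == dest else a2,  # ACTION: modify(operand(S_j, pos), S_i.opr2)
                        a if b2 == dest else b2)
    return stmts

# x = 5;  y = x + 1   becomes   x = 5;  y = 5 + 1
print(constant_propagation([("x", "=", 5, None), ("y", "+", "x", 1)]))
# [('x', '=', 5, None), ('y', '+', 5, 1)]
```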
Next consider the specification of the parallelizing transformation Loop Circulation (CRC)
found in Figure 3, which defines two statements and three tightly (perfect) nested loops, that is,
loops without any statements occurring between the headers. In the Code_Pattern section, any
specifies an occurrence of tightly nested loops L 1 , L 2 , and L 3 . The data dependence conditions in
the Depend section first ensure that the loops are tightly nested by specifying no flow dependences
between loop headers. Next, the Depend section expresses that there are no pairs of statements in
the loop with a flow dependence and a (<,>) direction vector. If no such statements are found then
the Heads and Ends of the loops are interchanged as specified in the ACTION section.
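The legality test that the Depend section encodes, namely that no flow dependence carries a (<,>) direction vector, can be sketched directly. Direction vectors are written here as tuples of the symbols used in the text; the function name is an assumption.

```python
def can_interchange(direction_vectors):
    """Two nested loops may be interchanged unless some dependence has
    direction vector (<, >): interchanging the loops would turn it into
    (>, <), a dependence flowing backwards, so the move would be unsafe."""
    return all(v[:2] != ("<", ">") for v in direction_vectors)

print(can_interchange([("<", "="), ("=", "<")]))  # True
print(can_interchange([("<", ">")]))              # False
```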
The next section provides more details about the Gospel language.
3.1. Gospel Types and Operations
Variables, whose values are code elements, are defined in the declaration section and have the
DECLARATION: id_list.
Variables are declared to be one of the following types: Statement, Loop, Nested loops, Tight
loops, or Adjacent loops. Thus, objects of these types have as their value a pointer to a statement,
loop, nested loop, tight loop or adjacent loop, respectively. All types have pre-defined attributes
denoting relevant properties, such as next (nxt) or previous (prev). The usual numeric constants
(integer and real) are available in Gospel specifications. Besides these constants, two classifications
of pre-defined constants are also available: operand types and opcode values. These constants
________________________________________________________________________
DECLARATION
PRECONDITION
Code_Pattern Find Tightly nested loops
any (L 1 ,
Depend Ensure perfect nesting, no flow_dep with<,>
no
no
ACTION Interchange the loops
move
Figure 3. Gospel Specification of Loop Circulation
reflect the constant values of the code elements that are specified in Gospel. Examples of constants
include const for a constant operand and var for a variable operand. Typical mathematical opcodes
as well as branches and labels can appear in the specification code. Gospel can be extended to
include other op codes and variable types by changing the grammar and any tools, such as Genesis,
that uses the grammar.
A variable of type Statement can have as its value any of the statements in the program and
possesses attributes indicating the first, second and third operand (opr 1 , opr 2 , and opr 3 ,
respectively) and the operation (opcode). Additionally, a pos attribute exists to maintain the operand
position of a dependence required in the Depend section. A Loop-typed variable points to the header
of the loop, and has as attributes Body, which identifies all the statements in the loop and Head,
which defines Lcv, the loop control variable, Init, the initial value and Final, the last value of the
loop control variable. The End of the loop is also an attribute. Thus, a typical loop structure, with
its attributes is:
Head {L.Head defines L.Init, L.Final, and L.Lcv}
Loop_body {L.Body}
End_of_Loop {L.End}
Nested loops, Tight loops, and Adjacent loops are composite objects whose components are
of type Loop. Nested loops are defined as two (or more) loops where the second named loop
appears lexically within the first named loop. Tight loops restrict nested loops by ensuring that
there are no statements between loop headers. Adjacent loops are nested loops without statements
between the end of one loop and the header of the next loop.
The id_list after the keyword DECLARATION is either a simple list (e.g., statement and loop
identifiers) or a list of pairs (e.g., identifiers for a pair of nested, adjacent or tight loops). For
example, Tight: (Loop_One, Loop_Two) defines a loop structure consisting of two tightly nested
loops.
3.2. The Gospel Precondition Section
In order to specify a code improving transformation and conditions under which it can be
safely applied, the pattern of code and the data and control dependence conditions that are needed
must be expressed. These two components constitute the precondition section of a specification.
The keyword PRECONDITION is followed by the keywords Code_Pattern, which identifies the
code pattern specifications, and Depend which identifies the dependence specification.
Code Pattern Specification
The code pattern section specifies the format of the statements and loops involved in the
transformation. The code pattern specification consists of a quantifier followed by the elements
needed and the required format of the elements.
quantifier element_list: format_of_elements;
The quantifier operators can be one of any, all or no with the following meanings:
all - returns a set of all the elements of the requested types for a successful match
any - returns a set of one element of the requested type if a match is successful
no - returns a null set if the requested match is successful
For example, the quantifier element list any (S_j) returns a pointer to some statement S_j.
The second part of the code pattern specification, format_of_elements, describes the format of
the elements required. If Statement is the element type, then format_of_elements restricts the
statement's operands and operator. Similarly, if Loop is the element type, format_of_elements
restricts the loop attributes. Thus, if constants are required as operands or if loops are required to
start at iteration 1, this requirement is specified in the format_of_elements. An example code pattern
specification which specifies that the final iteration count is greater than the initial value is:
any Loop: Loop.Final - Loop.Init > 0
Expressions can be constructed in format_of_elements using the and and or operators with their
usual meaning. Also, restrictions can be placed on either the type of an operand (i.e., const, or var)
or the position, pos, of the opcode as seen in the Code_Pattern section of Figure 2.
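As a rough illustration of the quantifier semantics above (a Python sketch with invented names, not the Genesis implementation), each quantifier can be read as returning a success flag together with the matched set:

```python
def quantify(quantifier, elements, predicate):
    """Evaluate an all/any/no quantifier over candidate code elements.

    Returns (success, matched_set). Illustrative reading of the Gospel
    semantics: all returns every match, any returns one match, and no
    succeeds exactly when nothing matches.
    """
    matches = [e for e in elements if predicate(e)]
    if quantifier == "all":
        return (len(matches) > 0, matches)      # set of all matching elements
    if quantifier == "any":
        return (len(matches) > 0, matches[:1])  # one matching element
    if quantifier == "no":
        return (len(matches) == 0, [])          # null set iff nothing matched
    raise ValueError("unknown quantifier: " + quantifier)

# Statements modeled as (opcode, operand-kind) pairs for the example.
stmts = [("assign", "const"), ("add", "var"), ("assign", "var")]
ok, found = quantify("any", stmts, lambda s: s[0] == "assign")
assert ok and found == [("assign", "const")]
ok, _ = quantify("no", stmts, lambda s: s[0] == "mult")
assert ok
```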
Depend Specification
The second component of the PRECONDITION section is the Depend section, which specifies the
required data or control dependencies of the transformation. The dependence specification consists
of expressions quantified by any, no, or all that return both a Boolean truth value and the set of
elements that meet the conditions. If the pos attribute is used, then the operand position of the
dependence is also returned. The general form of the dependence specification is:
quantifier element: sets_of_elements, dependence_conditions
The sets_of_elements component permits specifying set membership of elements; mem(Element,
Set) specifies that Element is a member of the defined Set. Set can be described using predefined
sets, the name of a specific set, or an expression involving set operations and set functions such as
union and intersection. The dependence_conditions clause describes the data and control
dependencies of the code elements and takes the form:
type_of_dependence (StmtId, StmtId, Direction).
In this version of Gospel, the dependence type can be either flow dependent (flow_dep), anti-
dependent (anti_dep), output dependent (out_dep), or control dependent (ctrl_dep). Direction is
a description of the direction vector, where each element of the vector consists of either a forward,
backward, or equivalent direction (represented with <, >, and =, respectively; <= and >= can also be
used), or any, which allows any direction. Direction vectors are needed to specify loop-carried
dependencies of array elements for parallelizing transformations. This direction vector may be
omitted if loop-carried dependencies are not relevant.
As an example, the following specification is for one element named S_i that is an element of
Loop_1 such that there is an S_j, an element of Loop_2, and there is either a flow dependence or an
anti-dependence between S_i and S_j:
any S_i: mem(S_i, Loop_1), any S_j: mem(S_j, Loop_2),
flow_dep(S_i, S_j) or anti_dep(S_i, S_j)
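The direction-vector matching described above can be sketched as follows (Python, hypothetical encoding; the paper does not prescribe an implementation). Each required entry admits a set of concrete directions, and a dependence matches when every component is admitted:

```python
# Which concrete directions each specifier admits (an illustrative encoding).
ALLOWED = {
    "<":   {"<"},
    ">":   {">"},
    "=":   {"="},
    "<=":  {"<", "="},
    ">=":  {">", "="},
    "any": {"<", ">", "="},
}

def direction_match(required, actual):
    """True when every element of the actual direction vector is admitted
    by the corresponding element of the required vector."""
    if len(required) != len(actual):
        return False
    return all(a in ALLOWED[r] for r, a in zip(required, actual))

# A (<,>) dependence, the pattern relevant to loop interchange:
assert direction_match(["<", ">"], ["<", ">"])
assert direction_match(["any", ">"], ["=", ">"])
assert not direction_match(["<", ">"], ["<", "="])
```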
3.3. The Gospel Action Section
We decompose the code modification effects of applying transformations into a sequence of
five primitive operations, the semantics of which are indicated in Table 1. These operations are
overloaded in that they can apply to different types of code elements. The five primitive operations,
their parameters and semantics are:
Table 1. Action Operations
Operation (Parameters): Semantics
Move (Object, After_Object): move Object and place it following After_Object
Add (Obj_Desc, Obj_Name, After_Obj): add Obj_Name with description Obj_Desc, place it after After_Obj
Delete (Object): delete Object
Copy (Obj, After_Obj, New_Name): copy Obj into New_Name, place it after After_Obj
Modify (Object, Object_Description): modify Object with Object_Description
An example of a Move operation that moves the Loop_1 header after the Loop_2 header is:
move(Loop_1.Head, Loop_2.Head).
An example of a Modify action that modifies the end of Loop_2 to jump to the header of Loop_2 is:
modify(Loop_2.End, address(Loop_2.Head)).
These primitive operations are combined to fully describe the actions of a transformation. It may
be necessary to repeat some actions for statements found in the PRECONDITION section. Hence,
a list of actions may be preceded by forall and an expression describing the elements to which the
actions should be applied.
The flow of control in a specification is implicit, with the exception of the forall construct
available in the action section. In other words, the ACTION keyword acts as a guard that does not
permit entrance into this section unless all conditions have been met.
4. Applications of the Gospel Specification
The Gospel specifications are useful in a number of ways. In this section, we demonstrate the
utilization of the specifications to explore the phase ordering problem of transformations by
analytically establishing enabling and disabling properties. In Section 4.2, we show how Gospel is
used to produce an automatic transformer generator, Genesis, which can be used to explore
properties of transformations experimentally.
4.1. Technique to Analyze Specifications
The Gospel specifications can be analyzed to determine properties of transformations, and in
particular, we use the analysis technique for establishing enabling and disabling properties of
transformations. Through the enabling and disabling conditions, the interactions of transformations
that can create conditions and those that can destroy conditions for applying other transformations
are determined. Knowing the interactions that occur among transformations can be useful in
determining when and where to apply transformations. For example, a strategy might be to apply a
transformation that does not destroy conditions for applying another transformation in order to
exploit the potential of the second transformation, especially if the second transformation is
considered to be more beneficial.
4.1.1. Enabling and Disabling Conditions
Enabling interactions occur between two transformations when the application of one
transformation creates the conditions for the application of another transformation that previously
could not be applied. Disabling interactions occur when one transformation invalidates conditions
that exist for applying another transformation. In other words, transformation A enables
transformation B (denoted A → B) if before A is performed, B is not applicable, but after A is
performed, B can be applied (B's precondition is now true). Similarly, transformation A
disables transformation B (denoted A ↛ B) if the preconditions for both transformation A and B
are true, but once A is applied, B's precondition becomes false. These properties are involved in
the phase ordering problem of transformations.
Before determining the interactions among transformations, the conditions for enabling and
disabling each transformation must be established. The enabling and disabling conditions are found
by analyzing the PRECONDITION specifications of the transformations. For each condition in the
Code_Pattern and Depend section of a transformation, at least one enabling/disabling condition is
produced. For example, if a code pattern includes:
any Statement: Statement.opcode == assign
then the enabling condition is the creation of a statement with the opcode of assign, and the disabling
conditions are the deletion of such a statement or the modification of the statement's opcode. The
enabling and disabling conditions of six transformations derived from their specifications (see
appendix B for their Gospel specifications) are given in Table 2.
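The derivation illustrated by the assign example can be mimicked mechanically; the sketch below (Python, invented names, a simplification of the by-hand analysis in the paper) produces candidate enabling and disabling conditions from one Code_Pattern restriction:

```python
def conditions_from_pattern(element_type, attribute, value):
    """From one Code_Pattern restriction (e.g., Statement.opcode == assign),
    derive the actions that could enable or disable the transformation.
    Illustrative only; the paper derives its conditions by hand."""
    enabling = [
        f"create a {element_type} with {attribute} == {value}",
        f"modify a {element_type}'s {attribute} to {value}",
    ]
    disabling = [
        f"delete a {element_type} with {attribute} == {value}",
        f"modify such a {element_type}'s {attribute} away from {value}",
    ]
    return enabling, disabling

en, dis = conditions_from_pattern("Statement", "opcode", "assign")
assert "create a Statement with opcode == assign" in en
assert len(dis) == 2
```

Depend-section conditions would contribute further entries in the same way (e.g., creating or destroying a dependence), which is how the numbered conditions in Table 2 arise.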
4.1.2 Interactions Among Transformations
Using the Gospel specifications, we can prove the non-existence of interactions. We also use
the specifications in developing examples that demonstrate the existence of interactions. Such an
Transformation: Dead Code Elimination (DCE)
Enabling conditions:
1. Create an S_i whose value is not used
2. Non-existence of S_l with (S_i δ S_l): delete S_l; path is deleted*
Disabling conditions:
1. Destroy the S_i that is not used
2. Existence of S_l with (S_i δ S_l): introduce an S_l that uses the value computed by S_i

Transformation: Constant Propagation (CTP)
Enabling conditions:
1. Create S_i
2. Insert S_j such that (S_i δ S_j)
3. Non-existence of S_l with (S_l δ S_j): modify S_l so that S_l == S_i; destroy (S_l δ S_j) by
a) introducing a definition*, b) deleting S_l*, c) deleting a path*
Disabling conditions:
1. Destroy S_i
2. Non-existence of S_j with (S_i δ S_j)
3. Existence of S_l with (S_l δ S_j): modify S_l so that S_l ≠ S_i; create (S_l δ S_j) by
a) deleting a definition*, b) introducing S_l where S_l ≠ S_i, c) creating a path from S_l to S_j

Transformation: Constant Folding (CFO)
Enabling conditions:
1. Create S_i of the form CONST opcode CONST
Disabling conditions:
1. Remove or modify S_i

Transformation: Loop Unrolling (LUR)
Enabling conditions:
1. Create a DO loop L
Disabling conditions:
1. Destroy the DO loop L

Transformation: Loop Fusion (FUS)
Enabling conditions:
1. Existence of 2 adjacent loops: add a loop
2. Two DO loops have identical headers: modify a header
3. Non-existence of S_n and S_m with a backward dependence before a forward one:
remove S_n or S_m; add a definition between S_n and S_m; delete the path between S_n and S_m
4. Non-existence of (S_i δ S_j): remove S_i*; add a definition, destroying the dependence*;
delete the path between S_i and S_j*
Disabling conditions:
1. Existence of 2 non-adjacent loops: add a loop
2. Two DO loops do not have identical headers: modify a header
3. Existence of S_n and S_m with a backward dependence before a forward one:
insert S_n or S_m; delete a definition between S_n and S_m; create a path between S_n and S_m
4. Existence of (S_i δ S_j): delete a definition so the dependence holds*; create a path between S_i and S_j

Transformation: Loop Interchanging (INX)
Enabling conditions:
1. Existence of 2 nested DO loops: add a loop
2. Non-existence of S_n, S_m with a (<,>) dependence: remove S_n* or S_m*;
add a definition between S_n and S_m; delete the path between S_n and S_m
3. Loop headers are invariant: modify a header
Disabling conditions:
1. Non-existence of 2 nested DO loops: remove a loop
2. Existence of S_n, S_m with a (<,>) dependence: insert S_n or S_m;
remove a definition between S_n and S_m; create a path between S_n and S_m
3. Loop headers vary with respect to each other: modify a header

* denotes a condition that is not possible in correct specifications (i.e., ones that maintain semantic equivalence)
Table 2. Enabling and Disabling Conditions
example of an interaction is given in Figure 4, where Loop Fusion (FUS) enables Loop Interchange
(INX). The two inner loops on J are fused into one larger loop, which can then be interchanged.
Sometimes the interaction between two transformations is more complex in that a
transformation can both enable and disable a transformation. Invariant Code Motion (ICM) and
Loop Interchange (INX) are two such transformations, as shown in Figure 5. ICM enables INX and
also can disable INX. In Figure 5 (a), an example of ICM enabling INX is given and in Figure 5 (b)
an example of ICM disabling INX is shown.
For ease in proving the non-interaction, we use a formal notation of the Gospel specifications
that is directly derived from the specification language by using mathematical symbols in place of
the language-related words. A comparison of the two styles is exemplified by the language form
"no S_m" and its corresponding mathematical form.
Figure 4. Loop Fusion Enables Loop Interchanging
Figure 5. Enabling and Disabling Transformations: (a) ICM enables INX; (b) ICM disables INX
The following claim and proof illustrate the technique to prove non-existence of enabling and
disabling interactions between transformations. The claim is that loop interchange (INX) cannot
disable the application of constant propagation (CTP). The proof utilizes the disabling conditions
for CTP as given previously in Table 2.
Claim: INX does not disable CTP. (That is, loop interchange does not disable constant propagation.)
Proof: Assume that INX CTP. For INX to disable CTP, both INX and CTP must be applicable
before INX is applied.
For INX to be applicable, there must be two tightly nested loops, L 1 and L 2 where the loop
limits are invariant and there is no data dependence with a (<,>) direction vector.
For CTP to be applicable, there must exist an S_i that defines a constant and an S_j that uses
the constant value such that (S_i δ S_j), and there must be no S_l such that (S_l δ S_j).
Since CTP is applicable, INX must alter the state of the code to disable CTP. The three
disabling conditions for CTP given in Table 2, produce the following cases:
Case 1: Destroy S i which defines the constant
INX does not delete any statements, but does move a header, L 2 . S i defines a variable and a
loop header only defines the loop control variable. If the loop control variable and the variable
defined in S i were the same, then CTP is not applicable because S i does not define a constant
value. Therefore, INX does not destroy S_i, the statement defining the constant.
Case 2: The non-existence of S_j, or the removal of the dependence (S_i δ S_j)
INX does not delete any statements but does move a header, L_2. However, moving the header
to the outside of the loop would not destroy the relationship (S_i δ S_j), since the headers must
be invariant relative to each other in order for INX to be applicable. Therefore, INX does not destroy S_j.
Case 3: The creation of S_l such that (S_l δ S_j)
INX does not create or modify a statement. So there are three ways for INX to create the
condition: 1) INX could delete a definition S_i, but this is not a legal action for this transformation;
2) INX could introduce S_l, but INX does not create any statements, and although it does move a header, S_l
could not be the header because S_l defines a constant; 3) INX creates a path so that S_l reaches S_j,
but while S_j could be the header, the definition in S_l would have reached S_j prior to INX since
the headers must be invariant. Therefore, INX does not create S_l.
Thus, we show that INX does not disable CTP.
That is, loop interchange when applied will not destroy any opportunities for constant propagation.
By exploring examples of interactions and developing proofs for non-interaction, we derived
(by hand) an interaction table that displays the potential occurrence of interactions. Table 3 displays
interactions for eight transformations: Dead Code Elimination (DCE), Constant Propagation(CTP),
Copy Propagation (CPP), Constant Folding (CFO), Invariant Code Motion (ICM), Loop Unrolling
(LUR), Loop Fusion (FUS), and Loop Interchange (INX). Each entry in the table consists of two
elements separated by a slash (/). The first element indicates the enabling relationship between the
transformation labeling the row and the transformation labeling the column, and the second element
is the disabling relationship. A "-" indicates that the interaction does not occur, whereas an "E" or
"D" indicates that an enabling or disabling interaction occurs, respectively. As an example, the first
row indicates that DCE enables DCE and disables CTP. Notice the high degree of potential
interactions among the triples <FUS, INX, and LUR>, and <CTP, CFO, and LUR>.
4.1.3 Impact of the Interactions on Transformation Ordering
The disabling and enabling relationships between transformations can be used when
transformations are applied automatically or when transformations are applied interactively. When
transformations are applied automatically, as is the case for optimizing compilers, the interactions
can be used to order the application so as to apply as many transformations as possible. When
applying transformations in an interactive mode, knowledge about the interaction can help the user
determine which transformation to apply first. Using the interaction properties, two rules are used
to derive a particular ordering when the goal is to apply as many transformations as possible.
1. If transformation A can enable transformation B, then order A before B: <A, B>.
2. If transformation A can disable transformation B, then order A after B: <B, A>.
These rules cannot produce a definite ordering, as conflicts arise when:
1. A enables B and B enables A
2. A disables B and B disables A
3. A enables B and A disables B
Table 3. Theoretical Enabling and Disabling Interactions
In these cases, precise orderings cannot be determined from the properties. However, as shown in
the next section, experimentation can be performed using Genesis to determine if there is any value
in applying one transformation before the other transformation.
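As a toy illustration of the two ordering rules and the conflict cases (Python; the relation encoding is assumed, and Table 3 itself was derived by hand), a pairwise ordering can be computed or flagged as undetermined:

```python
def order_pair(a, b, enables, disables):
    """Apply the two ordering rules to transformations a and b.

    enables/disables are sets of (X, Y) pairs meaning X enables/disables Y.
    Returns the pair in application order, or None when the rules give no
    constraint or conflicting constraints. Illustrative encoding only.
    """
    before = set()  # orderings the rules request
    if (a, b) in enables:  before.add((a, b))   # rule 1: enabler first
    if (b, a) in enables:  before.add((b, a))
    if (a, b) in disables: before.add((b, a))   # rule 2: disabler last
    if (b, a) in disables: before.add((a, b))
    if len(before) == 1:
        return before.pop()
    return None

enables  = {("CTP", "CFO")}
disables = {("FUS", "LUR")}
assert order_pair("CTP", "CFO", enables, disables) == ("CTP", "CFO")
assert order_pair("LUR", "FUS", enables, disables) == ("LUR", "FUS")
# Mutual enabling (conflict case 1) yields no definite order:
assert order_pair("CTP", "CFO", {("CTP", "CFO"), ("CFO", "CTP")}, set()) is None
```

The `None` result corresponds exactly to the three conflict cases above, where experimentation with Genesis is needed to decide an ordering.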
As an example of using the orderings, consider a scenario where the transformer designer
decides that LUR is an extremely beneficial transformation for the target architecture. The
transformer designer could benefit from two pieces of information: 1) the transformations that
enable LUR, and 2) the transformations that disable LUR. As can be seen in Table 3, CTP, CFO,
and LUR all enable LUR. These interactions indicate that CTP and CFO should be applied prior to
LUR for this architecture. Additionally, one could infer from the table that since CTP enables CFO
and CFO enables CTP, these two transformations should be applied repeatedly before LUR. Of
course, there may be other factors to consider when applying loop unrolling. In this paper, we focus
on only one, namely transformation interactions. Other factors may include the impact that the
unrolled loop has on the cache. When other factors are important in the application of
transformations, these factors could be embedded in Genesis experiments (e.g., by adding measures
of cache performance).
Table
3 also displays the interactions that disable LUR. As FUS is the only transformation
that disables LUR, a decision must be made about the importance of applying FUS on the target
architecture. If LUR is more important, then either FUS should not be applied at all or only at the
end of the transformation process.
The information about interactions could also be used in the development of a transformation
guidance system that informs the user when a transformation has the potential for disabling another
transformation and also informs the user when a transformation has the potential for enabling
another transformation. The interactions among the transformations can also be used to determine
some pairwise orderings of transformations. For instance, Table 3 indicates that when applying CPP
and CTP, CPP should be applied first. Other such information can be gleaned from this table.
4.2 Genesis: An Automatic Transformer Generator Tool
Another use of the framework is the construction of a transformer tool that automatically
produces transformation code for the specified transformations. The Genesis tool analyzes a Gospel
specification and generates code to perform the appropriate pattern matching, check for the required
data dependences, and call the necessary primitive routines to apply the specified transformation [19].
Figure
6 presents a pictorial description of the design of Genesis. The value of Genesis is that it
greatly reduces the programmer's burden by automatically generating code rather than having the
programmer implement the optimizer by hand. In Figure 6, a code transformer is developed from a
generator and constructor.
The generator produces code for the specified transformations, utilizing pre-defined routines
in the transformer library, including routines to compute data and control dependencies. The
constructor packages all of the code produced by the generator, the library routines, and adds an
interface which prompts interaction with the user. The generator section of Genesis analyzes the
Gospel specifications using LEX and YACC, producing the data structures and code for each of the
three major sections of a Gospel specification. The generator first establishes the data structures for
the code elements in the specifications. Code is then generated to find elements of the required
format in the three address code. Code to verify the required data dependences is next generated.
Finally, code is generated for the action statements. The Genesis system is about 6,500 lines of C
code, which does not include the code to compute data dependencies. A high level representation
of the algorithm used in Genesis is given in Figure 7.
The generated code relies on a set of predefined routines found in the transformer library.
These routines are transformation independent and represent routines typically needed to perform
transformations. The library contains pattern matching routines, data dependence computation
algorithms, data dependence verification procedures, and code manipulation routines. The pattern
matching routines search for loops and statements. Once a possible pattern is found, the generated
code is called to verify such items as operands, opcodes, initial and final values of loop control
variables.
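The translation of a Code_Pattern restriction into verification code can be sketched as simple template expansion (Python; the emitted text imitates the generated checks shown later for CTP, and the helper name is invented):

```python
def gen_pattern_check(var, attribute, expected):
    """Emit C-like source text that verifies one Code_Pattern restriction,
    in the spirit of the checks generated for match_CTP. Illustrative only."""
    return (f"if (SetTable[{var}].{attribute}.kind != {expected})\n"
            f"    return (failure);")

# From a pattern such as: any Si : Si.opcode == assign and Si.opra == const
code = "\n".join([
    gen_pattern_check("Si", "opcode", "ASSGN"),
    gen_pattern_check("Si", "operand_a", "CONST"),
])
assert "SetTable[Si].opcode.kind != ASSGN" in code
assert code.count("return (failure);") == 2
```

Each restriction in the specification contributes one such guard, which the constructor then compiles together with the library routines.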
Figure 6. Overview of Genesis: the generator and the constructor combine the Gospel specifications
of transformations, the transformer library routines, and user options to produce the code
transforming system.
When a possible application point is found in the intermediate code, the data dependences
must be verified. Data dependence verification may include a check for the non-existence of a
particular data dependence, a search for all dependences, or a search for one dependence within a
loop or set. The generated code may simply be an "if" to ensure a dependence does not exist or may
be a more complex integration of tests and loops. For example, if all statements dependent on S i
need to be examined, then code is generated to collect the statements. The required direction vectors
associated with each dependence in the specification are matched against the direction vectors of
the dependences that exist in the source program.
If the dependences are verified then the action is executed. Routines consisting of the actions
specified in the ACTION section of the specification are generated for the appropriate code
elements.
The constructor compiles routines from the transformer library and the generated code to
produce the transformer for the set of transformations specified. The constructor also generates an
interface to execute the various transformations. The interface to the transformer reads the source
code, generates the intermediate code and computes the data dependences. The interface also
queries the user for interactive options. This interactive capability permits the user to execute any
GENESIS()
   For each transformation t_i in the transformation list
      Read(Gospel specification for transformation t_i)
      Parse(specification)                        {analyze the Gospel specification using LEX and YACC}
      gen_code_setup(declarations)                {gen code to set up data structures}
      gen_code_match(code_patterns)               {gen code to search for patterns}
      gen_code_depend_verify(data_dependences)    {gen code to verify data dependences}
      gen_code_actions(actions)                   {gen code to perform primitive actions}
   End for
   gen_interface()                                {create the interface from a template}
   Construct_optimizer(generated_code, library_routines)

   {the constructed transformer then executes:}
   Read(source_code)
   Convert(source, intermediate representation)
   While (user_interaction_desired)
      Select_transformations
      Select_application_points
      Compute_data_dependences
      Perform_optimization(user's_direction)
   EndWhile
Figure 7. The Genesis Algorithm
number of transformations in any order. The user may elect to perform a transformation at one
application point (possibly overriding dependence constraints) or at all possible points in the
program.
4.2.1. Prototype Implementation
In order to test the viability and robustness of this approach, we implemented a prototype for
Genesis and produced a number of transformers. For ease of experimentation, our prototype
produces a transformer for every transformation specified.
For any transformation specified, the generator produces four procedures tailored to a
transformation: set_up_Trans, match_Trans, pre_Trans, and act_Trans. These procedures
correspond to the DECLARATION, Code_Pattern, Depend and ACTION sections in the
specifications.
In our implementation, a transformer consists of a driver that calls the routines that have been
generated specifically for that transformation. Code for the driver is given in Figure 8. The format
of the driver is the same for any transformer generated. The driver calls procedures in the generated
call interface for the specific transformation (set_up_Trans, match_Trans, pre_Trans, and
act_Trans). The call interface in turn calls the generated procedures that implement the
transformation (the generated transformation specific code). For CTP, as given in Figure 9, the
set_up_Trans procedure consists of a single call to set_up_CTP. The driver requires a successful
pattern match from match_CTP and pre_CTP in order to continue. Thus, the match_Trans and
pre_Trans of the call interface procedures return a boolean value.
Done := False;
match_success := match();                /* Match the code patterns */
IF (match_success) THEN DO
    pre_success := pre_condition();      /* Verify the dependences */
    IF (pre_success) THEN DO
        action();                        /* Perform the actions of the optimization */
        Done := True;
    END
END
Figure 8. The Driver Algorithm
Any generated set_up procedure consists of code that initializes data structures for each
element specified using any or all in the PRECONDITION section. A type table data structure,
TypeTable, contains identifying information about each statement or loop variable specified in the
DECLARATION section. The TypeTable holds the identifier string, creates an entry for a
quantifier that may be used with this identifier in the PRECONDITION section, and maintains the type of
the identifier (e.g., statement, loop, adjacent loop or nested loop). For type Statement, an entry is
initialized with the type and corresponding identifier. If a loop-typed variable is specified,
additional flags for nested or adjacent loops are set in the type table entry. These entries are filled
in as the information relevant to the element is found when the transformations are performed. For
each statement in the DECLARATION section, a call to TypeTable_Insert is generated with the
identifier and the type of the identifier and placed in the set_up procedure. During execution of
CTP, shown in Figure 9, a type table entry is initialized with type "Statement" and identifier S i
when the transformer executes procedure set_up_CTP.
After the set_up_CTP procedure terminates, the driver indirectly initiates an exhaustive
search for the statement recorded in the type table by calling match_CTP. If the source program's
statement does not match, then the transformer driver re-starts the search for a new statement. The
match procedure is generated from the statements in the Code_Pattern section of the Gospel
specification. For each quantified statement in the Code_Pattern section, a call to SetTable_Insert
is made with the identifier, type of identifier, and quantifier. SetTable_Insert searches for the
requested type and initializes the Set_Table data structure with the appropriate attributes for the
type (e.g., for a statement, the opcode and operands are set). Next the restrictions in the
Code_Pattern section are directly translated into conditions of IF statements to determine if the
requested restrictions are met. If the current quantifier is an "all", then a loop is generated to check
all of the objects found by Set_Table. In the CTP example in Figure 9, code is generated that
searches for an assignment statement with a constant on the right hand side.
The next routine is the pre procedure, which is generated from the statements in the Depend
section. For each quantified statement, a call to SetTable_Insert is generated (however, the pattern
matching will not be performed again at run-time.) For the CTP example, the pre_CTP procedure
inserts an element into the Set_Table structure for each dependence condition statement. S j is
inserted into Set_Table and the dependence library routine is called to find the first statement that
is flow dependent on S i ; if no statement is found then the condition fails. S l is also inserted into the
Set_Table and the dependence routine is called again. Each S l such that S l is flow dependent on S j
is examined to determine if the operand of S l causing the dependence is the same variable involved
in the dependence from S i to S j . If such an S l is found then the condition fails. Next, an assignment
statement is generated to assign the "hits" field of the Set_Table data structure with the result of the
requested dependence or membership procedure call. For example, by setting the "hits" field to a
result of a flow_dependence call, the hits field will contain either 1 (for the any quantifier) or many
(for the all quantifier) statement numbers that are flow dependent with the required direction vector.
Next, IF statements are directly generated from any relational conditions that exist in the
specification.
The last procedure to be called is the action procedure, which is generated from
the statements in the ACTION section of the Gospel specification. For each individual action, a call
to the primitive transformation is made with the required parameters (e.g., modify requires the
object being modified and the new value). If the Gospel forall construct is used, then a for loop is
generated and the calls to the primitive transformations are placed within the loop.

set_up_CTP() {
    TypeTable_Insert(Statement, Si);          /* set up type table entry for statement Si */
}

match_CTP() {
    SetTable_Insert(Statement, Si, any);      /* classify Si as a set of statements */
    if (SetTable[Si].opcode.kind != ASSGN) then
        return (failure);                     /* if Si's opcode is not ASSGN, fail */
    if (SetTable[Si].operand_a.kind != CONST) then
        return (failure);                     /* if Si's operand is not a constant, fail */
    return (success);                         /* match successful for Si */
}

pre_CTP() {
    SetTable_Insert(Statement, Sj, all);      /* classify Sj as a set of statements */
    SetTable_Insert(Statement, Sl, no);       /* classify Sl as a set of statements */
    SetTable[Sj].hits := flow_dep(Si, Sj);    /* find and assign statements flow dependent on Si */
    if (SetTable[Sj].hits == NULL) then
        return (failure);                     /* if no flow dependent Sj exists, try again */
    SetTable[Sl].hits := flow_dep(Sj, Sl);    /* find statements flow dependent on Sj */
    foreach (Sl in SetTable[Sl].hits) {       /* compare quad_numbers and operands involved in the dependences */
        if (the operand of Sl causing the dependence is the variable in (Si, Sj)) then
            return (failure);
    }
    return (success);
}

act_CTP() {                                   /* modify one of Sj's operands */
    if (Si.oprc == Sj.opra) then
        modify(Sj.opra, Si.oprc);
    else
        modify(Sj.oprb, Si.oprc);
    endif
}

Figure 9. The Generated Code for CTP

In the example in Figure 9, act_CTP simply modifies the operand collected in S_j. This modification occurs in either
the first or second modify statement depending on the operand that carries the dependence. Thus,
the first call to modify considers "operand a" of S j for replacement and the second call considers
"operand b" for replacement, effectively implementing the pattern matching needed for
determining the operand position of a dependence. The procedure act_CTP is called by the driver
only if match_CTP and pre_CTP have terminated successfully. For more implementation details,
the reader is referred to another paper.
5. Experimentation
Using our prototype implementation of Genesis, we performed experiments to demonstrate
that Genesis can be used to explore the properties of transformations including 1) the frequency of
applying transformations, and 2) the interactions that occur among the transformations.
Using Genesis, transformers were produced for ten of the twenty transformations specified, including
LUR and FUS. Experimentation was performed using programs found in the HOMPACK test suite
and in a numerical analysis test suite [2]. A short description and the Gospel specifications of these
transformations are given in Appendix B. HOMPACK consists of FORTRAN programs to solve
non-linear equations by the homotopy method. The numerical analysis test suite included programs
such as the Fast Fourier Transform and programs to solve non-linear equations using Newton's
method. A total of ten programs were used in the experimentation. The benchmark programs were
coded in Fortran, which was the language accepted by our front end. They ranged in size from 110
to 900 lines of intermediate code statements. The programs were numerical in nature and had a
mixture of loop structures, including nested, adjacent and single loops. Both traditional
optimizations and parallelizing transformations could be applied in the programs, as we were
interested in the interaction between these types of transformations. Longer programs would more
likely show more opportunities for transformations and thus more opportunities for interactions.
In order to verify Genesis' capability to find application points, four transformations were
specified in Gospel and run on the HOMPACK test suite. The number of application points for each
of the transformations was recorded and compared to the number of application points found by
Tiny. 20 The comparison revealed that Genesis found the same number of application points
that Tiny found. Furthermore, seven optimizations were specified in Gospel and optimizers were
generated by Genesis. The generated optimizers were compared to a hand-coded optimizer to
further verify Genesis' ability to find application points. Again, the optimizers generated by
Genesis found the same application points for optimizations.
In the test programs, CTP was the most frequently applicable transformation (often enabled)
while no application points for ICM were found. It should be noted that the intermediate code did
not include address calculations for array accesses, which may introduce opportunities for ICM.
CTP was also found to create opportunities to apply a number of other transformations, which is to
be expected. Of the total 97 application points for CTP, 13 of these enabled DCE, 5 enabled CFO
and 41 enabled LUR (assuming that constant bounds are needed to unroll the loop). CPP occurred
in only two programs and did not create opportunities for further transformation. These results are
shown in Table 4 where a "-" entry indicates that no interaction is theoretically possible and a
number gives the number of interactions that occurred. For example, the entry for INX/FUS
indicates that 5 enabling interactions were found and 4 disabling interactions were found in the 13
application points.
To investigate the ordering of transformations, we considered the transformations FUS, INX
and LUR which we showed in Section 4 to theoretically enable and disable one another. In one
program, FUS, INX, and LUR were all applicable and heavily interacted with one another by
creating and destroying opportunities for further transformations. For example, applying FUS
disabled INX and applying LUR disabled FUS. Different orderings produced different transformed
programs. The transformations also interacted when all three transformations were applied; when
applying only FUS and INX, one instance of FUS in the program destroyed an opportunity to apply
INX. However, when LUR was applied before FUS and INX, INX was not disabled. Thus, users
should be aware that applying a transformation at some point in the program may prevent another
transformation from being applicable. To further complicate the process of determining the most
beneficial ordering, different parts of the program responded differently to the orderings. In one
segment of the program, INX disabled FUS, while in another segment INX enabled FUS. Thus,
there is not a "right" order of application. The context of the application point is needed. Using the
theoretical results of interactions from the formal specifications of transformations as a guide, the
user may need multiple passes to discover the series of transformations that would be most fruitful
for a given system.
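The kind of multi-pass exploration suggested here can be mimicked with a small search. The following Python sketch uses an invented enable/disable table (illustrative only, not the data measured in these experiments) to count how many transformations each ordering allows to apply:

```python
from itertools import permutations

# A toy search over phase orderings. The enable/disable table is invented for
# illustration; it is NOT the data measured in the experiments above.
ENABLES  = {'FUS': set(), 'INX': {'FUS'}, 'LUR': {'INX'}}
DISABLES = {'FUS': {'INX'}, 'INX': set(), 'LUR': {'FUS'}}

def applications(order, initially=('FUS', 'INX', 'LUR')):
    """Count how many transformations actually apply in the given order."""
    applicable = set(initially)
    count = 0
    for t in order:
        if t in applicable:
            count += 1
            applicable |= ENABLES[t]
            applicable -= DISABLES[t]
    return count

# Enumerate all orderings and keep one that permits the most applications.
best = max(permutations(['FUS', 'INX', 'LUR']), key=applications)
```

Under this invented table, applying LUR first keeps all three transformations applicable (`applications(('LUR', 'INX', 'FUS')) == 3`), while applying FUS first disables INX, echoing the context-dependence described above.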
The framework could also be used to explore the value of combining transformations.
        Freq  DCE  CTP  CPP  CFO  ICM  LUR  FUS  INX
LUR      ...
FUS       11  -/5  -/0  -/1  1/0  0/6  ...

Table 4. Enabling and Disabling Interactions
Blocking is a transformation that combines Strip Mining and Interchange. 11 We performed a
preliminary experiment in which we applied various orders of Loop Interchange (INX), Loop
Unrolling (LUR) and Loop Fusion (FUS). In the experiments, LUR when followed by INX
produced more opportunities for transformations than other orders. Thus, after performing
experimentation to examine what happens when a series of transformations are applied, it might be
beneficial to combine certain transformations and apply them as a pair. In our example, we would
consider combining LUR and INX.
6. Concluding Remarks
The code improving transformation framework presented in this paper permits the uniform
specification of code improving transformations. The specifications developed can be used for
analysis and to automatically generate a transformer. The analysis of transformations enables the
examination of properties such as how transformations interact to determine if a transformation
creates or destroys conditions for another transformation. These relationships offer one approach
for determining an order in which to apply transformations to maximize their effects. The
implementation of the Gospel specifications permits the automatic generation of a transformer.
Such an automated method enables the user to experimentally investigate properties by rapidly
creating prototypes of transformers to test their feasibility on a particular machine. Genesis also
permits the user to specify new transformations and quickly implement them.
Future work in this research includes examining the possibility of automatically proving
interactions by expanding the specifications to a more detailed level. Such a transformation
interaction proving tool would enable the user to determine properties of the transformations. Also,
the design of a transformation guidance system prototype is being examined for its feasibility. This
type of system would aid the user in applying transformations by interactively providing
interaction information. The Gospel specifications are also being explored to determine if they can
easily be combined to create more useful transformations.
Acknowledgment
We are especially grateful to TOPLAS Associate Editor Jack Davidson for his insightful criticisms
and advice on earlier drafts of this paper. We also thank the anonymous referees for their helpful
comments and suggestions, which resulted in an improved presentation of the paper.
Appendix
A
PRECONDITION Grammar for the Gospel Prototype
Precon_list    → Quantifier Code_list : Mem_list Condition_list ; Precon_list | ε
Quantifier     → ANY | NO | ALL
Code_list      → StmtId StmtId_list
Mem_list       → Mem_list OR Mem_list | Mem_list AND Mem_list | Mem
Mem            → MEM | NO_MEM
Condition_list → NOT Condition_list | Condition_list AND Condition_list
                 | Condition_list Condition_list | Condition
Condition      → Type (StmtId, StmtId Dir_Vect)
Type           → FLOW_DEP | OUT_DEP | ANTI_DEP | CTRL_DEP
Dir_Vect       → ( Dir Dir_List ) | ε
Gospel Specification of Transformations
Bumping (BMP): Modify the loop iterations by bumping the index by a preset amount (e.g., 2).
DECLARATION
PRECONDITION
Code_Pattern
any L;
Depend
all S: flow_dep (L.Lcv, S, (any));
ACTION
add (S.Prev, ( -, 2, S.opr 1 , S.opr 1
modify (L.Initial, eval(L.Initial, +, 2));
modify (L.Final, eval(L.Final, +, 2));
Constant Folding (CFO): Replace mathematical expressions involving constants with their
equivalent value.
DECLARATION
PRECONDITION
Code_Pattern Find a constant expression
any const
const AND S i .opcode != assign;
checks
ACTION Fold the constants into an expression
modify
modify (S i .opcode, assign);
Copy Propagation (CPP): Replace the copy of a variable with the original.
DECLARATION
PRECONDITION
Code_Pattern find a copy statement
any S i
Depend all uses do not have other defs along the path
all
no
no
ACTION propagate and delete the copy
modify (operand (S j , pos), S i .opr 2 );
delete (S i );
Loop Circulation (CRC):Interchange perfectly nested loops (more than two)
DECLARATION
PRECONDITION
Code_Pattern Find Tightly nested loops
any (L 1 ,
Depend Ensure perfect nesting, no flow_dep with <,>
no
no
ACTION Interchange the loops
move
move
Common Sub-Expression Elimination (CSE): Replace duplicate expressions so that calculations
are performed only once.
DECLARATION
PRECONDITION
Code_Pattern Find binary operation
any S n
Depend Find common sub-expression
no
all
ACTION
add
modify (S n , (assign, S n .opr 1 , temp)
modify
Dead Code Elimination (DCE): Remove statements that define values for variables that are not
used.
DECLARATION
PRECONDITION
Code_Pattern find statement assigning variable, value or expression
any S i
Depend statement may not be used
no
ACTION delete the dead code
delete (S i );
Loop Fusion (FUS): Combine loops with the same headers.
DECLARATION
PRECONDITION
Code_Pattern find adjacent loops with equivalent Heads
any L 1 ,
Depend no dependence with backward direction first; no def reaching prior to loops
no
no
ACTION Fuse the loops
modify
modify
delete (L 1 .End);
delete (L 2 .Head);
Invariant Code Motion (ICM): Remove statements from within loops where the values computed
do not change.
DECLARATION
PRECONDITION
Code_Pattern any loop
any L;
Depend any statement without dependence within the loop
any S k : mem (S k , L) AND mem (S m , L),
ACTION move statement to within header
move (S k , L.Start.Prev);
Loop Unrolling (LUR): Duplicate the body of a loop.
DECLARATION
PRECONDITION
Code_Pattern any loop iterated at least once
any const AND type (L 1 .Final) == const
checks
ACTION unroll one iteration, update original loop's Initial
modify
modify (L 1 .Initial, eval(L 1 .Initial, +, 1));
delete (L 2 .End);
delete (L 2 .Head.Label);
Parallelization (PAR): Modify loop type for parallelization.
DECLARATION
PRECONDITION
Code_Pattern
any
Depend
no
ACTION
modify (L 1 .opcode, PAR);
Strip Mining (SMI): Modify loop to utilize vector architecture.
DECLARATION
PRECONDITION
Code_Pattern
any L: L.Final - L.Initial > SZ;
Depend
ACTION
copy (L.Head, L.Head.Prev, L 2 .Head);
modify (L 2 .Lcv, temp(T));
modify (L 2 .step, SZ);
modify (L 1 .Initial, T);
modify
copy (L.End, L.End, L 2 .End);
Loop Unswitching (UNS): Modify a loop that contains an IF to an IF that contains a loop.
DECLARATION
PRECONDITION
Code_Pattern
any L;
Depend
any
Find the Else
any S k : mem (S k , L) AND NOT ctrl_dep(S i , S k );
ACTION
copy (L.Head, S k , L 2 .Head);
copy (L.End, L.End.Prev.Prev, L 2 .End);
modify (L 2 .End, address(L 2 .Head));
move (L.Head, S i );
move (L.End, S k .Prev);
--R
"Generation of Efficient Interprocedural Analyzers with PAG,"
Faires, in Numerical Analysis
"Global Code Motion Global Value Numbering,"
"Automatic Generation of Peephole Transfor- mations,"
"A Flexible Architecture for Building Data Flow Analyzers,"
"Automatic Generation of Fast Optimizing Code Generators,"
"Automatic Generation of Machine Specific Code Transformer,"
GNU C Compiler Manual (V.
"Peep - An Architectural Description Driven Peephole Transformer,"
"Advanced Compiler Transformations for Supercom- puters,"
"A General Framework for Iteration-Reordering Loop Transformations,"
Stanford SUIF Compiler Group.
"Sharlit - A tool for building transformers,"
"SPARE: A Development Environment for Program Analysis Algorithms,"
"Techniques for Integrating Parallelizing Transformations and Compiler Based Scheduling Methods,"
"An Approach to Ordering Optimizing Transforma- tions,"
"Investigation of Properties of Code Transformations,"
"The Design and Implementation of Genesis,"
"Automatic Generation of Global Optimizers,"
Tiny: A Loop Restructuring Research Tool
in High Performance Compilers for Parallel Computing
Keywords: code-improving transformations; enabling and disabling of optimizations;
automatic generation of optimizers; specification of program optimizations;
parallelizing transformations
The T-Ruby Design System

Abstract: This paper describes the T-Ruby system for designing VLSI circuits, starting
from formal specifications in which they are described in terms of relational abstractions
of their behaviour. The design process involves correctness-preserving transformations
based on proved equivalences between relations, together with the addition of constraints.
A class of implementable relations is defined. The tool enables such relations to be
simulated or translated into a circuit description in VHDL. The design process is
illustrated by the derivation of a circuit for 2-dimensional convolution.

1 Introduction
This paper describes a computer-based system, known as T-Ruby [12], for designing
VLSI circuits starting from a high-level, mathematical specification of their behaviour:
A circuit is described by a binary relation between appropriate, possibly complex domains
of values, and simple relations can be composed into more complex ones by the
use of a variety of combining forms which are higher-order functions.
The basic relations and their combining forms generate an algebra, which defines
equivalences (which may take the form of equalities or conditional equalities) between
relational expressions. In terms of circuits, each such equivalence describes a general
correctness preserving transformation for a whole family of circuits of a particular form.
In the design process, these equivalences are exploited to transform a "specification" in
the form of one Ruby expression to an "implementation" in the form of another Ruby
expression, in a calculation-oriented style [4, 9, 13].
T-Ruby is based on a formalisation of Ruby, originally introduced by Jones and
Sheeran [3], as a language of functions and relations, which we refer to as the T-Ruby
language. The purpose of the paper is to demonstrate how such a general language
can be used to bridge the gap between a purely mathematical specification and the
implementable circuit. The design of a circuit for 2-dimensional convolution is used to
illustrate some of the features of the method, in particular that the step from a given
mathematical specification to the initial Ruby description is small and obvious, and
that the method allows us to derive generic circuits where the choice of details can be
postponed until the final actual synthesis.
The T-Ruby system enables the user to perform the desired transformations in the
course of a design, to simulate the behaviour of the resulting relation and to translate
the final Ruby description of the relation into a VHDL description of the corresponding
circuit for subsequent synthesis by a high-level synthesis tool. The transformational
style of design ensures the correctness of the final circuit with respect to the initial
specification, assuming that the equivalences used are correct. Proofs of correctness
are performed with the help of a separate theorem prover, which has a simple interface
to T-Ruby, so that proof burdens can be passed to the prover and proved equivalences
passed back for inclusion in T-Ruby's database.
The division of the system into the main T-Ruby-system, a theorem prover and a
VHDL translator has followed a "divide and conquer" philosophy. Theorem proving
can be very tedious and often needs specialists. In our system the designer can use the
proved transformation rules in the computationally relatively cheap T-Ruby system,
leaving proofs of specific rules and conditions to the theorem prover. When a certain
level of concretisation is reached, efficient tools already exist to synthesise circuits.
Therefore we have chosen to translate our relational descriptions into VHDL.
2 Ruby
The work described in this paper is based on the so-called Pure Ruby subset of Ruby,
as introduced by Rossen [10]. This makes use of the observation that a very large
class of the relations which are useful for describing VLSI circuits can be expressed
in terms of four basic elements: two relations and two combining forms. These are
usually defined in terms of synchronous streams of data as shown in Figure 1.

Figure 1: The basic elements of Pure Ruby

In the
figure, the type sig(T ) is the type of streams of values of type T , usually represented
as a function of type Z → T , where we identify Z with the time. The notation aRb
means that a is related to b by R, and is synonymous with (a, b) ∈ R.
ff - fi, whose characteristic function is f (of type ff such that (f a b) is
true R. The type of spread f is then sig(ff) - sig(fi), the type of relations
between streams of type ff and streams of type fi. For notational convenience, and to
stress the idea that it describes the lifting to streams of a pointwise relation of type
ff - fi, this type will be denoted ff
sig
Thus spread f , for suitable f , describes (any) synchronously clocked combinational
circuit, while the relation D - the so-called delay element - describes the basic sequential
circuit. F ; G (the backward relational composition of F with G) describes the serial
composition of the circuit described by F with that described by G . If F is of type
ff
sig
- fl and G is of type fl
sig
fi, then this is of type ff
sig
- fi. Finally, [F ; G ] (the relational
product of F and G) describes the parallel composition of F and G . For F of type
sig
sig
this is of type (ff 1 \Theta ff 2 ) sig
The types of
the relations describe the types of the signals passing through the interface between
the circuit and its environment. However, it is important to note that the relational
description does not specify the direction in which data passes through the interface.
"Input" and "output" can be mixed in both the domain and the range.
Figure 2: Graphical interpretations of: (a) a spread relation R of type α ∼ β;
(b) the delay element D; (c) the serial composition F ; G ; (d) the parallel
composition [F , G ].
A feature of Ruby is that relations and combinators not only have an interpretation
in terms of circuit elements, but also have a natural graphical interpretation,
corresponding to an abstract floorplan for the circuits which they describe. The conventional
graphical interpretation of spread (or, in fact, of any other circuit whose
internal details we do not wish to show) is as a labelled rectangular box. The components
of the domain and range are drawn as wire stubs, whose number reflects the
types of the relations in an obvious manner: a simple type gives a single stub, a pair
type two and so on. The components of the domain are drawn up the left hand side
and the components of the range up the right.
The conventional graphical interpretation for D is as a D-shaped figure, with the
domain up the flat side and the range up the rounded side, while F ; G is drawn with
the range of F "plugged into" the domain of G , and [F with the two circuits F
and G in parallel (unconnected). These conventions are illustrated in Figure 2. For
further details, see [3].
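For intuition, the four elements can also be prototyped over finite streams. The following Python sketch (an illustrative model, not part of the T-Ruby tool) represents streams as tuples indexed by time and encodes each element as a predicate deciding whether a (domain, range) pair of streams is related:

```python
# An illustrative finite-stream model of the four Pure Ruby elements (not part
# of the T-Ruby tool): streams are tuples indexed by time, and each element is
# encoded as a predicate deciding whether a (domain, range) pair is related.

def spread(f):
    """Lift a pointwise relation with characteristic function f to streams."""
    return lambda a, b: len(a) == len(b) and all(f(x, y) for x, y in zip(a, b))

def delay(init):
    """D: the range stream is the domain stream delayed by one step.
    (A finite model needs a start value; init is our added assumption.)"""
    return lambda a, b: len(a) == len(b) and all(
        b[t] == (a[t - 1] if t > 0 else init) for t in range(len(a)))

def serial(F, G):
    """F ; G: some intermediate stream must be related to both sides."""
    return lambda a, c, witnesses: any(F(a, b) and G(b, c) for b in witnesses)

def parallel(F, G):
    """[F, G]: componentwise relation on pairs of streams."""
    return lambda ab, cd: F(ab[0], cd[0]) and G(ab[1], cd[1])

plus = spread(lambda xy, z: z == xy[0] + xy[1])   # the '+' circuit
ident = spread(lambda x, y: x == y)               # the identity relation
```

Here plus relates a stream of integer pairs to the stream of their sums, so `plus(((1, 2), (3, 4)), (3, 7))` holds, and `delay(0)((1, 2, 3), (0, 1, 2))` holds.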
3 The T-Ruby Language
In T-Ruby all circuits and combinators are defined in terms of the four Pure Ruby
elements using a syntax in the style of the typed lambda calculus. Definitions of
some circuits and combinators with their types are given in Figure 3. α, β and so on
denote type variables and can thus stand for any type. The first five definitions are
of non-parameterised stream relations, which correspond to circuits. +, defined using
the spread element applied to a function which evaluates to true when z equals the
sum of x and y , pointwise relates two integers to their sum. ι is the (polymorphic)
identity relation, dub pointwise relates a value to a pair of copies of that value and
reorg pointwise relates two ways of grouping three values into pairs. These all describe
combinational circuits; all except + just describe patterns of wiring, and are known as
wiring relations. The fifth, SUMspec, describes a simple sequential circuit: an adding
machine with an accumulator register.
The remaining definitions are examples of combinators, which always have one
or more parameters, typically describing the circuits to be combined. Applying a
combinator to suitable arguments gives a circuit. Thus (Fst R) is the circuit described
by [R, ι], R l S (where the combinator l is written as an infix operator) is the circuit
where R is below S and the second component in the domain of R is connected to the
first component of the range of S . In the definition of l (and elsewhere), R⁻¹ denotes
the inverse relation to R. The graphical interpretations of Fst and l are shown in
Figure 4.
The dialect of Ruby used in the T-Ruby system is essentially that given by Rossen
in [8]. This differs from the standard version of Ruby given by Jones and Sheeran in [3]
in that "repetitive" combinators and wiring relations are parameterised in the number
of repetitions. This is reflected in the type system which includes dependent product
types [5], a generalisation of normal function types. This enables us explicitly to express
the size of repetitive structures in the type system. For example, the combinator map
(which "maps" a relation over all elements in a list of streams) has the polymorphic
dependent type:

map : Π n : int . (α ∼sig β) → (nlist[n]α ∼sig nlist[n]β)

where nlist[n]T is the type of lists of exactly n elements of type T . Thus map is a
function which takes an integer, n, and a relation of type α ∼sig β as arguments and
returns a relation whose type, nlist[n]α ∼sig nlist[n]β, is dependent on n, the so-called
Π-bound variable. A full description of the T-Ruby type system can be found in [11].

Figure 3: Examples of circuit and combinator definitions in T-Ruby.

Figure 4: Graphical interpretation of some combinators: Fst (R), R l S, map 3 R,
tri 3 R, colf 3 R and rdrf 3 R.
The relation apr n , used in the definition of map, pointwise relates an n-list of values
and a single value to the (n + 1)-list where the single value is appended "on the right"
of the n-list 1 .
The combinator mapf is similar to map but the second parameter is a function
from integers to relations, so that the relation used can depend on its position in the
structure. tri creates a triangular circuit structure, colf a column structure where each
relation is parametrised in its position in the column. Finally rdrf , called "reduce
right", is a kind of column structure. It has its name from functional programming
and will, if for example used with the relation + as argument, calculate the sum of a
list of integers. The graphical interpretations of some of these repetitive combinators
are shown in Figure 4.
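As a point of reference, the "reduce right" behaviour can be sketched functionally on plain lists (an analogy only, since the Ruby combinator relates streams and is defined point-free):

```python
# A functional reading of rdrf ('reduce right') on plain lists - an analogy
# only, since the Ruby combinator relates streams and is defined point-free.

def rdrf(op, xs, init):
    """Fold op over xs from the right: rdrf(op, [x1, x2], e) = op(x1, op(x2, e))."""
    acc = init
    for x in reversed(xs):
        acc = op(x, acc)
    return acc
```

With + as the component operation, `rdrf(lambda x, acc: x + acc, [1, 2, 3, 4], 0)` computes 10, the sum of the list.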
Note that the definitions are all given in a point-free notation, reflecting the fact
that they are all expressed in terms of the elements of Pure Ruby. It is easy to show
that they are equivalent to the expected definitions using data values.
However, defining circuits in terms of Pure Ruby elements offers several advantages: it
greatly simplifies the definition and use of general rewrite rules; it simplifies reasoning
about circuits in a theorem prover; and it eases the task of translating the language
into a more traditional VLSI specification language such as VHDL.
4 The Transformational Phase of T-Ruby Design
The design process in T-Ruby involves three main activities, reflecting the overall design
of the system: (1) Transformation, (2) Proof and (3) Translation to VHDL. In
this section we consider the first phase, which involves transforming an initial specification
by rewriting, possibly with the addition of typing or timing constraints, so as
to approach an implementable design described as a Ruby relation.
1 Note that the size argument n here, as elsewhere, is written as a subscript to improve readability.
4.1 Rewriting
Rewriting is an essential feature of the calculational style of design which is used in
Ruby. The T-Ruby system allows the user to rewrite Ruby expressions according to
pre-defined rewrite rules. Rewriting takes place in an interactive manner directed by
the user, using basic rewrite functions, known as tactics, which can be combined by
the use of higher order functions known as tacticals. This style of system is often
called a transformation system to distinguish it from a conventional rewrite system.
T-Ruby is implemented in the functional programming language Standard ML (SML),
which offers an interactive user environment, and the tactics and tacticals are all SML
functions, applied in this environment.
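The tactic/tactical style can be illustrated in a few lines. The following Python sketch uses hypothetical names (T-Ruby's actual tactics are SML functions and are not shown in this excerpt): a tactic maps a term to a rewritten term, raising an exception when it does not apply, and tacticals are higher-order functions combining tactics:

```python
# A sketch of the tactic/tactical idea (hypothetical names, not T-Ruby's SML
# API): a tactic maps a term to a rewritten term, raising Failed when it does
# not apply; tacticals are higher-order functions that combine tactics.

class Failed(Exception):
    pass

def then_(t1, t2):
    """Apply t1 and then t2 to its result; fails if either fails."""
    return lambda term: t2(t1(term))

def orelse_(t1, t2):
    """Try t1; if it fails, fall back to t2 on the original term."""
    def tac(term):
        try:
            return t1(term)
        except Failed:
            return t2(term)
    return tac

def repeat(t):
    """Apply t until it fails, returning the last successful result."""
    def tac(term):
        while True:
            try:
                term = t(term)
            except Failed:
                return term
    return tac

# A toy tactic standing in for a single rewrite step:
def below3(n):
    if n < 3:
        return n + 1
    raise Failed()
```

For example, `repeat(below3)(0)` evaluates to 3: the toy tactic is applied until it fails.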
In the T-Ruby system, a rewrite rule is an expression with the form of an equality
or an implication between two equalities, with explicit, typed universal quantification
over term variables and in most cases implicit universal quantification over types via
the use of type variables. Apart from this there are no restrictions on the forms of the
rules which may be used. In practice, however, the commonly used rules are equalities
between relational expressions, corresponding to equivalences between circuits,
which can be used to manipulate a circuit description in Ruby to another, equivalent
form. Rules for manipulating integer or Boolean expressions could, of course, also be
introduced, but most such manipulations are performed automatically by a built-in
expression simplifier based on traditional rewriting to a normal form.
Some examples of rules can be seen in Figure 5. The first rules express simple facts
about the combinators, such as the commutativity of Fst and Snd (fstsndcomm), the
fact that the inverse of a serial composition is the backward composition of the inverses
(inversecomp), and the distributivity of Fst over serial composition (fstcompdist). The
fourth rule, maptricomm, is an example of a conditional rule: the precondition that R
and S commute over serial composition must be fulfilled in order for tri n R and map n S
to commute. Similarly, forkmap states that if R is a functional relation then a single
copy on the domain side of an n-way fork is equivalent to n copies on the range side
of the fork. Finally, rules such as retimecol, are used in Ruby synthesis to express
timing features, such as the input-output equivalence of a circuit to a systolic version
of the same circuit. Note that since all these rules contain universal quantifications
over relations of particular types, they essentially express general properties of whole
families of circuits.
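The flavour of a directed rule can be shown with a minimal executable sketch (Python; the tuple-based term encoding is an invention of this sketch): fstsndcomm states that Fst R ; Snd S is equivalent to Snd S ; Fst R for all R and S.

```python
# Sketch: one directed rewrite rule over a tiny term encoding, where
# ('seq', a, b) is serial composition a ; b, ('fst', r) is Fst r and
# ('snd', r) is Snd r.
# fstsndcomm:  Fst R ; Snd S  =  Snd S ; Fst R   (for all R and S)

def fstsndcomm(term):
    if (isinstance(term, tuple) and term[0] == 'seq'
            and term[1][0] == 'fst' and term[2][0] == 'snd'):
        r, s = term[1][1], term[2][1]
        return ('seq', ('snd', s), ('fst', r))
    return None  # rule not applicable at the root of this term

t = ('seq', ('fst', 'R'), ('snd', 'S'))
assert fstsndcomm(t) == ('seq', ('snd', 'S'), ('fst', 'R'))
```

Because R and S are left as variables, the one function covers the whole family of circuit equivalences the rule describes.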
In the T-Ruby system, the directed rules used for rewriting come from three sources.
They may be explicit rewrite rule definitions, implicit definitions derived from circuit or
combinator definitions (which permit the named circuit or combinator to be replaced
by its definition or vice-versa), or lemmata derived from previous rewrite processes
which established the equality of two expressions, say t and t′.
In T-Ruby, the correctness of the explicit rules is proved by the use of a tool [7]
based on the Isabelle theorem prover [6], using an axiomatisation of Ruby within ZF
set theory. To make life easier for the user, conjectured rewrite rules can, however,
be entered without having been proved. When rewriting is finished, all such unproved
rewrite rules are printed out. Together with any instantiated conditions from the
conditional rules, they form a proof obligation which the user must transfer to the
theorem prover, in order to ensure the soundness of the rewriting process.
Figure 5: Rewrite rules in T-Ruby.
4.2 Constraints
The transformation process in the T-Ruby system primarily involves rewriting expressions
as described above. However, rewriting can only produce relations which are
exactly equivalent to the original, abstract specification. These relations are often too
large and have to be restricted to obtain an implementable circuit. As a trivial example, the relation + described above is defined as being of type (int × int) sig→ int. For
implementation purposes we want to restrict the integers to values representable by a
finite number of bits. From a mathematical point of view this means restricting our
relation to the subtype given by:

(int_n × int_n) sig→ int_{n+1}

where int_n is the subtype of integers representable by n bits. In T-Ruby we describe this
subtyping by adding relational constraints [13] to the expression.

Figure 6: Adding constraints to a relation.

The initial specification of + is depicted in Figure 6(a). We narrow the type of + by adding relational
constraints to the domain and the range, in this case instantiations of id_n, the identity relation for integers representable by n bits. This can be defined by:

id_n = Bits_n ; Bits_n⁻¹

where Bits_n relates an integer to a list of n bits which represent it. The constrained relation is:

[id_n , id_n] ; + ; id_{n+1}

as shown in Figure 6(b). The definition of id_n can then be expanded, giving:

[(Bits_n ; Bits_n⁻¹) , (Bits_n ; Bits_n⁻¹)] ; + ; (Bits_{n+1} ; Bits_{n+1}⁻¹)

as shown in Figure 6(c), and the relations Bits_n can be manipulated into the original relation by using the general rewrite rules.
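The intended behaviour of id_n can be checked on a small executable model (a Python sketch treating relations as sets of pairs; an unsigned n-bit representation is assumed for simplicity, whereas the actual Bits_n relation is whatever the Ruby library defines):

```python
# Sketch: Bits_n as a relation between integers and n-bit lists, and
# id_n = Bits_n ; Bits_n^{-1} acting as the identity on representable ints.

def bits(n):
    """Relation {(k, n-bit list of k)} for 0 <= k < 2**n (unsigned sketch)."""
    return {(k, tuple((k >> i) & 1 for i in range(n))) for k in range(2 ** n)}

def inverse(rel):
    return {(b, a) for (a, b) in rel}

def compose(r1, r2):
    """Serial composition r1 ; r2."""
    return {(a, c) for (a, b1) in r1 for (b2, c) in r2 if b1 == b2}

n = 4
id_n = compose(bits(n), inverse(bits(n)))
# id_n relates each representable integer only to itself:
assert id_n == {(k, k) for k in range(2 ** n)}
```

Composing Bits_n with its inverse collapses to the identity precisely because each representable integer has a unique n-bit representation.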
Another style of constraint is to add delay elements, D, to the domain or range
of the relation. Since Ruby relations relate streams of data, where each element in
the stream corresponds to a specific time instant, this changes the timing properties
of the circuit. As a simple example, the relation + defined in Figure 3 describes a
purely combinational circuit. Adding n delay elements on the range side would give a
specification of an adder with a total delay of n time units. Now we can use the same
general relational rewrite rules to "push" the delay elements into the combinational
part, thus obtaining a description of an adder with the same external timing properties,
but with a different internal arrangement of the registers. This style of manipulation
is illustrated in more detail by the convolution example in the following section. Note
that the same relational framework is used to describe and manipulate both type and
timing constraints.
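The essence of such timing manipulations can be sketched by modelling streams as maps from time to value (illustrative only; Ruby's actual semantics is relational): a delay element commutes with any purely combinational, pointwise stage, which is why delays can be pushed through the combinational part without changing external timing.

```python
# Sketch: streams as dicts from time to value. D delays a stream by one
# time unit; pushing D through a pointwise (combinational) stage leaves
# the externally visible behaviour unchanged:  D ; f  ==  f ; D.

def delay(stream):
    return {t + 1: v for t, v in stream.items()}

def comb(f, stream):
    """A purely combinational stage applied pointwise."""
    return {t: f(v) for t, v in stream.items()}

a = {0: 3, 1: 5, 2: 7}
inc = lambda v: v + 1

# Delay on the input side vs. delay on the output side:
assert comb(inc, delay(a)) == delay(comb(inc, a)) == {1: 4, 2: 6, 3: 8}
```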
An interesting variation is to mix the two methods above. For example, by adding a
relational specification of a bit-serial to integer converter as a constraint on the domain
side of + and its inverse on the range side, we obtain a specification of a bit-serial adder
and can manipulate it to get an implementable circuit as above.
Finally, the specification can be constrained by instantiation of free type or term
variables. Free term variables are typically used to describe otherwise unspecified
circuit elements or to give the size of regular structures in a generic manner. By
instantiation we obtain a description of a more specialised circuit with a particular
circuit element or particular dimensions.
In general the transformation process starts from a relational specification, spec, of
a circuit, at some suitably high level of abstraction. spec is then rewritten by a number
of equality rewrites in order to reach a more implementable description. During the
rewrite process the relation can be narrowed by adding relational constraints. The
process can be illustrated by a series of transformations, in which rewriting steps alternate with the addition of constraints, the added constraints being denoted by primes. The original specification is changed accordingly from spec to spec′′…′, reflecting the addition of the constraints and ensuring
equality between impl and the final constrained specification. From a logical point
of view [15], the constraints can be regarded as the assumptions under which the
implementation fulfills the original specification:
constraints ⊢ (impl ⇔ spec)
4.3 An Example: 2-dimensional Convolution
As an example of the tranformation process, we present part of the design of a VLSI
circuit for 2-dimensional discrete convolution. The mathematical definition is that from
a matrix of weights K_ij (-r ≤ i, j ≤ +r), known as the convolution kernel, and a stream of values a, a new stream of values c should be evaluated, such that:

c(t) = Σ_{i=-r}^{+r} Σ_{j=-r}^{+r} K_ij · a(t + i·w + j)
The intuition behind this is that the stream a represents a sequence of rows of length
w , and that each value in c is a weighted sum over the corresponding value in the
a-stream and its "neighbours" out to distance ±r in two dimensions, using the weights
given by the matrix K. This is commonly used in image processing, where a is a stream
of pixel values scanned row-wise from a sequence of images, and K describes some kind
of smoothing or weighting function. Note that for each i , the summation over j is
equal to the 1-dimensional convolution of a with the i'th row of K with a time offset of w·i, where 1-dimensional convolution is defined by:

c_i(t) = Σ_{j=-r}^{+r} K_ij · a(t + j)
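The decomposition into 1-dimensional convolutions can be checked numerically (a Python sketch; it assumes the neighbour at row offset i and column offset j of the sample at time t is a(t + i·w + j), matching the row-wise scan described above):

```python
# Numerical sketch of the 2-D convolution definition and its decomposition
# into 1-D convolutions of the rows of K, offset by i*w in time.

r, w = 1, 8
K = {(i, j): (i + 2) * 10 + (j + 2)        # arbitrary kernel weights
     for i in range(-r, r + 1) for j in range(-r, r + 1)}
a = list(range(100))                        # row-scanned input stream

def conv2(t):
    return sum(K[i, j] * a[t + i * w + j]
               for i in range(-r, r + 1) for j in range(-r, r + 1))

def conv1(i, t):
    """1-D convolution of a with the i'th row of K."""
    return sum(K[i, j] * a[t + j] for j in range(-r, r + 1))

t = 50
# The summation over j is the 1-D convolution with a time offset of i*w:
assert conv2(t) == sum(conv1(i, t + i * w) for i in range(-r, r + 1))
```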
4.3.1 Formulating the problem in Ruby
The first step in the design process is to formulate the mathematical definitions in
Ruby. Following the style of design used for a correlator in [3], we now divide the
relation between a and c into a combinational part, which relates c-values at a given
time to a′-values at the same time (for convenience we let the summation run from 1 to 2r + 1, applying the substitution i_new = i + r + 1), and a temporal part which relates the a′ values at time t to the original a-values. The temporal part, the matrix a′, can be further split into parts which can easily be
specified directly in Ruby. First, for a given i, we find a relation which relates b_i to a (2r + 1)-list of a′ values:
1. An offset dependent on the position j, which in Ruby can be specified by stating that (a″; a′) are related by (tri_{2r+1} D).
2. A (2r + 1)-way fork, making a″ consist of (2r + 1) copies of a‴, specified by stating that (a‴; a″) are related by fork_{2r+1}.
3. A fixed offset, specified by a relation between b_i and a‴.
4. Assembling 1-3 we get a relation between b_i and the list a′.
Next we find a relation relating a to a (2r + 1)-list of b i 's:
1. An offset dependent on position i, specified by a relation between b′ and b.
2. A (2r + 1)-way fork, making b″ consist of (2r + 1) copies of b‴, specified by stating that (b‴; b″) are related by fork_{2r+1}.
3. Another fixed offset, specified by a relation between a and b‴.
4. Assembling 1-3 we get a relation between a and the list [b_{-r}, …, b_{+r}].
It is convenient to rewrite the two relations above (the two relations numbered 4) as follows:

(b_i ; a′) are related by (fork_{2r+1} ; butterfly_r D)    (5)
(a ; [b_{-r}, …, b_{+r}]) are related by (fork_{2r+1} ; butterfly_r (D^w))    (6)
where the combinator butterfly is defined in terms of the combinators app_{n+1,n} and irt_{n+1}.
The combinational part of the convolution relation is easily expressed in Ruby in terms of a combinator Q, of type int → int → ((int × int) sig→ int), such that (Q i j) relates (a; x) to (K_ij·a + x), which expresses the convolution kernel as a function of position within the matrix K. If we then define c_i(t) to be the inner summation over j, it is easy to demonstrate that, for all t and arbitrary x, ([a′_{-r}(t), …, a′_{+r}(t)]; x(t)) is related to c_i(t) by the Ruby relation (rdrf_{2r+1} (Q i)), where rdrf is defined in Figure 3.
Combining this with the temporal relations given in definitions 5 and 6 we find that the entire 2-dimensional convolution relation CR_2, which relates (a; x) and c for a given w, r, x and Q, can be expressed in terms of the one-dimensional convolution relation (CR_1 i), which relates (b_i; x) to c_i:

CR_1 i = Fst (fork_{2r+1} ; butterfly_r D) ; rdrf_{2r+1} (Q i) : (int × int) sig→ int
CR_2 = Fst (fork_{2r+1} ; butterfly_r (D^w)) ; rdrf_{2r+1} CR_1 : (int × int) sig→ int
CR_1 corresponds to the inner summation over j in the specification. The graphical interpretation of CR_2 is shown on the left in Figure 7, and the interpretation of (CR_1 i) on the right. The butterflies contain increasing numbers of delay
elements, D, above the mid-line and increasing numbers of "anti-delay" elements, D -1
below the mid-line. As follows from the definitions, the small butterflies use single delay
elements, corresponding to the time difference between consecutive elements in the data
stream, while the large butterflies use groups of w delay elements, corresponding to
the time difference between consecutive lines in the data stream.
To define these relations in T-Ruby, it is convenient to parameterise them, so that
they become combinators dependent on r , w and Q . The final definitions are:
conv1 r Q = Fst (fork_{2r+1} ; butterfly_r D) ; rdrf_{2r+1} Q : (int × int) sig→ int
conv2 r w Q = Fst (fork_{2r+1} ; butterfly_r (D^w)) ; rdrf_{2r+1} (λi. conv1 r (Q i)) : (int × int) sig→ int
With these definitions, the actual circuit for 2-dimensional convolution is described by
the relation (conv2 r w Q) for suitable values of r , w and Q .
Figure 7: Two dimensional convolution.
4.3.2 Transformation to an implementable relation
Unfortunately, the relation given above does not describe a physically implementable
circuit, if we assume (as we implicitly have done until now) that the inputs appear in
the domain of the relation (as x and a) and the outputs in the range (as c). This is
because of the "anti-delays", D -1 , in the butterflies. So instead of trying to implement
the relation (conv2 r w Q) as it stands, we implement a retimed version of it, formed
by adding a constraint on the domain side which delays all the input signals:
Fst (D^r ; (D^w)^r) ; (conv2 r w Q)
This will result in the anti-delays being cancelled out, as the delay elements in the
constraint are moved "inwards" into the original relation. The resulting circuit will, of
course, produce its outputs r + w·r time units later than the original circuit, but
this is the best we can achieve in the physical world we live in!
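The effect of this constraint can be sketched with simple delay bookkeeping (an illustrative model, not the relational derivation itself): each path through the nested butterflies contributes an offset between -(r + w·r) and +(r + w·r), and delaying every input by r + w·r makes all offsets non-negative.

```python
# Sketch: delay bookkeeping for retiming. Each path accumulates +1 per
# delay D and -1 per anti-delay D~; a path is physically implementable
# only if its accumulated delay is non-negative after retiming.

r, w = 2, 64

# Path offsets contributed by the two nested butterflies (single-unit
# delays for columns, w-unit delays for rows):
offsets = [i * w + j for i in range(-r, r + 1) for j in range(-r, r + 1)]
assert min(offsets) == -(r * w + r)          # the worst anti-delay path

# Adding the domain-side constraint Fst (D^r ; (D^w)^r) delays every
# input by r + w*r units, cancelling all anti-delays:
retimed = [o + (r + w * r) for o in offsets]
assert all(d >= 0 for d in retimed)
```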
From here on we use a series of rewrite rules to manipulate the relation into a
more obviously implementable form. The output from the T-Ruby system during this
derivation is shown, in an annotated and somewhat abbreviated form, in Figure 8. In
the concrete syntax produced by the T-Ruby prettyprinter, free variables are preceded
by a %-sign, repeated composition R^n is denoted by R-n, and relational inverse R⁻¹ by R~, while "x:t.b denotes a λ-expression with bound variable x of type t and body b. The derivation finishes with the relational expression shown in full at the end of Figure 8.
This is a generic description of a convolution circuit, expressed in terms of three free
variables: r and w , corresponding respectively to the kernel size for the convolution
and the line size for the 2-dimensional array of points to be convoluted, and Q , which
gives the kernel function.
To obtain a description of a particular concrete circuit, we can then use the
T-Ruby system's facilities for instantiating such free variables to particular values. For
{Using the definition of conv2.}
(Fst (D-%r;D-%w-%r));((Fst ((fork (2*%r+1));(butterfly %r D-%w)));
{Rule fstcompdist (used from right to left).}
(Fst ((D-%r;D-%w-%r);((fork (2*%r+1));(butterfly %r D-%w))));
{Use rule forkmap.}
(Fst (((fork (2*%r+1));(map (2*%r+1) (D-%r;D-%w-%r)));(butterfly %r D-%w)));
{Using the definition of butterfly.}
(Fst ((fork (2*%r+1));((map (2*%r+1) D-%r);(tri (2*%r+1) D-%w))));
{Rule maptricomm, and then rule fstcompdist.}
((Fst (fork (2*%r+1)));((Fst (tri (2*%r+1) D-%w));(Fst (map (2*%r+1) D-%r))));
(Fst (fork (2*%r+1)));((Fst (tri (2*%r+1) D-%w));
"k:int.(((Fst D-%r); "i:int.(conv1 %r (%Q i)) k))))
{Using the definition of conv1.}
(Fst (fork (2*%r+1)));((Fst (tri (2*%r+1) D-%w));
"k:int.(((Fst D-%r);((Fst ((fork (2*%r+1));(butterfly %r D)));
{Now use a similar procedure to the above to remove the remaining butterfly.}
(Fst (fork (2*%r+1)));((Fst (tri (2*%r+1) D-%w));
"k:int.((((Fst (fork (2*%r+1)));(Fst (tri (2*%r+1) D)));
{Using definition of Fst, then use [tri n D; …].}
(Fst (fork (2*%r+1)));((Fst (tri (2*%r+1) D-%w));
"k:int.((((Fst (fork (2*%r+1)));((Snd D~-(2*%r+1));
(Fst (fork (2*%r+1)));((Fst (tri (2*%r+1) D-%w));
"k:int.(((Fst (fork (2*%r+1)));((Snd D~-(2*%r+1));
{Add another constraint on the domain side: Snd ((D^w)^(2r+1)).}
(Snd D-%w-(2*%r+1));((Fst (fork (2*%r+1)));((Fst (tri (2*%r+1) D-%w));
"k:int.(((Fst (fork (2*%r+1)));((Snd D~-(2*%r+1));
{Rule fstsndcomm, and Fst R ; Snd S = [R, S].}
((Fst (fork (2*%r+1)));[(tri (2*%r+1) D-%w),D-%w-(2*%r+1)]);
"k:int.((((Fst (fork (2*%r+1)));(Snd D~-(2*%r+1)));
(Fst (fork (2*%r+1)));
(rdrf (2*%r+1) "k:int.(((Snd D-%w);((Fst (fork (2*%r+1)));((Snd D~-(2*%r+1));
(Fst (fork (2*%r+1)));
"k:int.((((Fst (fork (2*%r+1)));(Snd (D-%w;D~-(2*%r+1))));
{Using D-m ; D~-n = D-(m-n).}
(Fst (fork (2*%r+1)));
"k:int.((((Fst (fork (2*%r+1)));(Snd D-(%w -(2*%r+1))));
Figure 8: Derivation of the 2-dimensional convolution relation.
example, we might instantiate r to 2, w to 64 and Q to a function defined in terms of acc, so that the kernel element described by (Q i j) is a multiply-and-add circuit whose multiplication factor is the kernel weight for position (i; j), and which uses this factor as the weight in accumulating the weighted sum.
After suitable reduction of the integer expressions, this would give us the relational
description:
Fst (fork_5) ; rdrf_5 (λk. ((Fst (fork_5) ; Snd (D^59)) ; …))
with no free variables. The graphical interpretation of this final version of the circuit is
shown in Figure 9. As can be seen in the figure, the circuit is semi-systolic, with a latch (described by a delay element, D) associated with each combinational element, but with a global distribution of the input stream a to all of the combinational elements.

Figure 9: Semi-systolic version of two dimensional convolution for r = 2. The left-hand structure depicts the entire circuit. The basic building element shown on the right corresponds to the relation Snd D ; (Q k p) with Q instantiated as described in the text. The middle structure depicts Snd D^59; only 3 of the 59 delay elements in D^59 are shown. Arrows in the figure indicate the input/output partitioning determined by the causality analysis.
4.4 Selection and Extraction
The rewriting system of T-Ruby includes facilities for selection of subterms from the
target expression by matching against a pattern with free variables. This can be used
to restrict rewriting temporarily to a particular subterm, or, more importantly, for
extraction of part of the target expression for implementation. In the latter case, the
remainder of the target expression gives a context describing a set of implementation
conditions that must be fulfilled for the extracted part to work correctly.
Extraction is in many respects the converse of adding relational constraints to
the specification, and the context specifies the same sorts of requirement. Firstly, it
may give representation rules which must be obeyed at the interface to the extracted
subterm, and secondly (if the context contains delay elements, D), it may give timing
requirements for the implementation of the subterm.
For the trivial example of the adder above the extracted part will typically, after
some rewriting, be the circuit inside the dashed box on Figure 6(c). The implementation
conditions in this case express the fact that integers must be represented by n
bits, as specified by Bits n .
5 VLSI Implementation
The relational approach to describing VLSI circuits offers a greater degree of abstraction
than descriptions using functions alone, since the direction of data flow is not
specified. However, real circuits offer particular patterns of data flow, and this means
that the interpretation of a relation may in general be zero, one or many different circuits.
In the case of zero circuits, we say the relation is unimplementable. The widest class
of relations which are generally implementable is believed to be the causal relations,
as defined by Hutton [2]. These generalise functional relations in the sense that inputs
are not restricted to the domain nor outputs to the range.
In T-Ruby, causality analysis is performed at the end of the rewriting process, when
the user has extracted the part of the relation which is to be implemented. In most
cases, in fact, the context from which the relation is extracted is non-implementable:
for example, it may specify timing requirements which (if they could be implemented)
would correspond to foreseeing the future.
5.1 Causality analysis
More exactly, a relation is causal if the elements in each tuple of values in the relation
can be partitioned into two classes, such that the first class (the outputs) are functionally
determined by the second class (the inputs), and such that the same partitioning
and functional dependency are used for all tuples in the relation. For example, the
previously defined relation + is causal, in the sense that the three elements x, y and z of each tuple of values in the relation can be partitioned as described, in fact in three
different ways:
1. With x and y as inputs and z as output, so that the relation describes an adder .
2. With x and z as inputs and y as output, so that the relation describes a subtractor
.
3. With y and z as inputs and x as output, so that the relation describes another
subtractor .
Note that the relation +⁻¹ is also causal, although it is not functional. Essentially,
causality means that the relation can be viewed under the partitioning as a deterministic
function of its inputs.
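This notion of causality is easy to state operationally. The following Python sketch checks, for a relation given as an explicit finite set of tuples (the enumeration is for illustration only), whether a chosen partitioning into inputs and outputs is functional:

```python
# Sketch: causality check for a relation given as a set of value tuples.
# A partitioning (inputs -> outputs) is causal if the inputs functionally
# determine the outputs, uniformly over all tuples.

def is_causal(tuples, inputs, outputs):
    seen = {}
    for tup in tuples:
        key = tuple(tup[i] for i in inputs)
        val = tuple(tup[i] for i in outputs)
        if seen.setdefault(key, val) != val:
            return False
    return True

# The adder relation + as tuples (x, y, z) with z = x + y:
plus = [(x, y, x + y) for x in range(8) for y in range(8)]

assert is_causal(plus, inputs=(0, 1), outputs=(2,))   # adder
assert is_causal(plus, inputs=(0, 2), outputs=(1,))   # subtractor
assert is_causal(plus, inputs=(1, 2), outputs=(0,))   # another subtractor
# A single input does not determine the remaining two values:
assert not is_causal(plus, inputs=(0,), outputs=(1, 2))
```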
In T-Ruby, the relation to be analysed is first expanded, using the definitions of
its component relations, to a form where it is expressed entirely in terms of the four
elements of Pure Ruby and relational inverse. The expanded relation is then analysed
with a simple bottom-up analysis heuristic. For combinational elements described by
spread relations, causality is determined by analysing the body of the spread , which
must have the form of a body part which is:
ffl an equality with a single variable on the left-hand side,
ffl a conjunction of body parts, or
ffl a conditional choice between two body parts.
In each equality, the result of the analysis depends on the form of the right-hand side.
If this is a single variable, no conclusions are drawn, as the equality then just implies
a wire in the abstract floorplan. If the right-hand side is an expression, all values in
it are taken to be inputs, and the left-hand side is taken to be an output. In choices,
all values in the condition are taken to be inputs. If these rules result in conflicts, no
causal partitioning can be found. When there are several possible causal partitionings,
as in the case of +, on the other hand, the rules enable us to choose a unique one.
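The classification step can be sketched as follows (Python; the body representation and conflict handling are simplified inventions of this sketch):

```python
# Sketch of the bottom-up classification heuristic for the body of a
# spread relation, limited here to lists of equalities. An equality whose
# right-hand side is a lone variable is just a wire and yields no
# conclusions; otherwise the left-hand side becomes an output and every
# variable in the right-hand side an input.

def classify(body):
    """body: list of (lhs, rhs) where rhs is ('var', v) or ('expr', vars)."""
    inputs, outputs = set(), set()
    for lhs, (kind, operands) in body:
        if kind == 'var':
            continue                      # plain wire: no conclusions drawn
        if lhs in inputs or lhs in outputs or any(v in outputs for v in operands):
            return None                   # conflict: no causal partitioning
        outputs.add(lhs)
        inputs.update(operands)
    return inputs, outputs

# z = W*m + s classifies z as an output and m, s as inputs:
assert classify([('z', ('expr', ['m', 's']))]) == ({'m', 's'}, {'z'})
# Conflicting equalities admit no causal partitioning:
assert classify([('z', ('expr', ['m'])), ('m', ('expr', ['z']))]) is None
```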
For delay elements, D, values in the domain are inputs and those in the range
are outputs. Parallel composition preserves causality, and so in fact does inversion,
but serial compositions in general require further analysis, to determine whether the
input/output partitionings for the component relations are compatible with an implementable
(unidirectional) data flow between the components. Essentially, checks are
made as to whether two or more outputs are used to assign a new signal value to the
same wire, whether some wires are not assigned signal values at all or whether there
are loops containing purely combinational components. This additional analysis is exploited
in order to determine the network of the circuit in the form of a netlist with
named wires between active components. At present there is no backtracking, so if the
arbitrary choice of partitioning when there are several possibilities is the "wrong" one,
then it will not be possible to find a complete causal partitioning for the entire circuit.
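The netlist-level checks can be sketched like this (Python; the component representation is invented for the sketch):

```python
# Sketch: checks on a netlist before implementation. Each component is
# (kind, inputs, output) with kind 'comb' or 'dff'; we reject multiple
# drivers, undriven wires, and loops through purely combinational parts.

def check_netlist(components, primary_inputs):
    driven = list(primary_inputs)
    for _, _, out in components:
        if out in driven:
            return 'multiple drivers'
        driven.append(out)
    for _, ins, _ in components:
        for wire in ins:
            if wire not in driven:
                return 'undriven wire'
    # Combinational loop detection: search over comb-only edges.
    edges = {}
    for kind, ins, out in components:
        if kind == 'comb':
            for wire in ins:
                edges.setdefault(wire, []).append(out)
    def reachable(src, dst, seen):
        if src == dst:
            return True
        if src in seen:
            return False
        seen.add(src)
        return any(reachable(n, dst, seen) for n in edges.get(src, []))
    for kind, ins, out in components:
        if kind == 'comb' and any(reachable(out, wire, set()) for wire in ins):
            return 'combinational loop'
    return 'ok'

good = [('comb', ['a', 's1'], 'z'), ('dff', ['z'], 's1')]
bad  = [('comb', ['a', 's1'], 'z'), ('comb', ['z'], 's1')]
assert check_netlist(good, ['a']) == 'ok'
assert check_netlist(bad,  ['a']) == 'combinational loop'
```

Breaking the loop with a register (the `good` variant) is exactly the pattern of the semi-systolic convolution cell.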
As an example, let us consider the analysis of parts of the relation for 2-dimensional
convolution. The central element in this is the relation given by acc(p; k), which describes the combinational multiply-and-add circuit for kernel element (p; k). Using the definition of acc, and substituting the actual values of (p; k), the relation reduces to:

spread (λ((m; s); z). z = W·m + s)

where W is the weight for the kernel element in question.
The body of the spread has the form of an equality with a single left-hand side, and
thus the causal partitioning will make z an output and m and s both inputs. In this
case, the relation is functional from domain to range, but in general this need not be
so.
Since delay elements, D, can only have inputs on the domain side and outputs
on the range side, the serial composition (Snd D) ; acc(p; k) is compatible with this analysis of acc(p; k), as the range of the delay element corresponds to component s in the domain of acc(p; k). Further analysis proceeds in a similar manner, leading to
the final data flow pattern shown by arrows in Figure 9.
5.2 Translation to VHDL
Since causality analysis gives both the network of the circuit and the direction of data
flow along the individual wires between components, the actual translation to VHDL is
comparatively simple. Each translated "top level" Ruby relation is declared as a single
design unit, incorporating a single entity with a name specified by the user. In rough
terms, each combinational relation C which is not a wiring relation within the expanded
Ruby relation is translated into one or more possibly conditional signal assignments,
where the outputs of C are assigned new values based on the inputs. For example, the spread relation for the multiply-and-add element shown above gives rise to a single concurrent signal assignment of the form:

sig_z <= W * sig_m + sig_s;
where sig_z, sig_s and sig_m are the names of the VHDL signals corresponding to z, s and m respectively, and W is a constant equal to the value of the kernel element (p; k) for the circuit element in question. Since the operators available for use with operands of
integer, Boolean, bit and character types in Ruby are (with one simple exception:
logical implication) a subset of those available in VHDL, this direct style of translation
is problem-free. In a similar manner, any conditional (if-then-else) expressions in the
body of a spread are directly translated into conditional assignment statements, possibly
with extra signal assignments to evaluate a single signal giving the condition.
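The translation of one equality into a concurrent assignment can be sketched as a small string-generation function (Python; the naming scheme mirrors the sig_ prefix used in the example above, but the helper itself is an invention of this sketch):

```python
# Sketch: direct translation of a spread-body equality into a concurrent
# VHDL signal assignment string.

def assignment(output, operator_text, inputs):
    """Translate 'output = f(inputs)' into a concurrent VHDL assignment."""
    sig = lambda v: 'sig_' + v
    expr = operator_text.format(*[sig(v) for v in inputs])
    return f"{sig(output)} <= {expr};"

# The multiply-and-add body z = W*m + s becomes:
line = assignment('z', 'W * {0} + {1}', ['m', 's'])
assert line == 'sig_z <= W * sig_m + sig_s;'
```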
The VHDL types for the signals involved are derived from the Ruby types used
in the domain and range of C in an obvious way. Thus for the elementary types,
the Ruby type bit is translated to the VHDL type rubybit, bool to rubybool, int to
rubyint, and char to rubychar, where the VHDL definitions of rubybit, rubybool,
rubyint and rubychar are predefined in a package RUBYDEF, which is referred to by
all generated VHDL units. Composed types give rise to groups of signals, generated
by (possibly recursive) flattening of the Ruby type, such that a pair is flattened into
its two components, a list into its n components and so on until elementary types are
reached.
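The recursive flattening can be sketched as follows (Python; the type encoding and the naming of component signals are inventions of this sketch, since the paper does not specify how generated names are formed):

```python
# Sketch: recursive flattening of composed Ruby types into elementary
# VHDL signals. Types: 'int', 'bit', 'bool', 'char', ('pair', t1, t2),
# ('list', n, t).

ELEMENTARY = {'int': 'rubyint', 'bit': 'rubybit',
              'bool': 'rubybool', 'char': 'rubychar'}

def flatten(typ, name):
    if typ in ELEMENTARY:
        return [(name, ELEMENTARY[typ])]
    tag = typ[0]
    if tag == 'pair':
        return flatten(typ[1], name + '_1') + flatten(typ[2], name + '_2')
    if tag == 'list':
        _, n, elem = typ
        return [s for i in range(n) for s in flatten(elem, f'{name}_{i}')]
    raise ValueError(f'unknown type: {typ!r}')

assert flatten(('pair', 'int', ('list', 2, 'bit')), 'x') == [
    ('x_1', 'rubyint'), ('x_2_0', 'rubybit'), ('x_2_1', 'rubybit')]
```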
If the Ruby relation refers to elementary types other than these pre-defined ones, a
package declaration containing suitable type definitions is generated by the translator.
For example, if an enumerated type etyp is used, a definition of a VHDL enumerated
type rubyetyp with the same named elements is generated.
Free variables of relational type and all non-combinational relations in the Ruby
relation are translated into instantiations of one or more VHDL components. For
example, a Delay relation, D, of Ruby type t sig→ t, where t is a simple type, will be translated into an instantiation of the component dff_rubyt, where rubyt is the
type corresponding to t , as above. For composed types, such as pairs and lists,
two or more components, each of the appropriate simple type, are used. Standard
definitions of these components for all standard simple Ruby types are available in a
library. Other components (in particular those generated from free relational variables)
are assumed to be defined by the user.
The final result of translating the fully instantiated 2-dimensional convolution relation
into VHDL is shown in Figure 10. The figure does not show the entire VHDL
code (which of course is very repetitive owing to the regular nature of the circuit),
but illustrates the style. Signal identifiers starting with input and output correspond
-- Automatically generated code. Do not edit.
-- Compiled 950201, 11:58:28 from Ruby relation:
--%% ((Fst (fork 5));
--%% (rdrf 5 "k:int.
--%% ((((Fst (fork 5));(Snd D-59));
ENTITY conv264 IS
PORT
END conv264;
ARCHITECTURE ruby OF conv264 IS
COMPONENT dff_rubyint
PORT
END COMPONENT;
SIGNAL
wire4546,wire4548,wire4550,wire4552,wire4554,wire4556,wire4558,
wire19943,wire20128,wire20273,wire20378,wire20478,wire20596,
wire20746,wire20928,wire21142: rubyint;
BEGIN
-- Input assignments: --
output1 <= wire1983;
-- Calculations: --
-- Registers: --
D3: dff_rubyint PORT MAP (wire20128,wire20746,clk);
D4: dff_rubyint PORT MAP (wire20273,wire20596,clk);
D5: dff_rubyint PORT MAP (wire20378,wire20478,clk);
D7: dff_rubyint PORT MAP (wire19730,wire19732,clk);
D319: dff_rubyint PORT MAP (wire4546,wire4548,clk);
D320: dff_rubyint PORT MAP (wire2540,wire4546,clk);
Figure 10: VHDL translation of the instantiated 2-dimensional convolution circuit.
to the external inputs and outputs mentioned in the formal port clause of the entity,
while names starting with wire identify internal signals. A clock input is generated
if any of the underlying entities are sequential. The assignments marked Calculations
describe the combinational components, and those marked Registers describe the component
instantiations corresponding to the Delay elements. (Instantiations of any other
user-defined components follow in a separate section if required.)
The correctness of the translation relies heavily on two facts:
1. There is a simple mapping between Ruby types and operators and types and
operators which are available in VHDL.
2. Relations are only considered translatable if an (internally consistent) causal
partitioning can be found.
These facts also imply that the VHDL code which is generated can be synthesised
into VLSI. At present, we use the Synopsys VHDL Compiler [14] for performing this
synthesis automatically.
5.3 Other Components of the System
The complete system is illustrated in Figure 11. A similar style of analysis to that
used for generating VHDL code is used for controlling simulation of the behaviour
of the extracted relation. The user must supply a stream of values for the inputs
of the circuit and, if required, initial values for the latches, and the simulation then
uses exactly the same assignments of new values to signals as appear in the VHDL
description. Obviously, only fully instantiated causal relations can be simulated.
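A sketch of such a simulation for one convolution cell (assuming the building element of Figure 9, z = W·m + D(s), with a user-supplied input stream and an initial latch value; the function is an invention of this sketch):

```python
# Sketch: simulating one convolution cell, using the same expression as
# the generated VHDL assignment, a user-supplied input stream, and an
# initial value for the latch.

def simulate(W, m_stream, s_stream, latch_init=0):
    latch, out = latch_init, []
    for m, s in zip(m_stream, s_stream):
        out.append(W * m + latch)   # same expression as sig_z <= W*sig_m + sig_s
        latch = s                   # the D element stores s for the next cycle
    return out

assert simulate(3, [1, 2, 3], [10, 20, 30], latch_init=0) == [3, 16, 29]
```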
Figure 11: The complete Ruby Design System.
6 Conclusion
In this paper, we have presented the T-Ruby Design System and outlined a general
design method for VLSI circuits based on transformation of formal specifications using
equality rewriting, constraints and extraction. The simple mathematical basis of the
specification language in terms of functions and relations enables us to prove general
transformation rules, and minimises the step from the mathematical description of the
problem to the initial specification in our system.
The use of the system has been illustrated by the non-trivial example of a circuit for
2-dimensional convolution. This example shows how T-Ruby can be used to describe
complex repetitive structures which are useful in VLSI design, and demonstrates how
the system can be used to derive descriptions of highly generic circuits, from which
concrete circuit descriptions can be obtained by instantiation of free parameters. Circuits
described by so-called causal relations can be implemented and their behaviour
simulated. In the T-Ruby system, a simple mapping from T-Ruby to VHDL for such
relations is used to produce a VHDL description for final synthesis.
The design system basically relies on the existence of a large database of pre-
proved transformation rules. However, during the design process, conjectured rules
can be introduced at any time, and rewrite rules with pre-conditions may be used.
In T-Ruby, proofs of conjectures and pre-conditions can be postponed without any
loss of formality, as the system keeps track of the relevant proof burdens and these
can be transferred later to a separate theorem prover. Our belief is that this "divide
and conquer" philosophy helps to make the use of formal methods more feasible for
practical designs.
Acknowledgements
The work described in this paper has been partially supported by the Danish Technical
Research Council.
The authors would like to thank Lars Rossen for many interesting discussions about
constructing tools for Ruby, and Ole Sandum for his work on the design of the Ruby
to VHDL translator.
References
A framework for defining logics.
Between Functions and Relations in Calculating Programs.
Circuit design in Ruby.
Relations and refinement in circuit design.
A Generic Theorem Prover.
A Ruby proof system.
Formal Ruby.
Ruby algebra.
The Ruby framework.
T-Ruby: A tool for handling Ruby expressions.
Transformational rewriting with Ruby.
Constraints, abstraction and verification.
268186 | Bounded Delay Timing Analysis of a Class of CSP Programs. | We describe an algebraic technique for performing timing analysis of a class of asynchronous circuits described as CSP programs (including Martins probe operator) with the restrictions that there is no OR-causality and that guard selection is either completely free or mutually exclusive. Such a description is transformed into a safe Petri net with interval time delays specified on the places of the net. The timing analysis we perform determines the extreme separation in time between two communication actions of the CSP program for all possible timed executions of the system. We formally define this problem, propose an algorithm for its solution, and demonstrate polynomial running time on a non-trivial parameterized example. Petri nets with 3000 nodes and 10^16 reachable states have been analyzed using these techniques. | Introduction
There has been much work in the past decade on the synthesis of speed-independent (quasi-delay-
insensitive) circuits. What we develop in this paper are basic results that allow designers to reason
about, and thus synthesize, non-speed-independent or timed circuits. Whether designing timed
asynchronous circuits is a good idea can be debated ad infinitum. In any event, designers have
been applying "seat of the pants" techniques to design timed circuits for years. Our work can be
used to verify such designs. Our vision, however, is much more broad, and includes a complete
synthesis methodology for developing robust and high-performance timed designs. A description
of such a methodology is beyond the scope of this paper but is a major motivation for addressing
this difficult timing analysis problem.
An asynchronous circuit is specified using CSP as a set of concurrent processes. This description
is transformed into a safe Petri net which is the input to the timing analysis algorithm. The analysis
determines the extreme case separation in time between two communication actions in the CSP
specification over all timed executions. Determining tight bounds on separation times between
communication actions (system events) provides information which can be used to answer many
different temporal questions. For example, we may wish to know bounds on the cycle period of an
asynchronous component so we can use it to drive the clock signal of a synchronous component.
Similar information can be used to generate worst-case and amortized performance bounds. We
may also perform minimum separation analyses in order to determine if it is feasible to remove
circuitry from a speed-independent implementation [16]. Our algorithm performs these sorts of
analyses and is useful in many contexts and at many levels of abstraction. Separation analyses at
the high-level can be used to help a designer choose among potential designs to perform a given
computation. At a lower level they can be used to determine the correctness of the implementation,
e.g., whether isochronic fork assumptions are valid [12].
Related work in timing analysis and verification of concurrent systems comes from a variety of
different research communities including: real-time systems, VLSI CAD, and operations research.
Timed automata [1] is one of the more powerful models for which automated verification methods
exist. A timed automaton has a number of clocks (timers) whose values can be used in guards of
the transitions of the automaton. Such models have been extensively studied and several algorithms
exist for determining timing properties for timed automata [8, 9]. As in the untimed case, timed
automata suffer from the state explosion problem when constructing the cross product of component
specifications. Furthermore, the verification time is proportional to the product of the maximum
value of the clocks and also proportional to the number of permutations of the clocks.
To improve the run-time complexity, Burch [7] extends trace theory with discrete time but still
uses automata-based methods for verification. This approach also suffers from exponential runtime
in the size of the delay values but avoids the factorial associated with the permutations of the
clocks. Orbits [19] uses convex regions to represent sets of timed states and thus avoids the explicit
enumeration of each individual discrete timed state. Orbits is based on a Petri net model augmented
with timing information. Other Petri net based approaches include Timed Petri nets [18] and Time
Petri nets [3]. In Timed Petri nets a fixed delay is associated with each transition, while Time
Petri nets use a more general model with delay ranges associated with the transitions.
This paper is composed of seven sections. We follow this introduction with a description of
the CSP specification language and its translation to Petri nets. Timed and untimed execution
semantics of Petri nets are introduced in Section 3. The algorithm for performing timing analysis on
Petri nets is described in Sections 4 and 5. Section 6 presents a parameterizable example which is used
to benchmark the performance of the algorithm. Finally, Section 7 summarizes the contributions
of this paper.
2 Specification
We now describe the specification language and show how to translate a specification into an
intermediate form that is more suitable for timing analysis.
2.1 CSP Programs
We specify computations using CSP (Martin style [14]). To simplify the timing analysis, we restrict
the expressive power of the specification language. First, we exclude disjunctions in the guards
because they correspond to OR-causality, which is known to be difficult [11, 15]. Second, we
require the semantic property that during all untimed executions, either choice between guards
is completely free (all the guards are identically true), or at most one guard evaluates to true.
As we shall see in Section 3.2, this allows the use of all untimed executions in determining the
possible timed behaviors, which is an important simplification to the timing analysis problem.
These restrictions still allow the analysis of a large and interesting class of CSP programs, including
many programs specifying implementations and abstractions of asynchronous control circuits. The
syntax for a restricted CSP program P is shown in Table 1. Figure 1(a) shows a simple CSP
program with three processes.
Table 1: Syntax for the restricted CSP. P is a program, S a statement, C a communication action, E an expression, B a guard, and T a term. The terminal symbols x and X represent a variable identifier and a communication channel, respectively.
2.2 Petri Nets
The CSP specification is translated into a safe Petri net which is the direct input for the timing
analysis. A net N is a tuple (S, T, F), where S and T are finite, disjoint, nonempty sets of
places and transitions, respectively, and F ⊆ (S × T) ∪ (T × S) is a flow relation. A Petri net Σ
is a pair (N, M₀), where N is a net and M₀ ⊆ S is the initial marking. See [17] for further
details on the Petri net model. Graphically, a Petri net is represented as a bipartite graph whose
nodes are S and T and whose edges represent the flow relation F. Circles represent places and
straight lines represent transitions. The initial marking is shown with dots (tokens). Figure 1(b)
shows a simple Petri net. For an element x ∈ S ∪ T, the preset and postset of x are defined as
•x = { y | (y, x) ∈ F } and x• = { y | (x, y) ∈ F }, respectively.

A marking represents the global state of the system. A transition t is enabled at a marking M
if each input place of t is marked, i.e., ∀s ∈ •t : s ∈ M. Firing the enabled transition t at M
produces a new marking M′ constructed by removing a token from each of the places in the preset
of t and adding a token to each of the places in the postset of t. The transformation of M into
M′ by firing t is denoted M [t⟩ M′. We let [M⟩ denote the set of markings reachable from the
marking M.
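This token game can be sketched directly in Python, encoding a marking of a safe net as a set of places. The two-transition net below is a made-up example, not one from the paper:

```python
# Sketch of the safe Petri net firing rule: a transition t is enabled at
# marking M if every place in its preset holds a token; firing t moves the
# tokens across t, producing M [t> M'.

def enabled(t, marking, preset):
    """t is enabled iff all places in preset[t] are marked."""
    return all(s in marking for s in preset[t])

def fire(t, marking, preset, postset):
    """Fire the enabled transition t at `marking`, yielding the new marking."""
    assert enabled(t, marking, preset)
    return (marking - preset[t]) | postset[t]

# Hypothetical two-transition cycle: s0 --a--> s1 --b--> s0.
preset  = {"a": {"s0"}, "b": {"s1"}}
postset = {"a": {"s1"}, "b": {"s0"}}
m0 = frozenset({"s0"})
m1 = fire("a", m0, preset, postset)   # the token moves from s0 to s1
```

Because the net is safe, set semantics suffice; a general net would need multiset markings.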
2.3 Translation of CSP into Petri Nets
The CSP specification is translated into a safe Petri net. Petri net transitions are used to model
communication synchronizations and places are used to model control choice. The Petri net can be
constructed by syntax directed translation [4]. The mapping amounts to introducing a single token
corresponding to the program counter in each communicating process. A variable x is modeled
by two places, x 0 and x 1 . If x is true, x 1 is marked, otherwise x 0 is marked. After constructing
nets for each process, transitions are combined corresponding to matching communication actions.
Figure 1 shows a simple CSP specification and the corresponding Petri net.
Figure 1: (a) CSP specification and (b) the corresponding Petri net Σ.
There are two complications in the translation. One is how to consistently label Petri net transitions
corresponding to communication actions that occur different numbers of times in connected
processes. This labeling problem is illustrated in its simplest form by the following CSP program
composed of a divide by two counter connected to a trivial environment:
The X communication in P 1 has to connect up to two X communications in P 2 . We solve the
labeling problem by introducing a separate label for each possible pairing of communication actions
[4]. In the example, we introduce two labels, X⁽⁰⁾ and X⁽¹⁾, and a choice for each of the
possible communications, obtaining the nets shown in Figure 2.
Figure 2: Petri nets for the individual processes in the divide by two counter.
The second complication is the translation of the probe construct on a channel X. If the probed
communication action X is not completed immediately, we split it into two parts: the first half
implements the guard of the selection statement, and the second half implements the actual
communication action. Figure 3 illustrates this translation.
2.4 Properties of the Petri Net
The Petri net obtained from a CSP specification has the following properties:
Figure 3: The Petri net for the incomplete CSP program.
• The Petri net is safe, i.e., there is never more than one token at a place: ∀M ∈ [M₀⟩, ∀s ∈ S : M(s) ≤ 1.

• Let s be a choice place, i.e., |s•| > 1. Then s is either extended free choice or
unique choice. The place s is extended free choice if ∀t ∈ s•, ∀s′ ∈ •t : s• ⊆ s′•.
The place s is unique choice if at most one of the successor transitions ever becomes enabled.
Let en(M) denote the number of transitions in s• that are enabled at the marking M. A
place s is unique choice if ∀M ∈ [M₀⟩ : en(M) ≤ 1. The place s₂ in the net in Figure 1(b)
is an example of a choice place.
3 Execution Semantics
To represent the set of all (untimed) executions we introduce the notion of a process. Intuitively,
a process is an unfolding of a Petri net that represents one possible (finite) execution of the Petri
net. Processes are used to give timing semantics to a Petri net; for each process we define the set
of legal assignments of time stamps to the transitions of the process.
3.1 Processes

A process for the Petri net Σ is a net N_π = (S_π, T_π, F_π) and a labeling lab : S_π ∪ T_π → S_Σ ∪ T_Σ.
(We subscript S, T, and F to distinguish between the nets of Σ and π.) The net N_π is acyclic and
without choice, i.e., without branched places. N_π and lab must satisfy appropriate properties such
that π can be interpreted as an execution of Σ [5, 21].
Figure 4 shows a process for the Petri net in Figure 1(b). The only true choice in the net is
at the place s₂, where there is a non-deterministic selection of either transition c or d. The process
represents the execution where transition c fires the first time and transition d fires the next time.
We denote all (untimed) executions of a Petri net Σ by the set

Π(Σ) = { π | π is a process of Σ }.
A safe Petri net has only a finite number of reachable markings. Processes have the property that
any cut of places corresponds to a reachable marking of \Sigma [5, Lemma 2.7]. Therefore, sufficiently
long processes will contain repeated segments of processes. We represent the potentially infinite
Figure 4: A process π for the Petri net Σ in Figure 1(b). The places and transitions in the process have been labeled (using the lab function) with the names of their corresponding places and transitions in Σ.
set of processes Π(Σ) by a finite graph we call the process automaton. The vertices of the process
automaton correspond to markings of Σ and the edges are annotated with segments of processes.
We let v₀ denote the vertex corresponding to the initial marking M₀. Consider a path p in a process
automaton from vertex u to v, denoted u →p v. Then π(p) is the process obtained by concatenating
the process segments annotated on the edges of p. The process automaton has the property that

Π(Σ) = ⋃ { pref(π(p)) | v₀ →p v is a path in the process automaton },

where pref(π) is the set of prefixes (defined on partial orders [21]) of a process π. We can construct
the process automaton without first constructing the reachability graph [6, 10]. If there is no
concurrency in the net, the size of the process automaton is equal to the size of the reachability
graph. However, if there is a high degree of concurrency, the process automaton will be considerably
smaller. Figure 5 shows the process automaton and the associated processes for the Petri net in
Figure 1(b). The process in Figure 4 is constructed from a concatenation of these segments.
Figure 5: To the left, the process automaton for the Petri net in Figure 1(b). The three process segments annotated on the edges are shown to the right (labeled with elements from S_Σ ∪ T_Σ).
3.2 Timed Execution

To incorporate timing into the Petri net model, we associate delay bounds with each place in the
net. The lower delay bound, d(s) ∈ R≥0, and the upper delay bound, D(s) ∈ R≥0 ∪ {∞}, where R≥0
is the set of non-negative real numbers, satisfy 0 ≤ d(s) ≤ D(s). These delay bounds restrict the
possible executions of the Petri net. During a timed execution of the net, when a token is added
to a place s, the earliest it becomes available for a transition in s• is d(s) time units later and the
latest is D(s) units later. A transition t must fire when there are available tokens at all places in
•t unless the firing of the transition is disabled by firing another transition. The firing of t itself is
instantaneous.

More formally, a timing assignment for a process π is a function τ : T_π → R that maps the
transitions of the process to time values.
Definition 3.1 Let Σ be a safe Petri net and let π be a process of Σ. We consider a cut c ⊆ S_π
of π and let T_enabled ⊆ T_Σ be the set of transitions enabled at the corresponding marking, M_c. For
a timing assignment, τ, and a transition t ∈ T_enabled, the earliest and latest global firing time of t
are given by

earliest(t) = max { starttime(b) + d(lab(b)) | b ∈ c ∩ lab⁻¹(•t) }

and

latest(t) = max { starttime(b) + D(lab(b)) | b ∈ c ∩ lab⁻¹(•t) },

where lab⁻¹(•t) denotes the set of elements of S_π which are mapped to •t by lab. Note that c ∩
lab⁻¹(•t) is non-empty because t is enabled at marking M_c. The function starttime takes a place
b ∈ S_π and returns the time when a token entered the place lab(b), i.e., τ(e) if •b = {e}. If there
is no such transition e, we set starttime(b) to 0. The timing assignment τ is consistent at cut c if,
for every b ∈ c with b• = {e} and lab(e) ∈ T_enabled:

(1) earliest(lab(e)) ≤ τ(e) ≤ latest(lab(e)), and

(2) τ(e) ≤ min { latest(t) | t ∈ lab(b)• ∩ T_enabled }.

A timing assignment τ of a process is consistent if it is consistent at all place cuts c of the
process. Let Π_timed(Σ) be the set of all timed executions of Σ:

Π_timed(Σ) = { π | π ∈ Π(Σ) and there exists a consistent timing assignment for π }.
The restrictions on the CSP specification in Section 2 were crafted such that the sets of untimed
and timed processes of the underlying Petri net coincide. This allows us to use the process
automaton to enumerate the possible processes without referring to timing information, and then
perform timing analysis on each process individually. To prove this, we need two lemmas. The first
states that it is always possible to find a timing assignment satisfying (1) in Definition 3.1. The
second lemma states a simple structural property of extended free choice places.
Lemma 3.2 Let Σ be a safe Petri net. For any t ∈ T_Σ, earliest(t) ≤ latest(t) for any process of Σ
and for any timing assignment (not necessarily consistent).

Proof: From the definition of earliest and latest and the fact that d(s) ≤ D(s) for all places s. □
Lemma 3.3 Let Σ be a Petri net and let s ∈ S_Σ be an extended free choice place. Then ∀t₁, t₂ ∈ s• : •t₁ = •t₂.

Proof: By contradiction: assume •t₁ ≠ •t₂. Then there is a place s′ in exactly one of •t₁ and •t₂.
Assume without loss of generality that s′ is in •t₁ and not in •t₂. From the definition of s being
an extended free choice place, it follows that s• ⊆ s′•. By the premise of the lemma, t₂ is in s•,
and thus also in s′•. A simple fact about pre- and post-sets is that if y ∈ x• then x ∈ •y. As t₂ is
in s′•, s′ is in •t₂, contradicting the assumption. □
Theorem 3.4 Let Σ be a safe Petri net where choice is either extended free choice or unique
choice. Then Π_timed(Σ) = Π(Σ).

Proof: Clearly, Π_timed(Σ) ⊆ Π(Σ). We show that Π(Σ) ⊆ Π_timed(Σ), i.e., that there exists a consistent
timing assignment for all π ∈ Π(Σ).

We will prove that for all cuts c of π and any b ∈ c, constraint (2) is subsumed by constraint (1).
From Lemma 3.2 it then follows that for any process there exists a consistent timing assignment.

Observe that lab(b•) ⊆ lab(b)•, and thus if lab(b•) ∩ T_enabled is non-empty then so is lab(b)• ∩ T_enabled.

Let c ⊆ S_π be a cut of π and let b be a place in c. If b• = ∅, then (2) is
trivially satisfied. For b• ≠ ∅, let e be the one element in b• (all places b in a process have |b•| ≤ 1)
and let s be the place in Σ corresponding to b, i.e., s = lab(b). The observation above then states
that lab(e) ∈ s• ∩ T_enabled. Consider two cases:

Case |s• ∩ T_enabled| = 1: lab(e) is the only element in s• ∩ T_enabled and (2) reduces to τ(e) ≤ latest(lab(e)).

Case |s• ∩ T_enabled| > 1: The choice place s is either extended free choice or unique choice:

s is extended free choice: From Lemma 3.3 we have that ∀t₁, t₂ ∈ s• : •t₁ = •t₂. Hence,
from the definition of latest, it follows that ∀t₁, t₂ ∈ s• : latest(t₁) = latest(t₂). As
minimization is idempotent and lab(e) ∈ s• ∩ T_enabled, (2) reduces to τ(e) ≤ latest(lab(e)).

s is unique choice: From the definition of unique choice, |s• ∩ T_enabled| ≤ 1, thus lab(e) is
the only element in s• ∩ T_enabled. Condition (2) again reduces to τ(e) ≤ latest(lab(e)). □

4 Timing Analysis
Having formally defined the timing semantics of the Petri net, we now state the timing analysis
problem and present an algorithm for solving this problem.
4.1 Problem Formulation
Given two transitions from a Petri net Σ, t_from, t_to ∈ T_Σ, we wish to determine the extreme-case
separation in time between related firings of t_from and t_to. We let Π̂ be a set of triples
⟨π, t_src, t_dst⟩, where π ∈ Π(Σ) and t_src, t_dst are transitions in the process π with lab(t_src) = t_from and
lab(t_dst) = t_to. The set Π̂ is used to describe all the possible processes where the distinguished
transitions t_src and t_dst have the appropriate relationship. The timing analysis we perform is: for
all ⟨π, t_src, t_dst⟩ ∈ Π̂ and for all consistent timing assignments τ for π, determine the largest δ and smallest Δ
such that

δ ≤ τ(t_dst) − τ(t_src) ≤ Δ.

The transitions t_src and t_dst must be related in order for the timing analysis to yield interesting
information. Consider finding the maximum time between consecutive firings of transition a in
Figure 1(b), corresponding to the maximum cycle time of a transition. For this separation, t_from
and t_to are both a. The occurrences of a, the t_src and t_dst transitions, must be restricted such that
all the elements of Π̂ have the property that no other transition t between t_src and t_dst has label
a. For example, one of the elements in Π̂ is the process in Figure 4 with t_src and t_dst being the
left-most and right-most transitions labeled with a, respectively.

The relationship between t_src and t_dst is defined by backward relative indexing, by specifying
two numbers β and γ, and a reference transition, t_ref ∈ T_Σ. For a particular π, we find the
corresponding transitions t_src and t_dst by the following procedure: start at the end of the process
and move backwards looking for a transition t such that lab(t) = t_ref. When found, we continue
moving backwards, looking for the βth transition t (starting with 0) having lab(t) = t_from; this is
t_src. Simultaneously, we find the γth transition having lab(t) = t_to; this is t_dst. When both are
found, we include ⟨π, t_src, t_dst⟩ in Π̂.

The specification of a separation analysis on the Petri net Σ thus consists of three transitions,
t_from, t_to, and t_ref, all in T_Σ, and two constants β, γ ∈ N. We call β and γ occurrence indices
relative to the transition t_ref. We let

Π̂(Σ; t_from, β; t_to, γ; t_ref)

denote the set of triples ⟨π, t_src, t_dst⟩ where π ∈ Π(Σ) and t_src and t_dst have the relation described
above.
One communication action in the CSP program may map to many transitions in Σ, and these
transitions are to be considered equivalent when performing the timing analysis. Instead of specifying
the separation between individual transitions, we specify it between sets of transitions, i.e.,
a separation analysis is specified by two occurrence indices β and γ, and three sets of transitions
from Σ: From, To, and Ref. Our final formulation is thus Π̂(Σ; From, β; To, γ; Ref).
It is straightforward, given the communication actions, to determine what transitions should
be included in these sets. Sometimes we may also want to consider several CSP communication
actions as equivalent with respect to the separation analysis (we will see an example of this in
Section 6). This can conveniently be achieved by adding the corresponding Petri net transitions to
the appropriate From , To, and Ref sets.
In the sequel, we will only discuss the maximum separation analysis, i.e., finding Δ, because the
separation δ can be found from a maximum separation analysis of τ(t_src) − τ(t_dst) instead of
τ(t_dst) − τ(t_src). This is accomplished by computing Π̂ with reversed roles for From and To, and
for β and γ.
4.2 The CTSE Algorithm
Let Δ(π̂) be the maximum separation between t_src and t_dst for some particular execution π̂ = ⟨π, t_src, t_dst⟩:

Δ(π̂) = max { τ(t_dst) − τ(t_src) | τ is a consistent timing assignment for π }.

The maximum separation over all executions is then given by

Δ = max { Δ(π̂) | π̂ ∈ Π̂ }.   (3)

We now show how the elements of Π̂ are constructed to obtain Δ, and Section 5 describes the
algorithm for computing Δ(π̂).
The process automaton represents all possible executions. However, whatever topologically
follows t_src and t_dst in a process cannot influence the maximum separation between these two
transitions. Any portion of a process following t_src and t_dst can therefore be ignored, and all processes
in Π̂ will end with some terminal process segment that includes the two transitions t_src and t_dst.
Let π(p) be a process containing t_src and t_dst for some path p in the process automaton (starting at
v₀). We decompose this process into π(p′)π_T, where p′ is a path in the process automaton and π_T is
the minimal process segment containing t_src and t_dst. The process segment π_T is called a terminal
segment. We let Π_T(u) be the finite set of process segments such that for any path v₀ →p u in the
process automaton and π_T ∈ Π_T(u), the process π(p)π_T is in Π̂. Figure 6 shows the two terminal
process segments belonging to Π_T(v₀) for the a-to-a separation analysis in our example. For this
example, all processes in Π̂ can be constructed as π(p)π_T, where v₀ →p v₀ is a path in the process
automaton.
Figure 6: Two terminal processes (labeled using lab) for the separation analysis from a to the next a transition in the Petri net in Figure 1(b).
An algorithm for computing Δ(π̂) can be phrased in algebraic terms. For each segment of a
process, there is a corresponding element in the algebra. We use [π] to denote this element for the
process segment π. The algebra allows us to reuse the analysis of shorter processes when computing
Δ(π̂), because the operators of the algebra are associative (the details are shown in the next section).
There are two operations in the algebra: "choice", |, and "concatenation", ·.
Our approach to analyzing the infinite set Π̂ is to enumerate the processes π̂ of increasing length
by unfolding the process automaton using a breadth-first traversal. We traverse the automaton
backwards, starting with the terminal segments. An element of the algebra is stored at each node
v in the process automaton. Let [v]_k denote the algebraic element stored at node v in the process
automaton after the kth iteration. Initially, [v]₀ is the choice-combination of the terminal segments
at v, i.e., [v]₀ = [π_T¹] | ⋯ | [π_Tⁿ] for Π_T(v) = {π_T¹, …, π_Tⁿ}.

When traversing the process automaton backwards, the elements of the algebra are composed
(using ·) for two paths in series, and combined (using |) for two paths in parallel. The choice
operator combines backward paths when they reach the same marking in the process automaton;
this corresponds to a backward traversal with reconvergence in the process automaton of Figure 5
with the two terminal processes of Figure 6.

Whenever the node v₀ is reached in the kth unfolding, [v₀]_k represents the maximum separation for
all executions represented by that unfolding, denoted Δ_k. This value is maximized with the values
for the previous unfoldings, Δ_{≤k} = max{Δ₁, …, Δ_k}. From (3) it follows that Δ_{≤k} is a lower
bound on Δ and that lim_{k→∞} Δ_{≤k} = Δ.

For a given node v in the process automaton, we can compute an upper bound for all further
unfoldings; this bound is denoted [v]_{>k}. Let c be a vertex cut of the process automaton. An upper
bound on Δ after the kth unfolding is Δ_{>k} = max{[v]_{>k} | v ∈ c}. When Δ_{>k} is less than or equal to
Δ_{≤k} for some k, we can stop further unfolding and report the exact maximum separation Δ = Δ_{≤k}.
It is possible that the upper and lower bounds do not converge, in which case the bounds may still
provide useful information, as Δ is in the range [Δ_{≤k}, Δ_{>k}]. The main loop of the CTSE algorithm
is shown in Figure 7.
Algorithm: CTSE(G)
  for each v ∈ G: store [v]₀ = [π_T¹] | ⋯ | [π_Tⁿ], where Π_T(v) = {π_T¹, …, π_Tⁿ}, at v;
  k := 0;
  do {
    k := k + 1;
    unfold_once(G);
    Δ_{≤k} := max{Δ_{≤k−1}, Δ_k};
    Δ_{>k} := max{[v]_{>k} | v ∈ c};
  } until Δ_{>k} ≤ Δ_{≤k};
  return Δ_{≤k};

Figure 7: The CTSE algorithm computing Δ given a process automaton G.
The run-time of the algorithm depends on the size of the representation of the algebraic elements.
The size of an element may be as large as the number of paths between the two nodes related by the
element, i.e., exponential in the number of iterations, k. In practice, pruning drastically reduces
the element size.
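The termination scheme of the main loop (maintain the lower bound Δ_{≤k}, compare against the upper bound Δ_{>k}, stop when they cross) can be sketched with the algebraic computations stubbed out. The `delta_k` and `delta_gt_k` callbacks and the toy bound sequences below are hypothetical stand-ins, not the paper's computations:

```python
# Schematic of the CTSE termination condition: iterate unfoldings, keeping
# the best lower bound seen so far and the current upper bound; stop as soon
# as the upper bound drops to (or below) the lower bound.

def ctse_loop(delta_k, delta_gt_k, max_iter=100):
    lower, upper = float("-inf"), float("inf")
    for k in range(1, max_iter + 1):
        lower = max(lower, delta_k(k))      # Delta_<=k: lower bound so far
        upper = delta_gt_k(k)               # Delta_>k: bound over the cut
        if upper <= lower:                  # bounds crossed: Delta = lower
            break
    return lower, upper

# Toy instance: lower bounds rise toward 10, upper bounds fall toward 10.
result = ctse_loop(lambda k: min(10, 3 * k), lambda k: max(10, 16 - 2 * k))
```

If `max_iter` is exhausted first, the returned pair still brackets Δ, mirroring the non-convergent case discussed above.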
5 Computing Δ(π̂)
This section describes the algebra used in the CTSE algorithm. This algebra is used to reformulate
an algorithm by McMillan and Dill [15] for determining the maximum separation of two events in
an acyclic graph.
5.1 Algebras
Before presenting the algorithm for computing Δ(π̂) we introduce two algebras. The first is the
(min, +)-algebra (R ∪ {∞}, ⊕′, ⊗′), where a ⊕′ b = min(a, b) and a ⊗′ b = a + b.
The elements ∞ and 0 are the identity elements for ⊕′ and ⊗′, respectively.

The second algebra is denoted by (F, ⊕, ⊗). Each element in F is a function represented
by a set of pairs. The singleton set {⟨l, u⟩}, where u is a row-vector of length n, represents the
function f(x, m) = min(x + l, u ⊗′ m), where m is a column-vector of length n
and ⊗′ denotes the inner product in the (min, +)-algebra.
In general, the set {⟨l₁, u₁⟩, …, ⟨l_k, u_k⟩} represents the function

f(x, m) = max_{1≤i≤k} min(x + l_i, u_i ⊗′ m).   (4)

We associate two binary operators with functions: function maximization, f ⊕ g, and function
composition, f ⊗ g. It follows from (4) that function maximization is defined as set union: f ⊕ g = f ∪ g.
Function composition, g ⊗ h, is defined as f(x, m) = h(g(x, m), m). Notice that we use
left-to-right function composition. For singletons f = {⟨l_f, u_f⟩} and g = {⟨l_g, u_g⟩}, the
composition is again a singleton,

f ⊗ g = {⟨l_f + l_g, (l_g ⊗′ u_f) ⊕′ u_g⟩},

where ⊕′ and ⊗′ are applied elementwise to vectors. Function
composition, ⊗, distributes over function maximization, ⊕. The elements ∅ and {⟨0, (∞, …, ∞)⟩} are the
identity elements for function maximization and composition, respectively.

Let ⟨l₁, u₁⟩ and ⟨l₂, u₂⟩ be two pairs in the representation of a function. We can
remove ⟨l₂, u₂⟩ if l₂ ≤ l₁ and u₂ ≤ u₁ (elementwise), since then for all x and m, min(x + l₂, u₂ ⊗′ m) ≤
min(x + l₁, u₁ ⊗′ m). Proper application of this observation can greatly simplify the
representation of a function.
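The function representation and its two operators can be prototyped directly from the definitions above. This is a sketch, not the paper's implementation; the example pairs are made up, and the singleton composition rule is the one derived above:

```python
# Sketch of the function algebra: a function is a set of pairs (l, u)
# representing f(x, m) = max over pairs of min(x + l, min_i(u_i + m_i)).

def evaluate(f, x, m):
    return max(min(x + l, min(ui + mi for ui, mi in zip(u, m)))
               for l, u in f)

def maximize(f, g):                 # f (+) g: maximization is set union
    return f | g

def compose(f, g):                  # f (x) g: apply f, then g (left-to-right)
    return {(lf + lg,
             tuple(min(uf_i + lg, ug_i) for uf_i, ug_i in zip(uf, ug)))
            for lf, uf in f for lg, ug in g}

def prune(f):                       # drop pairs dominated by another pair
    return {(l, u) for l, u in f
            if not any((l2, u2) != (l, u) and l <= l2
                       and all(a <= b for a, b in zip(u, u2))
                       for l2, u2 in f)}

# Made-up singleton functions over vectors of length 2:
f = {(1, (5, 2))}
g = {(2, (4, 9))}
h = compose(f, g)                   # again a singleton, per the rule above
```

Evaluating `h` agrees with composing the evaluations of `f` and `g`, which is a useful sanity check on the composition rule.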
5.2 The Acyclic Time Separation of Events Algorithm
We can now present the algebraic formulation of McMillan and Dill's algorithm for computing
Δ(π̂). For each place and transition in π we compute a pair [f, m], where f ∈ F and m ∈ R ∪ {∞}.
The algorithm is shown in Figure 8.

Informally, this algorithm works as follows: to maximize the value of τ(t_dst) − τ(t_src), we need to
find a timing assignment that maximizes τ(t_dst) and minimizes τ(t_src). The first element of [f, m]
represents the longest path (using D(s)) from a transition to t_dst and the second element represents
the shortest path (using −d(s)) to t_src. The algebra for the f′-part is complicated by the fact that
the delay for a given place cannot be assigned both d(s) and D(s). The f′-part must represent the
longest path respecting the delays assigned by the shortest path computation. For details see [13].
To find the maximum separation represented by an [f, m] pair, we evaluate f at m and 0, computing
the sum of the longest and shortest paths. To compute Δ(π̂), we maximize over all [f, m] pairs at
the initial marking:

Δ(π̂) = max { f(m, 0) | [f, m] is the pair at s ∈ •π },

where •π denotes the set {s ∈ S_π | •s = ∅} and, similarly, π• denotes the set {s ∈ S_π | s• = ∅}.
5.3 Decomposition
The algebraic formulation allows for a decomposition of the above computation using matrices.
Consider a process segment π with |•π| = n and |π•| = m. We represent the computation of the
algorithm on π by two n × m matrices, F and M. Given a vector of m-values at π•, m, we can
Algorithm: Δ(π̂)
For each element of π in backward topological order:

For a place s, compute the pair [f′, m′]:

  f′ = {⟨0, (∞, …, ∞)⟩}   if s ∈ π•
       {⟨D(s), (m)⟩} ⊗ f   otherwise, where [f, m] is the pair stored at the transition in s•

  m′ = 0                   if s ∈ π•
       (−d(s)) ⊗′ m        otherwise, where [f, m] is the pair stored at the transition in s•

For a transition t, compute the pair [f′, m′]:

  f′ = {⟨0, (∞, …, ∞)⟩}              if t = t_dst
       ⊕ {f at place s | s ∈ t•}      otherwise

  m′ = 0                              if t = t_src
       ⊕′ {m at place s | s ∈ t•}     otherwise

Figure 8: The algorithm for computing Δ(π̂).
find the vector of m-values at •π, m′, from the (⊕′, ⊗′) matrix-vector product m′ = M ⊗′ m. This is
illustrated using the process segment π₁ from Figure 5, shown in Figure 9. We associate the delay
range [1, 2] with each place, i.e., d(s) = 1 and D(s) = 2 for all places s. We compute expressions for
the values at the places in •π₁, where m(x) refers to the m-value computed for
element x in the process. In backwards topological order of π₁ we compute:

m(t₁) = −1 ⊗′ m(s₅)   (by substitution)
m(s₀) = −1 ⊗′ m(t₁) = −2 ⊗′ m(s₅)   (by substitution, as ⊗′ distributes over ⊕′ and is associative)

We can represent this computation in matrix form using (⊕′, ⊗′) matrix products.
Figure 9: Process segment π₁ from Figure 5.
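The (min, +) matrix-vector product m′ = M ⊗′ m used here can be sketched directly. The 2×2 matrix below is a hypothetical segment matrix, not the one for π₁; ∞ entries mean "no path":

```python
# Sketch of the (min,+) matrix-vector product:
#   (M (x)' m)_i = min_j (M[i][j] + m[j])

INF = float("inf")

def minplus_matvec(M, m):
    return [min(Mij + mj for Mij, mj in zip(row, m)) for row in M]

# Hypothetical segment matrix: entry -1 encodes one place with lower delay
# bound 1 on the path (shortest-path computation uses -d(s)); INF = no path.
M = [[-1.0, INF],
     [INF, -2.0]]
m = [0.0, 0.0]
```

Composing segments then amounts to (min, +) matrix multiplication, which is what Section 5.3 exploits.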
A similar matrix is constructed for the f-part. For the process segment π₁, repeated substitution
yields an expression for f(s₀) in terms of f(s₅): pairs such as ⟨2, (m(t₁))⟩ and ⟨2, (m(s₅))⟩ are
composed using ⊗ (which distributes over ⊕), following the place and transition rules of Figure 8.

These expressions depend on the m-values at internal elements of π₁, e.g., m(t₁) in the expression
for f(s₀). The m-value for these nodes can be computed as a linear expression in the m-values at
the places in π₁•. This linear expression is encoded by a vector u of length |π₁•|; the vector
product u ⊗′ m computes the m-value for the internal node where u is stored. E.g., the expression
m(t₁) used in f(s₀) is represented by such a vector. We express the f-computation in matrix form
using a matrix F(π₁) of such functions.
Given a process segment π, we denote the corresponding function and m-value matrices by F(π)
and M(π). The algebraic element [π] is then defined as the singleton set {[F(π), M(π)]}. We can
now define the two operators · and |. The choice operator is defined as set union:

[π₁] | [π₂] = [π₁] ∪ [π₂].

The composition operator is more complex. When composing two segments π₁ and π₂, the functions
in [π₁] need to refer to the m-values at π₂• rather than those at π₁•. We shift the functions in F(π₁)
to make them refer to m-values in π₂• by multiplying the u-vectors in F(π₁) with M(π₂). For a
singleton function {⟨l, u⟩}, we obtain the function {⟨l, u ⊗′ M(π₂)⟩}. Non-singleton functions are
shifted by shifting each pair, and a matrix of functions is shifted elementwise. We use the notation
F ◁ M to denote a shift of matrix F by matrix M. For singleton sets the composition operator is
defined as:

{[F₁, M₁]} · {[F₂, M₂]} = {[(F₁ ◁ M₂) ⊗ F₂, M₁ ⊗′ M₂]}.

Non-singleton sets are multiplied out by applying the distributive law.
5.4 Pruning
Consider the element {[F₁, M₁], [F₂, M₂]}. We can remove [F₂, M₂] from the set if we can show
that, for any pairs composed to the left and right such that the result is a scalar, this scalar is no
greater than the same composition with the [F₁, M₁] pair. A sufficient condition for eliminating
[F₂, M₂] is the following: let

k = max(0, max_{i,j} (M₂[i,j] − M₁[i,j])),

i.e., k is the largest difference between elements in M₂ and M₁, or 0 if this difference is negative.
The pair [F₂, M₂] can then be eliminated if every entry of F₂ is dominated by the corresponding
entry of F₁ shifted by k1, where 1 is a row-vector of appropriate length with all entries set to 1.
This condition is used to eliminate entire execution paths from further analysis, and is central to
obtaining an efficient algorithm. More sophisticated conditions, that use more information about
the particular computation, are possible and may further increase the efficiency of the algorithm.
5.5 Upper Bound Computation
We now consider how to determine an upper bound [v] ?k for node, v, in the process automaton. To
determine a non-trivial upper bound, all further backward paths from v to v 0 have to be considered,
i.e., we need to bound the infinite set of algebraic elements constructed from backward paths:
oe
For any simple path p we just compute [λ(p)]. If p is not simple, we write p as p = p1 p2 p3, where
p3 is a simple path, p2 is a simple cycle, and p1 is finite, but may contain cycles. We introduce an
upper bound operator, ∇, with the property that composing ∇ applied to the cycle p2 with the
element for the remainder of the path yields an upper bound on [λ(p)]. Thus, the resulting
expression is an upper bound on the original expression. The ∇ operator is recursively applied
until the remaining path is a simple path. Hence, we can bound the infinite set in (5) by a finite
set of algebraic elements constructed from all paths consisting of a simple cycle followed by a
simple path ending at v.
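The decomposition of a non-simple path into a prefix p1, a simple cycle p2, and a simple suffix p3 can be illustrated with the following sketch (paths as node sequences; this illustrates the decomposition only and is not the paper's implementation):

```python
def decompose(path):
    """Split a node sequence into (p1, p2, p3): p3 is a simple
    (repetition-free) suffix, p2 a simple cycle feeding into it,
    and p1 the remaining prefix, which may itself contain cycles."""
    seen = {}
    for j in range(len(path) - 1, -1, -1):   # scan backwards
        if path[j] in seen:
            i = seen[path[j]]                # later occurrence of path[j]
            return path[:j + 1], path[j:i + 1], path[i:]
        seen[path[j]] = j
    return [], [], list(path)                # path was already simple

p1, p2, p3 = decompose(['a', 'b', 'c', 'b', 'd'])
# p2 = ['b', 'c', 'b'] is a simple cycle; p3 = ['b', 'd'] is simple
```

Node lists share their boundary nodes, so p1 ends where the cycle p2 begins and p2 ends where p3 begins; the backward scan guarantees that both p2 and p3 are repetition-free.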
The ∇ operator is defined as follows: Assume F is an (m + k) × (n + k) matrix of the form

    ( F0   F1 )
    ( 0m   Ik )

where 0i denotes a vector of length i containing 0 and Ik is the identity matrix of size k. The
operator ∇[F, M] is obtained by replacing the non-identity part of F using zm, where
zm is a vector of length m containing the function z = {⟨1, 1⟩}. The function z is a largest
element of F, i.e., z ⪰ f for all functions f ∈ F. The effect of the ∇ operator is to apply the
function z to the part of the F matrix which is not the identity.
The upper bound is determined individually for each pair in the set for node v. If the upper
bound for a given [F, M] pair is less than or equal to the present global lower bound, Δ, that
pair can be removed from the set, further pruning the backward execution paths that must be
considered.
The order in which [F, M] pairs are multiplied greatly affects the run-time of the algorithm. For
example, consider precomputing for each node in the process automaton the algebraic expression
for the upper bound, i.e., for each node, compute the algebraic element for the set of simple paths
followed by simple cycles (going backwards). Because we don't know what is to be composed with
these elements, few pairs can be pruned from the representation. Therefore it may be more efficient
to multiply the pairs out in each iteration, even though this doesn't allow the reuse of work from
previous iterations. Our experience has been that upper bound expressions become very large when
precomputed and we are better off recomputing them at each iteration because effective pruning
takes place. We only precompute the r of the simple cycles. This observation was key to achieving
polynomial run-time for the example described in the following section.
6 Benchmark Example: The Eager Stack
Replicating a single process in a linear array provides an efficient implementation of a last-in, first-out
memory which we refer to as an eager stack. The eager stack contains an interesting mixture
of choice and concurrency and represents an excellent parameterizable example for explaining analyses
that can be performed by our algorithm, and also benchmarking our implementation of the
algorithm.
6.1 The Eager Stack
A stack capable of storing n elements is constructed from n equivalent processes, arranged in a
linear array. Each process has four ports, In, Out , Put , and Get . The ports Put and Get connect
to the ports In and Out, respectively, in the stage to the right. Figure 10 shows a block diagram
of a 3-stage eager stack connected to its environment.

Figure 10: Block diagram of the 3-stage eager stack.
The CSP specification of a single stage is a process over the four ports In, Out, Put, and Get.
The Boolean variables b and rb are used to control communication with the adjacent right stage.
The value of b indicates whether this stage holds valid data. The value of rb is a mirror of the value
of b in the stage to the right. Concurrency occurs when a position must be created or a space must
be filled in.
The choice of whether to do a Put or Get is made in the environment and is potentially
propagated throughout the entire stack. In order to avoid an overflow or underflow condition, the
environment interacting with the stack must not attempt a Put if n elements are already stored in
the stack and it must not attempt a Get if the stack is empty. The following process represents a
suitable environment, E(Put, Get), which repeatedly chooses between a Put and a Get communication.
This process is unfolded n times and the actual data (x) is eliminated for simplicity. For n = 3 we
get an unfolded environment in which each guarded command is annotated with the current stack
occupancy. The construct is repeated if a guarded command with a trailing repetition marker is
chosen, and is not repeated otherwise. The number in parentheses refers to the number of items in the stack
at the time when the communication is performed, so after Put (2) the stack is full and only a Get
communication is possible 1 .
A nice property of this example is that the port names occur the same number of times and along
compatible choice paths in adjacent processes. Thus we can identify a (superscripted) number with
each occurrence of a port name in the program. We use Petri net transition P i for communications
on the Put port in stage i and the In port in stage i + 1. Similarly, Gi denotes a communication on
the Get port in stage i and on the Out port in stage i + 1 (see Figure 10).
1 It is possible to have the stack indicate whether it is empty or full and make the environment behave accordingly,
but this complicates each stage of the stack.
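Ignoring timing and concurrency, the externally visible behavior of the n-stage eager stack is that of a bounded last-in, first-out store. A minimal sketch of this functional view (the overflow/underflow rules mirror the environment constraints above; the class name and representation are ours):

```python
class EagerStack:
    """Functional (untimed) view of an n-stage eager stack."""
    def __init__(self, n):
        self.n = n           # capacity: one element per stage
        self.items = []

    def put(self, x):
        # the environment must not Put into a full stack
        assert len(self.items) < self.n, "overflow"
        self.items.append(x)

    def get(self):
        # the environment must not Get from an empty stack
        assert self.items, "underflow"
        return self.items.pop()

s = EagerStack(3)
s.put(1); s.put(2); s.put(3)   # fill the stack; only a Get is now possible
```

The interesting behavior of the real design, of course, is not this functional view but how Put and Get choices propagate concurrently through the stages, which is what the timing analysis below quantifies.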
6.2 Timing Analysis
There are numerous interesting time separation analyses we can perform on the eager stacks. We
can determine the minimum and maximum separations between consecutive Put communications
in the environment process; for a 3-stage stack, this analysis is posed in terms of the transition
sets To and From containing the Put occurrences.
If we set the delay between communication actions to be the range [1, 2], the maximum
separation is obtained by filling an empty stack (three Put operations) and then
emptying it again (three Get operations), finally inserting one element (a Put operation). The
maximum separation is achieved between the third and the fourth Put operation. For the minimum
separation, we exchange the sets To and From and negate the result.
A possibly more interesting analysis might be the minimum and maximum separations between
consecutive Put or Get communications. This corresponds to the minimum and maximum response
time of the stack, or equivalently, the minimum and maximum cycle period of the environment.
We must include all Petri net transitions corresponding to Put and Get communications in the
environment, i.e., both the Put and Get occurrences appear in the sets To and From. The results,
again for [1, 2] delay ranges between communication actions, are summarized below.
For fixed delay values, the eager stack has constant response time, i.e., the time from when the
environment performs either a Put or a Get operation until the next such operation is independent
of the size of the stack, n. This is not the case when we introduce uncertainty in the delay values.
The maximum response time turns out to be linear in the stack size, n. However, if we
look at the maximum response time amortized over m Put or Get communication actions, we get
the following maximum separations Δ. Dividing Δ by m, we obtain the amortized separations
shown below:

    m      1   2     3    4    5     6     7
    Δ      6  10    14   18   22    26    30
    Δ/m    6   5  4.66  4.5  4.4  4.33  4.29
We can predict that Δ/m approaches 4 as m grows. So although the maximum separation between
two consecutive operations increases linearly with n, if we amortize over a number of operations,
the response time converges to 4. In fact, the maximum response time converges to 4 independently
of n. In this sense, the eager stack has constant response time even when the delays are uncertain.
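The amortized figures above follow the pattern Δ = 4m + 2; this closed form is an extrapolation from the listed values, not a formula stated in the text. Under that assumption, the convergence of Δ/m to 4 can be checked directly:

```python
def amortized(m, cycle=4, overhead=2):
    """Amortized separation Delta/m, assuming Delta = cycle * m + overhead
    (a pattern extrapolated from the measured values, not given in the text)."""
    return (cycle * m + overhead) / m

row = [round(amortized(m), 2) for m in range(1, 8)]
# row matches the table up to rounding: 6.0, 5.0, 4.67, 4.5, 4.4, 4.33, 4.29
```

As m grows, the constant overhead term is spread over ever more operations, so the amortized separation tends to the per-operation cycle time of 4.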
6.3 Run Time
Execution times of the CTSE algorithm on eager stacks of various sizes, n, are shown in Table 2 using
[1, 2] for all delay ranges. The size of the specification, i.e., number of places, number of transitions,
and the size of the flow relation, is given in the table by |SΣ|, |TΣ|, and |FΣ|, respectively. The
number of nodes in the reachability graph is shown in the |R.G.| column. Note that the reachability
graph is not constructed when performing the timing analysis and is only reported to give an idea
of the complexity of the nets. The separation analysis denoted by Δ1 is the maximum separation
between consecutive Put operations and Δ2 is the maximum separation between consecutive Put
or Get operations. The CPU times were obtained on a Sparc 10 with 256 MB of memory.
Table 2: Run times of the CTSE algorithm on eager stacks of various sizes.
Figure 11 shows the CPU times for the two separation analyses plotted as a function of the size
of the Petri net.
Orbits [19] is, to the authors' knowledge, the most developed and efficient tool for answering
temporal questions about Petri net specifications. Orbits constructs the timed reachability graph,
i.e., the states reachable given the timing information. It should be noted that Orbits is capable
of analyzing a larger class of Petri net specifications than the one described here. Partial order
techniques are also used in Orbits to reduce the state space explosion [20]. However, the time to
construct the timed reachability graph for the eager stack increases exponentially with the size n.
For n = 6 the time is 234 CPU seconds on a DECstation 5000 with 256 MB, i.e., two orders of
magnitude slower than the CTSE algorithm. For larger n, Orbits ran out of memory.
7 Conclusion
We have described an algorithm for solving an important time separation problem on a class of
Petri nets that contains both choice and concurrency. In practice, our algorithm is able to analyze
nets of considerable size, demonstrated by an example whose Petri net specification consists of
more than 3000 nodes and whose reachable state space is vastly larger. While we report a polynomial run-time result for
only a single parameterizable example, we expect similar results for other specifications exhibiting
limited choice and abundant concurrency.
Acknowledgments
We thank Chris Myers of Stanford University for many fruitful discussions as well as supplying
the Orbits runtimes. This work is supported by an NSF YI Award (MIP-9257987) and by the
DARPA/CSTO Microsystems Program under an ONR monitored contract (N00014-91-J-4041).

Figure 11: Double logarithmic plot of CPU time for the two separation analyses as
a function of the Petri net size, |FΣ|.
--R
The theory of timed automata.
Synchronization and Linearity.
Modeling and verification of time dependent systems using time Petri nets.
Its relation to nets and to CSP.
Partial order behavior and structure of Petri nets.
Interleaving and partial orders in concurrency: A formal comparison.
Trace Algebra for Automatic Verification of Real-Time Concurrent Systems
Minimum and maximum delay problems in real-time systems
Computer Aided Verification
Using partial orders to improve automatic verification methods.
Timing analysis of digital circuits and the theory of min-max functions
Practical applications of an efficient time separation of events algorithm.
An algorithm for exact bounds on the time separation of events in concurrent systems.
Programming in VLSI: From communicating processes to delay-insensitive circuits
Algorithms for interface timing verification.
Synthesis of timed asynchronous circuits.
Petri Net Theory and The Modeling of Systems.
Performance evaluation of asynchronous concurrent systems using Petri nets.
Automatic verification of timed circuits.
Modular Construction and Partial Order Semantics of Petri Nets.
--TR
--CTR
Ken Stevens , Shai Rotem , Steven M. Burns , Jordi Cortadella , Ran Ginosar , Michael Kishinevsky , Marly Roncken, CAD directions for high performance asynchronous circuits, Proceedings of the 36th ACM/IEEE conference on Design automation, p.116-121, June 21-25, 1999, New Orleans, Louisiana, United States | asynchronous systems;concurrent systems;time separation of events;timing analysis;abstract algebra |
268421 | Understanding the sources of variation in software inspections. | In a previous experiment, we determined how various changes in three structural elements of the software inspection process (team size and the number and sequencing of sessions) altered effectiveness and interval. Our results showed that such changes did not significantly influence the defect detection rate, but that certain combinations of changes dramatically increased the inspection interval. We also observed a large amount of unexplained variance in the data, indicating that other factors must be affecting inspection performance. The nature and extent of these other factors now have to be determined to ensure that they had not biased our earlier results. Also, identifying these other factors might suggest additional ways to improve the efficiency of inspections. Acting on the hypothesis that the inputs into the inspection process (reviewers, authors, and code units) were significant sources of variation, we modeled their effects on inspection performance. We found that they were responsible for much more variation in defect detection than was process structure. This leads us to conclude that better defect detection techniques, not better process structures, are the key to improving inspection effectiveness. The combined effects of process inputs and process structure on the inspection interval accounted for only a small percentage of the variance in inspection interval. Therefore, there must be other factors which need to be identified. |

1 Introduction
Software inspection has long been regarded as a simple, effective, and inexpensive way of detecting
and removing defects from software artifacts. Most organizations follow a three-step procedure
of Preparation, Collection, and Repair. First, each member of a team of reviewers reads the
artifact, detecting as many defects as possible (Preparation). Next, the review team meets, looks
for additional defects, and compiles a list of all discovered defects (Collection). Finally, these defects
are corrected by the artifact's author (Repair).
Several variants of this method have been proposed in order to improve inspection performance.
Most involve restructuring the process, e.g., rearranging the steps, changing the number of people
working on each step, or the number of times each step is executed. Some of these variants
have been evaluated empirically. However, focus has been on their overall performance. Very few
investigations attempted to isolate the effects due specifically to structural changes. However, we
must know which effects are caused by which changes in order to determine the factors that drive
inspection performance, to understand why one method may be better than another, and to focus
future research on high-payoff areas.
Therefore, we conducted a controlled experiment in which we manipulated the structure of the
inspection process[20]. We adjusted the size of the team and the number of sessions. Defects
were sometimes repaired in between multiple sessions and sometimes not. Comparing the effects of
different structures on inspection effectiveness and interval 1 indicated that none of the structural
changes we investigated had a significant impact on effectiveness, but some changes dramatically
increased the inspection interval.
Regardless of the treatment used, both the effectiveness and interval data seemed to vary widely. To
strengthen the credibility of our previous study and to deepen our understanding of the inspection
process, we must now study this variation.
1.1 Problem Statement
We are asking two questions: (1) Are the effects of process structure obscured by other sources of
variation, i.e., is the "signal" swamped by "noise"? (2) Are the effects of other factors more influential
than the effects of process structure, i.e., are researchers focusing on the wrong mechanisms?
To answer the first question, we will attempt to separate the effects of some external sources of
variation from the effects due to changes in the process structure. By eliminating the effects of
external variation we will have a more accurate picture of the effects of our experimental treatments.
Also, by understanding the external variation we may be able to evaluate how well our experimental
design controlled for it, which will aid the design of future experiments.
To answer the second question, we will compare the variation due to process structure with that
due to other sources. If the other sources are more influential than process structure, then it may
Inspections have many different costs and benefits. In this study we restricted our discussion of benefits to the
number of defects found, and costs to inspection interval (the time from the start of the inspection to its completion)
and person effort.
be possible to significantly improve inspections by properly manipulating them. We expect that
identifying and understanding these sources will aid the development of better inspection methods.
Therefore, we have extended the results of our experiment by identifying some sources of variation
and modeling their influence on inspection effectiveness and interval. We will show that our previous
results do not change even after these sources of variation are accounted for. This analysis also
suggests some improvements for the inspection process and raises some implications about past
research and future studies.
1.2 Analysis Philosophy
We hope to identify mechanisms that drive the costs and benefits of inspections so that we can
engineer better inspections. To do this we will rely heavily on statistical modeling techniques.
However, these techniques are not completely automated. Therefore, we must make judgments
about which variables or combinations of variables to allow in the models. These choices are
guided by our desire to create models that are robust and interpretable.
To improve robustness we avoided fitting the data with too many factors. Doing so could result in
a model that explains much of the variation in the current data, but has no predictive power when
used on a different set of data.
To improve interpretability we omitted factors for which we have no readily available measure. We
also omitted factors whose effects were known to be confounded with other factors in the model.
Finally, we rejected models for which, based on our experience, we could not argue that their
variables were causal agents of inspection performance. Specifically, there are four conditions that
must be satisfied before factor A can be said to cause response B[12]:
1. A must occur before B.
2. A and B must be correlated.
3. There is no other factor C that accounts for the correlation between A and B.
4. A mechanism exists that explains how A affects B.
One implication of all these is that the "best" model for our purpose is not necessarily the one that
explains the largest amount of variation. Throughout this research we have chosen certain models
over others. Some were rejected because a smaller, but equally effective model could be found, or
because one variable was strongly confounded with another, or because a variable failed to show a
causal relationship with inspection performance. We will point out these cases as they arise.
2 Summary of Experiment
With the cooperation of professional developers working on an actual software project at Lucent
Technologies (formerly AT&T Bell Labs), we conducted a controlled experiment to compare the
costs and benefits of making several structural changes to the software inspection process. (See
Porter et al.[20] for details.) The project was to create a compiler and environment to support
developers of Lucent Technologies' 5ESS(TM) telephone switching system. The finished compiler
contains over 55K new lines of C++ code, plus 10K which was reused from a prototype. (See
Appendix A for a description of the project.)
The inspector pool consisted of the 6 developers building the compiler plus 5 developers working
on other projects. 2 They had all been with the organization for at least 5 years and had similar
development backgrounds. In particular, all had received inspection training at some point in their
careers. Data was collected from June 1994 to December 1995, during
which 88 code inspections were performed.
2.1 Experimental Design
We hypothesized that (1) inspections with large teams have longer intervals, but find no more
defects than smaller teams; (2) multiple-session inspections 3 are more effective than single-session
inspections, but at the cost of a significantly longer interval; and (3) although repairing the defects
found in each session of a multiple-session inspection before starting the next session will catch
even more defects, it will also take significantly longer than multiple sessions meeting in parallel.
We manipulated these independent variables: the number of reviewers (1, 2, or 4); the number
of sessions (1 or 2); and, for multiple sessions, whether to conduct the sessions in parallel or
in sequence. The treatments were arrived at by selecting various combinations of these (e.g., 1
session/4 reviewers, 2 sessions/2 reviewers without repair, etc.
Among the dependent variables measured were inspection effectiveness-in terms of observed number
of defects, as explained in Appendix B-and inspection interval-in terms of working days from
the time the code was made available for inspection up to the collection meeting. 4
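The working-day interval can be computed as in the following sketch (weekends excluded; the organization's actual holiday calendar is not modeled, so this is an approximation of the measure used):

```python
from datetime import date, timedelta

def working_days(start, end):
    """Number of working days from `start` (code available for inspection)
    up to `end` (collection meeting), excluding Saturdays and Sundays."""
    days, d = 0, start
    while d < end:
        if d.weekday() < 5:    # Monday..Friday are 0..4
            days += 1
        d += timedelta(days=1)
    return days

# Friday July 1 to Friday July 8, 1994 spans 5 working days
interval = working_days(date(1994, 7, 1), date(1994, 7, 8))
```

For 2-session inspections, this computation would be applied to each session and the longer of the two intervals kept, as described in the footnote above.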
2.2 Conducting the Experiment
To support the experiment, one of us joined the development team in the role of inspection quality
engineer (IQE). He was responsible for tracking the experiment's progress, capturing and validating
data, and observing all inspections. He also attended the development team's meetings, but had
no development responsibilities.
When a code unit was ready for inspection, the IQE randomly assigned a treatment and randomly
drew the review team from the inspector pool. In this way, we attempted to control for differences
in natural ability, learning rate, and code unit quality.
In addition, 6 more developers were called in at one time or another to help inspect 1 or 2 pieces of code, mostly
to relieve the regular pool during the peak development periods. It is common practice to get non-project developers
to inspect code during peak periods.
3 In this experiment, we used the term "session" to mean one cycle of the preparation-collection-repair process. In
multiple-session inspections, different teams inspect the same code unit.
4 For 2-session inspections, the longer interval of the two is selected.
The names of the reviewers were then given to the author, who scheduled the collection meeting.
If the treatment called for 2 sessions, the author scheduled 2 separate collection meetings. If repair
was required between the 2 sessions, then the second collection meeting was not scheduled until
the author had repaired all defects found in the first session.
The reviewers were expected to prepare sufficiently before the meeting. During preparation, reviewers
did not merely acquaint themselves with the code, but carefully examined it for defects.
They were not given any specific technical roles (e.g., tester or end-user) nor any checklists. On an
individual preparation form, they recorded the time spent on preparation, and the page and line
number and the description of each issue (each "suspected" defect). 5 The experiment placed no
limit on preparation time.
For the collection meeting one reviewer was selected as the moderator and another as the reader.
The moderator ran the meeting and recorded administrative data on a moderator report form.
This comprised the name of the author, lines of code inspected, hours spent testing the code before
inspection, and inspection team members. The reader paraphrased the code. During this activity,
reviewers brought up any issues found during preparation or briefly discussed newly discovered
issues. On a collection form, the code unit's author recorded the page and line number and description
of each issue regarded as valid, as well as the start and end time of the collection meeting.
Each valid issue was tagged with a unique Issue ID. If a reviewer had found that particular issue
during preparation, he or she recorded that ID next to the issue on his or her preparation form.
This enabled us to trace issues back to the reviewers who found them. No limit was placed on
meeting duration, although most lasted less than 2 hours.
After the collection meeting, the author kept the collection form and resolved all issues. In the
process he or she recorded on a repair form the disposition (no change, fixed, deferred), nature (non-
issue, optional, requires change not affecting execution, requires change affecting execution), locality
(whether repair is isolated to the inspected code), and effort spent resolving
each issue. Afterwards, the author returned all paperwork to us. We used the information from
the repair form and interviews with the author to classify each issue as a true defect (if the author
was required to make an execution affecting change to resolve it), soft maintenance issue (any other
issue which the author fixed), or false positive (any issue which required no action).
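The classification rule just described can be sketched as a small function; the Boolean field encodings are assumptions for illustration, not the exact codes used on the repair form:

```python
def classify_issue(fixed, affects_execution):
    """Classify an inspection issue from its repair outcome.

    fixed: whether the author made any change to resolve the issue.
    affects_execution: whether that change affects execution.
    """
    if fixed and affects_execution:
        return "true defect"
    if fixed:
        return "soft maintenance"
    return "false positive"
```

Only the first category counts toward the observed-defect measure used throughout the analysis; the other two categories are excluded.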
In the course of the experiment, several treatments were discontinued because they were either not
performing effectively, or were taking too long to complete. These were the 1-session, 1-person
treatment and all 2-session treatments which required repair between sessions.
By the end of the experiment, we had collected data from 88 inspections, with a combined total of 130
collection meetings and 233 individual preparation reports. The entire data set may be examined
online at http://www.cs.umd.edu/users/harvey/variance.html.
2.3 Self-Reported Data
Self-reported data tend to contain systematic errors. Therefore we minimized the amount of self-reported
data by employing direct observation[19] and interviews[2]. The IQE attended 125 of the
5 A sample of this, and all other forms we used may be found at http://www.cs.umd.edu/users/harvey/
variance.html.
collection meetings 6 to make sure the meeting data was reported accurately and that reviewers
did not mistakenly add to their preparation forms any issues that were not found until collection.
We also made detailed field notes to corroborate and supplement some of the data in the meeting
forms. The repair information was verified through interviews with the author, who completed the
form. Our defect classification was not made available to the reviewers or the authors to avoid
biasing them.
Among the data that remained self-reported were the amount of preparation time and pre-inspection
testing time expended. We had two concerns in dealing with these data: a participant might
deliberately fail to tell the truth (e.g., reporting 2 hours preparation time when he or she really
did not prepare at all); participants might make errors in recording data (e.g., reporting 2 hours of
preparation time when the correct figure was 1.9 hours).
During the experiment, the IQE had an office next to those of the compiler development team,
and after he had worked with the team for months, a great deal of trust was built up. Also, the
development environment routinely collects self-reported data, which is unavailable to management
at the individual level. Thus developers are conditioned to answer as reliably as they can. We
therefore see no reason to suspect that participants ever deliberately misrepresented their data.
As for the element of error, previous observational studies on time usage conducted in this environment
have shown that although there are always inaccuracies in self-reported data, the self-reported
data is generally within 20% of the observed data[18].
2.4 Results of the Experiment
Our experiment produced three general results:
1. Inspection interval and effectiveness of defect detection were not significantly affected by team
size (large vs. small).
2. Inspection interval and effectiveness of defect detection were not significantly affected by
number of sessions (single vs. multiple).
3. Effectiveness of defect detection was not improved by performing repairs between sessions of
two-session inspections. However, inspection interval was significantly increased.
From this we concluded that single-session inspections by small teams were the most efficient, since
their defect detection rate was as good as that of other formats, and inspection interval was the
same or less.
The observed number of defects and the intervals per treatment are shown as boxplots 7 in Figures 1
and 2, respectively. The treatments are denoted [1 or 2] sessions X [1, 2, or 4] persons [No-
6 The unattended ones are due to schedule conflicts and illness.
7 We have made extensive use of boxplots to represent data distributions. Each data set is represented by a box
whose height spans the central 50% of the data. The upper and lower ends of the box marks the upper and lower
quartiles. The data's median is denoted by a bold line within the box. The dashed vertical lines attached to the
box indicate the tails of the distribution; they extend to the standard range of the data (1.5 times the inter-quartile
range). All other detached points are "outliers."[5]
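The boxplot elements described in the footnote can be computed as in this sketch (quartiles via linear interpolation; other quartile conventions would shift the values slightly):

```python
def boxplot_stats(xs):
    """Median, quartiles, and outliers beyond 1.5 * IQR whiskers."""
    s = sorted(xs)
    def quantile(q):
        pos = q * (len(s) - 1)
        lo, frac = int(pos), pos - int(pos)
        return s[lo] + frac * (s[min(lo + 1, len(s) - 1)] - s[lo])
    q1, med, q3 = quantile(0.25), quantile(0.5), quantile(0.75)
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    outliers = [x for x in s if x < lo or x > hi]
    return med, q1, q3, outliers

med, q1, q3, out = boxplot_stats([1, 2, 3, 4, 5, 100])
# 100 falls outside q3 + 1.5 * IQR and is flagged as an outlier
```

The box spans q1 to q3, the bold line marks the median, and the whiskers extend to the most extreme points within 1.5 times the inter-quartile range.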
Figure 1: Observed Number of Defects by Treatment. The treatment labels are interpreted
as follows: the first digit stands for the number of sessions, the second digit stands for the number
of reviewers per session, and, for 2-session inspections, the 'R' or `N' suffix indicates "with repair"
or "no repair". As seen here, the distributions all seem to be similar except for 1sX1p and 2sX2pR,
which were discontinued after 7 and 4 data points, respectively.
repair,Repair]. (For example, the label 2sX1pN indicates a two-session, one-person, without-repair
inspection.) It can be seen that most of the treatment distributions are similar but that they vary
widely within themselves.
3 Sources of Variation
3.1 Process Inputs as Sources of Variation
In addition to the process structure, we see that differences in process inputs (e.g., code unit and
reviewers) also affects inspection outcomes. Therefore, we will attempt to separate the effects of
process inputs from the effects of the process structure. To do this we will estimate the amount of
variation contributed by these process inputs.
Thus, our first question from Section 1.1 may be refined as, (1) How will our previous results
change when we eliminate the contributions due to variability in the process inputs? (2) Did our
experimental design spread the variance in process inputs uniformly across treatments?
Our second question then becomes, (1) Are the differences due to process inputs significantly larger
than the differences in the treatments? (2) If so, what factors or attributes affecting the variability
of these process inputs have the greatest influence?
Figure 3 is a diagram of the inspection process and associated inputs, e.g., the code unit, the
Figure 2: Pre-meeting Interval by Treatment. As seen here, the distributions all seem to be
similar except for 2sX2pR, which was significantly higher.
Figure 3: A Cause and Effect Diagram of the Inspection Process. The inputs to the process
(reviewers, author, and code unit) are shown in grey rectangles on the left, the solid ovals represent
process steps, the grouped boxes in between steps show the intermediate outputs of the process.
Time flows left to right.
reviewers, and the author. It shows how these inputs interact with each process step. This is an
example of a cause-and-effect diagram, similar to the ones used in practice[13], but customized here
for our use.
The number and types of issues raised in the preparation step are influenced by the reviewers
selected and by the number of defects originally in the code unit (which in turn may be affected
by the author of the code unit). The number and types of issues recorded in the collection step
are influenced by the reviewers on the inspection team and the author (who joins the collection
meeting), the number of issues raised in preparation, and the number remaining undetected in the
Figure 4: The Refined Cause and Effect Diagram. This figure extends the inspection model
with some of the factors which we believe to affect reviewer and author performance and code unit
quality (e.g., language familiarity, application experience, inspection experience, type of change,
functionality, code structure, code size, and pre-inspection testing).
code unit.
3.2 Factors Affecting Inspections
We considered the factors affecting reviewer and author performance and code unit quality that
might systematically influence the outcome of the inspection. (Some of these are shown in Figure 4.)
In 3.2.1 through 3.2.3, we examine these factors, explain how they might influence the number of
defects, and discuss confounding issues. 8 As we examine them, we caution the reader against drawing conclusions about the significance of any factor as a source of variation. The goal here is to establish possible mechanisms, not to test the significance of correlations. Each plot is meant to be descriptive,
showing the relationship of a factor against the number of defects, without eliminating the influence
of potentially confounding factors. The actual test of a factor's significance will be carried out when
the model is built. (See Section 4.)
3.2.1 Code Unit Factors
Some of the possible variables affecting the number of defects in the code unit include: size, author,
time period when it was written, and functionality.
8 We did not have any readily available measure of experience or code complexity, so we did not include them in our analysis.
Figure 5: Size vs. Defects Found. This is a scatter plot showing the relation between the size of the code and the number of defects found (cor = 0.40). The line indicates the trend of the data. Note that the plot was "jittered" (a small, random offset was added to each point) to expose overlapping points. In fact, every scatter plot in this paper that may have overlapping points was jittered.
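The jittering described in the caption can be sketched in a few lines. This is an illustrative helper of our own, not code from the study, and the default offset scale is an arbitrary choice:

```python
import numpy as np

def jitter(values, scale=0.1, rng=None):
    """Add a small uniform random offset to each point so that
    overlapping points in a scatter plot become visible."""
    rng = np.random.default_rng(rng)
    values = np.asarray(values, dtype=float)
    return values + rng.uniform(-scale, scale, size=values.shape)
```

In practice one would jitter only the plotted coordinates, never the underlying data used for analysis.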
Code Size. The size of a code unit is given in terms of non-commentary source lines (NCSL). It
is natural to think that, as the size of the code increases, the more defects it will contain. From
Figure 5 we see that there is some correlation between size and number of defects found (cor = 0.40). 9
Author. The author of the code may inadvertently inject defects into the code unit. There were
6 authors in the project. Figure 6 is a boxplot showing the number of defects found, grouped
according to the code unit's author. The number of defects could depend on the author's level of
understanding and implementation experience.
Development Phase. The performance of the reviewers and the number of defects in the code
unit at the time of inspection might well depend also on the state of the project when the inspection
was held. Figure 7 is a plot of the total defects found in each inspection, in chronological order.
Each point was plotted in the order the code unit became available for inspection. There are two
distinct distributions in the data. The first calendar quarter of the project (July - September 1994)
- which has about a third of the inspections - has a significantly higher mean than the remaining
period. This coincided with the project's first integration build. With the front end in place, the
development team could incrementally add new code units to the system, possibly with a more
precise idea of how the new code is supposed to interact with the integrated system, resulting in
fewer misunderstandings and defects. In our data, we tagged each code unit as being from "Phase 1" if it was written in the first quarter and "Phase 2" otherwise.
9 Correlations calculated in this paper are Pearson correlation coefficients.
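Footnote 9's Pearson correlation coefficient is straightforward to compute from scratch. The helper below is our own illustration; the paper's analysis itself was done in S:

```python
import math

def pearson_r(xs, ys):
    # Pearson correlation: covariance of x and y divided by the
    # product of their standard deviations.
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```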
Figure 6: Defects Found In Authors' Code Units. These boxplots show the total defects found in each inspection, grouped according to the code unit's author.
At the end of Phase 1, we met with the developers to evaluate the impact of the experiment on
their quality and schedule goals. We decided to discontinue the 2-session treatments with repair
because they effectively have twice the inspection interval of 1-session inspections of the same team
size. We also dropped the 1-session, 1-person treatment because inspections using it found the
lowest number of defects.
Figure 8 shows a time series plot of the number of issues raised for each code unit inspection. While the number of true defects being raised dropped as time went by, the total number of issues did not. This might indicate either that the reviewers' defect detection performance was deteriorating over time, or that the authors were learning to prevent the true defects but not the other kinds of issues being raised.
Functionality. Functionality refers to the compiler component to which the code unit belongs, e.g., parser, symbol table, code generator, etc. Some functionalities may be more straightforward to implement than others and, hence, will have code units with fewer defects. Figure 9 is a boxplot showing the number of defects found, grouped according to functionality.
Table 1 shows the number of code units each author implemented within each functional area. Because of the way the coding assignments were partitioned among the development team, the effects of functionality are confounded with the author effect. For example, we see in Figure 9 that SymTab has the lowest number of defects found. However, Table 1 shows that almost all the code units in SymTab were written by author 6, who has the lowest number of reported defects. Nevertheless, we may still be able to speculate about the relative impact of the two factors by examining those functionalities with more than one author (CodeGen) and authors implementing more than one functionality (author 6).
In addition, functionality is also confounded with development phase, as Phase 1 had most of the code for the front end functionalities (input-output, parser, symbol table) while Phase 2 had the back end functionalities (code generation, report generation, libraries).

Figure 7: Defects Detected Over Time. This is a plot of the inspection results in chronological order showing the trends in number of defects found over time. The vertical lines partition the plot into calendar quarters. Within each quarter, the solid horizontal line marks the mean of that quarter's distribution. The dashed lines mark one standard deviation above and below the mean.

Author
CodeGen 8 6 8 6 28
Report 3 3
I/O 9 9
Library 12 12
Misc 11 11
Parser 4 4

Table 1: Assignment of Authors to Functionality. Each cell gives the number of code units implemented by an author for a functionality.
Because author, phase, and functionality are related, they cannot all be considered in the model
as they account for much of the same variation. In the end, we selected functionality as it is the
easiest to explain.
Figure 8: Number of Issues Recorded Over Time. This is a time series plot showing the trends in number of issues being recorded over time. The vertical lines partition the plot into quarters. Within each quarter, the solid horizontal line marks the mean of that quarter's distribution. The dashed lines mark one standard deviation above and below the mean. Note that the scale of the y-axis is different from the previous figure.

Pre-inspection Testing. The code development process employed by the developers allowed them to perform some unit testing before the inspection. Performing this would remove some of
the defects prior to the inspection. Figure 10 is a scatter plot of pre-inspection testing effort against
observed defects in inspection (cor = 0.15). One would suspect that the number of observed defects
would go down as the amount of pre-inspection testing goes up, but this pattern is not observed in Figure 10.
A possible explanation for this is that testing patterns during code development may have changed over time. As the project progressed and a framework for the rest of the code was set up, it may have become easier to test the code incrementally during coding. This may result in code with different defect characteristics compared to code that was written straight through. It would be interesting to do a longitudinal study to see if these areas had high maintenance costs.
3.2.2 Reviewer Factors
Here we examine how different reviewers affect the number of defects detected. Note that we only
look at their effect on the number of defects found in preparation, because their effect as a group
is different in the collection meeting's setting.
Reviewer. Reviewers differ in their ability to detect defects. Figure 11 shows that some reviewers
find more defects than others. 10 Even for the same code unit, different reviewers may find different
10 In addition to the 11 reviewers, 6 more developers were called in at one time or another to help inspect 1 or 2 pieces of code, mostly to relieve the regular pool during the peak development periods. We did not include them in this analysis because they each had too few data points.

Figure 9: Defects Found In Code Units Classified by Functionality. These boxplots show the total defects found in each inspection, grouped according to the code unit's functionality. Note that specific authors were assigned to implement specific portions of the project's functionality, so the effects of functionality are usually not separable from those of authors; the independent factors of author and functionality are confounded. For example, SymTab, which has the lowest number of defects found, was implemented by author 6, who has the lowest number of reported defects.
numbers of defects (Figure 12). This may be because they were looking for different kinds of
issues. Reviewers may raise several kinds of issues, which may either be suppressed at the meeting,
or classified as true defects, soft maintenance issues (issues which required some non-behavior-
affecting changes in the code, like adding comments, enforcing coding standards, etc.), or false
positives (issues which were not suppressed at the meeting, but which the author later regarded
as non-issues). Figure 13 shows the mean number of issues raised by each reviewer as well as the
percentage breakdown per classification. We see that some of the reviewers with low numbers of
true defects (see Figure 11), like Reviewers H and I, simply do not raise many issues in total.
Others, like Reviewers J and K, raise many issues but most of them are suppressed. Still others,
like Reviewers E and G, raise many issues but most turn out to be soft maintenance issues. The
members of the development team (Reviewers A to F) raise on average more total issues (see left
plot in Figure 13), though a very high percentage turn out to be soft maintenance issues (see right
plot in Figure 13), possibly because, as authors of the project, they have a higher concern for its
long-term maintainability than the rest of the reviewers. An exception is Reviewer F, who found
almost as many true defects as soft maintenance issues.
Preparation Time. The amount of preparation time is a measure of the amount of effort the
reviewer put into studying the code unit. For this experiment, the reviewers were not instructed
to follow any prescribed range of preparation time, but to study the code in as much time as they
think they need. Figure 14 plots preparation time against defects found, showing a positive trend
but little correlation.
Figure 10: Pre-inspection Testing Effort vs. Defects Found. This is a scatter plot showing how the amount of pre-inspection testing related to the number of defects found in inspection (cor = 0.15). Note that the pre-inspection testing data was self-reported by the author. Points cluster at the quarter hours because we asked the authors to only record to that precision.
Even if preparation time is found to be a significant contributor, it must be noted that preparation
time depends not only on the amount of effort the reviewer is planning to put into the preparation,
but also on factors related to the code unit itself. In particular, it is influenced by the number of
defects existing in the code, i.e., the more defects he finds, the more time he spends in preparation.
Hence, high preparation time may be considered a consequence, as well as a cause, of detecting a
large number of defects. Further investigation is needed to quantify the effect of preparation time
on defects found as well as the effect of defects found on preparation time. Because there is no
way to tell how much of the preparation time was due to reviewer effort or number of defects, we
decided not to include it in the model. This is also in keeping with our analysis philosophy to only
consider factors that occur strictly before the response. (See Section 1.2.)
3.2.3 Team Factors
Team-specific variables also add to the variance in the number of meeting gains.
Team Composition. Since different reviewers have different abilities and experiences, and possibly interact differently with each other, different teams also differ in combined abilities and experiences. Apparently, this mix tended to form teams with nearly the same performance. This is illustrated in Figure 15, which shows the number of defects found by different 2-person teams in each 2sX2pN inspection. Most of the time, the two teams found nearly the same number of defects. This may be due to some interactions going on between team members. However, because teams are formed randomly, there are only a few instances where teams composed of the same people were formed
more than once, not enough to study the interactions.

Figure 11: Number of Defects in Preparation per Reviewer. This plot shows the number of true defects found in preparation by each reviewer.
We incorporated the team composition into the model by representing it as a vector of boolean
variables, one variable per reviewer in the reviewer pool. When a particular reviewer is in that
collection meeting, his corresponding variable is set to "True".
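A minimal sketch of this encoding (a hypothetical helper of our own; the reviewer labels follow the paper's A-K naming):

```python
def team_indicators(team, reviewer_pool):
    """Encode a collection-meeting team as one boolean variable per
    reviewer in the pool: True if that reviewer attended the meeting."""
    members = set(team)
    return {f"R{r}": (r in members) for r in reviewer_pool}
```

Each boolean then enters the regression as its own term, which is what lets the model attribute effects to individual reviewers such as Reviewer B or F.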
Meeting Duration. The meeting duration is the number of hours spent in the meeting. In the meeting, one person is appointed the reader, and he reads out the code unit, paraphrasing each chunk of code. The meeting revolves around him. At any time, reviewers may raise issues related to the particular chunk being read and a discussion may ensue. All these contribute toward the pace of the meeting. The meeting duration is positively correlated with the number of meeting gains, as shown in Figure 16 (cor = 0.57). As with the case of preparation time, the meeting duration is
partly dependent on the number of defects found, as detection of more defects may trigger more
discussions, thus lengthening the duration. It is also dependent on the complexity or readability
of the code. Further investigation is needed to determine how much of the meeting duration is
due to the team effort independent of the complexity and quality of the code being inspected. For
similar reasons as with preparation time (see the previous discussion on preparation time), we did
not include this in the model.
Combined Number of Defects Found in Preparation. The number of defects already found
going into the meeting may also affect the number of defects found at the meeting. Each reviewer
gets a chance to raise each issue he found in preparation as a point of discussion, possibly resulting
in the detection of more defects. Figure 17 shows some correlation between number of defects found
in the preparation and in the meeting (cor = 0.4).
Figure 12: Reviewer Performance Per Inspection. This shows the number of defects found in preparation by each reviewer, grouped according to inspection. Each column represents one inspection. The points in that column represent the number of true defects reported during preparation by each reviewer in that inspection. The columns were ordered according to increasing means.
4 A Model of Inspection Effectiveness
4.1 Building the Model
To explain the variance in the defect data, we built statistical models of the inspection process,
guided by what we knew about it. Model building involves formulating the model, fitting the
model, and checking that the model adequately characterizes the process. We built the models in
the S programming language[3, 6].
Using the factors described in the previous section, we modeled the number of defects found with
a generalized linear model (GLM) from the Poisson family. 11 We started with a model which had
all code unit factors, all reviewers, and the original treatment factors, represented by the following model formula: 12

RA + RB + RC + RD + RE + RF + RG + RH + RI + RJ + RK (1)
In this model, Functionality and Author are categorical variables represented in S as sets of dummy variables[6, pp. 20-22, 32-36]. They have 7 and 5 degrees of freedom, respectively.

11 The generalized linear model and the rationale for using it are explained in Appendix C.

12 We used S language notation to represent our models[6, pp. 24-31]. For example, the model formula y ~ a + b + c is read as, "y is modeled by a, b, and c."

Figure 13: Classification of Issues Found in Preparation. The bar graph on the left shows the mean number of issues found in preparation by each reviewer, broken down according to issue classification. The bar graph on the right shows the percentage breakdown.

Figure 14: Preparation Time vs. Defects Found In Preparation. This is a scatter plot showing how the amount of preparation time related to the number of defects found in preparation.
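The fitting machinery behind a Poisson-family GLM can be illustrated with iteratively reweighted least squares (IRLS). This is a from-scratch sketch, not the paper's S code, and it assumes a design matrix X whose columns would hold terms such as log(code size) and reviewer indicators:

```python
import numpy as np

def fit_poisson_glm(X, y, n_iter=25):
    """Fit a log-linear Poisson GLM, E[y] = exp(X @ beta),
    by iteratively reweighted least squares (IRLS)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta              # linear predictor
        mu = np.exp(eta)            # mean under the log link
        z = eta + (y - mu) / mu     # working response
        W = mu                      # Poisson working weights
        XtW = X.T * W               # X^T W, shape (p, n)
        beta = np.linalg.solve(XtW @ X, XtW @ z)
    return beta
```

Each IRLS step is a weighted least-squares solve; the same scheme underlies glm() fitting in S.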
The stepwise model selection heuristic 13 selected the following model.

13 Stepwise model selection techniques are a heuristic to find the best-fitting models using as few parameters as possible. To avoid overfitting the data, the number of parameters must always be kept small or the residual degrees of freedom high. To perform stepwise model selection we used the step() function in S[6, pp. 233-238].
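S's step() is, roughly, a greedy search over candidate terms. The sketch below mimics forward selection by AIC for an ordinary least-squares model (a simplified illustration: the real step() also considers dropping terms and, for GLMs, uses deviance-based AIC). The penalty argument plays the role of the scale parameter mentioned in footnote 14: raising it yields smaller models.

```python
import numpy as np

def aic_ols(X, y, penalty=2.0):
    """AIC of an ordinary least-squares fit (up to an additive constant)."""
    n = len(y)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    return n * np.log(rss / n) + penalty * X.shape[1]

def forward_stepwise(candidates, y, penalty=2.0):
    """Greedily add the candidate column that most improves AIC,
    starting from an intercept-only model, until no addition helps."""
    n = len(y)
    selected, current = [], np.ones((n, 1))
    best_aic = aic_ols(current, y, penalty)
    improved = True
    while improved:
        improved = False
        for name, col in candidates.items():
            if name in selected:
                continue
            trial = np.column_stack([current, col])
            a = aic_ols(trial, y, penalty)
            if a < best_aic:
                best_aic, best_name, best_X, improved = a, name, trial, True
        if improved:
            selected.append(best_name)
            current = best_X
    return selected
```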
Figure 15: Team Performance Per Inspection (2sX2pN only). This shows the total number of defects found per session in each 2sX2pN inspection. Each column represents one inspection. The points in that column represent the total number of true defects reported in preparation and meeting by each team. "1" and "2" plot the number of defects found by the first and second teams, respectively. The columns are ordered by mean defects found.

RB + RC + RF + RG + RH + RI
This resulting model is not satisfactory because it retained many factors, making it difficult to
interpret. Also, even though these factors were considered important by the stepwise selection
criteria, some of them do not explain a lot of the variance. So we increased the selection threshold
to produce a smaller model. 14 Increasing the selection threshold did not simplify the model initially,
until, at one point, a large number of factors were suddenly dropped. The resulting model then
was:
It must be noted that the factors left out of the model are not necessarily unimportant. We believe
that there are other possible models for our data. In particular, Phase was considered important.
Phase is a surrogate variable representing the change in defects being found over time. Figure 7
clearly showed that something had changed over time but it is not clear what caused it. The
reason why this change over time explains a significant part of the variability may be attributable
to other factors. It is not clear which mechanism explains why Phase affects the number of defects.
14 In S, increase the scale parameter of the step() function.
Figure 16: Meeting Duration vs. Defects Found in Meeting. This is a scatter plot showing how the amount of time spent in the meeting related to the number of defects found in the meeting (cor = 0.57).
We also knew that Phase was confounded with Functionality (e.g., parser was implemented before
code generator). Since we knew also that some parts of the compiler are harder to implement than
others, the effects due to Functionality are easier to interpret than the effects due to Phase. Thus
we replaced Phase by Functionality in our final model:
The analysis of variance for this model is in Table 2. For comparison, the treatment factors were added to the model. See Appendix C for details on calculating the significance values. The resulting model explains about 50% of the variance using just 10 degrees of freedom.
In this model, Defects is the number of defects found in each of the 88 inspections. Note that
the presence of certain reviewers (Reviewers B and F) in the inspection team strongly affects the
outcome of the inspection. (See Table 2.) Note also the log transformation on the Code Size factor.
We do not really know what the actual underlying functional relationship is between Code Size
and Defects and so we applied square root, logarithmic, and linear transformations. Code Size
explained more variance under the log transformation than under other transformations.
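The transformation comparison can be mimicked as follows. This is a hypothetical sketch of our own: for a single predictor, the variance explained by a least-squares fit equals the squared correlation with the response, so we score each transform by that quantity:

```python
import numpy as np

def best_size_transform(size, defects):
    """Score linear, square-root, and log transforms of code size by the
    fraction of variance each explains in a one-variable fit (r squared)."""
    size = np.asarray(size, dtype=float)
    transforms = {
        "linear": size,
        "sqrt": np.sqrt(size),
        "log": np.log(size),
    }
    scores = {name: float(np.corrcoef(x, defects)[0, 1] ** 2)
              for name, x in transforms.items()}
    best = max(scores, key=scores.get)
    return best, scores
```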
Figure 18 gives diagnostic plots of the model's goodness-of-fit. The left plot shows the values
estimated by the model compared to the original values. It shows that the model reasonably
estimates the number of defects. The right plot shows the values estimated by the model compared
to the residuals. The residuals appear to be independent of the fitted values, suggesting that the
residuals are random.
Figure 17: Defects Found in Preparation vs. Defects Found in Meeting. This is a scatter plot showing how the combined amount of defects found in the preparation related to the number of defects found in the meeting (cor = 0.4).
4.2 Lower Level Models
The inspection model is a high level description of the inspection defect detection process. The
effects of the process input and of the process structure can be compared using this model. But we
also know that defect detection in inspections is performed in two steps: preparation and collection.
These two steps may be considered as independent processes which can be modeled separately.
Doing so has several advantages. We can understand the resulting models of the simpler separate
processes better than the model for the composite inspection process. In addition, there are more
data points to fit - 233 individual preparations and 130 collection meetings, as opposed to 88
inspections.
4.2.1 A Model for Defect Detection During Preparation
To build the preparation model, we started with the same variables as in inspection model 1. Since
the same code unit was inspected several times, we added a categorical variable, CodeUnit, to the
regression model. CodeUnit is a unique ID for each code unit inspected.
Using stepwise model selection, we selected the variables that significantly affect the variance in the
preparation data. These were Functionality, Size, and Reviewers B, E, F, and J. This is represented
by the model formula:
PrepDefects ~ Functionality + log(Size) + RB + RE + RF + RJ

In this model, PrepDefects is the number of defects found in each of the 233 preparation reports.
Factor                        Degrees of   Sum of    F Value   Pr(F)    Effect
                              Freedom      Squares
Treatment   Team Size         2              2.65      0.50    0.6062
factors     Sessions          1              1.12      0.43    0.5146
Input       log(Code Size)    1             59.63     22.66
factors     Functionality     7             43.76      2.38    0.0303
            Residuals         73           192.11

Table 2: Factors Affecting Inspection Effectiveness. The sum of squares measures the relative contribution of each factor to the variance of the defect data. The probabilities indicate the significance of the contribution. The last column for each significant scalar factor indicates whether the factor was a positive or negative contributor to the number of defects. (Functionality had 7 degrees of freedom and different functionalities had different effects.)
Figure 18: Examining the fit of the model. The left plot compares the values estimated by the model with the original values (a perfect fit would imply that everything is on the line). There is a substantial correlation between the two (cor = 0.69). The right plot shows the relation of the fitted values to the residuals. The residuals appear to be independent of the fitted values.
The presence of all the significant factors from the overall model at this level gives us more confidence in the validity of the overall model.
4.2.2 A Model for Defect Detection During Collection
We started with the same variables as in the preparation model. (See previous section.) Using stepwise model selection to select the variables that significantly affect the meeting data, we ended up with Functionality, Size, and the presence of Reviewers B, F, H, J, and K. This is represented by the model formula:

MeetingGains ~ Functionality + log(Size) + RB + RF + RH + RJ + RK

Figure 19: Examining the Significance of the Experimental Treatment Factors. These three panels depict the distribution of the residual data grouped according to Team Size, Sessions, and Repair.
In this model, MeetingGains is the number of defects found in each of the 130 collection meetings.
This is again consistent with the previous two models.
4.3 Answering the Questions
We are now in a position to answer the questions raised in Section 3.1 with respect to inspection
effectiveness.
4.3.1 Will previous results change when process inputs are accounted for?
In this analysis, we build a GLM composed of the significant process input factors plus the treatment
factors and check if their contributions to the model would be significant.
The effect of increasing team size is suggested by plotting the residuals of the overall inspection
model, grouped according to Team Size (Figure 19(a)). We observe no significant difference in the
distributions. When we included the Team Size factor into the model, we saw that its contribution
was not significant (p = 0.6, see Table 2). 15

15 Appendix C, Section C.3.1, describes how Tables 2 and 3 were constructed.
The effect of increasing sessions is suggested by plotting the residuals of the overall inspection
model, grouped according to Session (Figure 19(b)). We observe no significant difference in the
distributions. When we included the Session factor into the model, we saw that its contribution
was not significant (p = 0.5).
The effect of adding repair is suggested by plotting the residuals of the overall inspection model
(for those inspections that had 2 sessions), grouped according to Repair policy (Figure 19(c)). We
observe no significant difference in the distributions. When we included the Repair factor into the
model, we saw that its contribution was not significant (p = 0.2).
4.3.2 Did design spread process inputs uniformly across treatments?
We want to determine if the factors of the process inputs which significantly affect the variance are
spread uniformly across treatments. This is useful in evaluating our experimental design. Although
randomization guarantees that the long run distribution of the factors will be independent of the
treatments, we had a single set of 88 data points. Thus we felt it was important to know of any imbalances in this particular randomization.
As an informal sanity check we took each of the significant factors in the overall inspection model
and tested if they are independent of the treatments. For each factor, we built a contingency table,
showing the frequency of occurrence of each value of that factor within each treatment. We then
used Pearson's χ²-test for independence[4, pp. 145-150]. If the result is significant, then the factor is not independently distributed across the treatments. Although the counts in the table cells are too low for this χ²-test to be valid, we use it as an informal means to indicate gross nonuniformities in the assignment of treatments.
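Pearson's χ² statistic for a contingency table is easy to compute directly (a helper of our own; a real test would compare the statistic against the χ² distribution with (rows-1)(columns-1) degrees of freedom):

```python
import numpy as np

def chi_square_stat(table):
    """Pearson chi-square statistic for a two-way contingency table:
    sum over cells of (observed - expected)^2 / expected, where the
    expected counts assume independence of rows and columns."""
    obs = np.asarray(table, dtype=float)
    row = obs.sum(axis=1, keepdims=True)
    col = obs.sum(axis=0, keepdims=True)
    expected = row @ col / obs.sum()
    return float(((obs - expected) ** 2 / expected).sum())
```

A statistic near zero means the observed cell counts match the independence hypothesis almost exactly; larger values indicate dependence between the factor and the treatments.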
Results show that the distribution of Reviewer B is independent of treatment, but Functionality may be unevenly assigned to treatments.
Examining further shows us that Reviewer F never got to do any 1sX1p inspections, and that
Functionality was not distributed evenly because some functionalities were implemented earlier
than others, when there were more treatments.
Contingency tables only work with data which have discrete values. To test the independence of log(Size) to treatment, we modeled it instead with a linear model, log(Size) ~ Treatment, to determine if the treatment contribution to log(Size) is significant. The ANOVA result shows that it is not, indicating that there is no dependence between code sizes and treatment.
4.3.3 Are differences due to process inputs larger than differences due to process structure?
Table 2 shows the analysis of variance for our model. The significance of the treatment factors' contributions was included for comparison.
The table shows that differences in code units and reviewers drive inspection performance more than differences in any of our treatment variables. This suggests that relatively little improvement in effectiveness can be expected from additional work on manipulating the process structure.
4.3.4 What factors affecting process inputs have the greatest influence?
The dominance of process inputs over process structure in explaining the variance also suggests
that more improvements in effectiveness can be expected by studying the factors associated with
reviewers and code units that drive inspection effectiveness.
Differences in code units strongly affect defect detection effectiveness. Therefore, it is important to
study the attributes that influence the number of defects in the code unit. Of the code unit factors
we studied, code size was the most important in all the models. This is consistent with the accepted
practice of normalizing the defects found by the size of the code. The next most important factor
is functionality. This may indicate that code functionalities have different levels of implementation
difficulty, i.e., some functionalities are more complex than others. Because functionality is confounded
with authors, it may also be explained by differences in authors. And because it is also
confounded with development phase, another possible explanation is that code functionalities implemented later in the project may have fewer defects due to improved understanding of requirements and familiarity with the implementation environment.
The choice of people to use as reviewers strongly affects the defect detection effectiveness of the
inspection. The presence of certain reviewers (in particular, Reviewer F) is a major factor in all
the models. It suggests that improvements in effectiveness may be expected by selecting the right
reviewers or by studying the characteristics and background of the best reviewers and the implicit
techniques by which they study code and detect defects.
5 A Model of Inspection Interval
Using the same set of factors, we also built a statistical model for the interval data. We measured
the interval from submission of the code unit for inspection up to the holding of the collection
meeting. Unlike defect detection, we do not see any further decomposition of the inspection process
that drives the interval. The author schedules the collection meeting with the reviewers and the
reviewers spend some time before the meeting to do their preparation. So instead of splitting the
inspection process into preparation and collection, we just modeled the interval from submission
to meeting.
A linear model was constructed from the factors described in the previous section. 16 We started by modeling interval with the same initial set of factors as in the previous section. Using the stepwise model selection heuristic, we arrived at the following model.
Even though we ended up with a small set of factors, the model was hard to interpret. It did not
make sense for Functionality to be an important factor influencing the length of the inspection
interval. In addition Functionality and Phase were confounded so they may be explaining part
of the same variance. Our belief was that they were masking the effect of the other confounded
16 The linear model was used here rather than the generalized linear model because the original interval data approximates the normal distribution.
Factor                        Degrees of   Sum of     F Value   Pr(F)    Effect
                              Freedom      Squares
Treatment   Team Size         2            206.6      0.85      0.4308
factors     Sessions          1            161.6      1.28      0.2619
Input       Author            5            2195.0     3.62      0.0054
factors     R I               1            242.1      2.00
            Residuals         77           9340.86

Table 3: Factors Affecting Interval. The sum of squares measures the deviation contributed
by each factor to the mean of the interval data. The probabilities indicate the significance of the
contribution. The last column for each scalar factor in the model indicates whether the factor was
a positive or negative contributor to the interval. (Author had 5 degrees of freedom, and different
authors had different effects.)
factor, Author. It makes more sense for Author to be in the model since he is the central person
coordinating the inspection. So we re-ran the stepwise model selection heuristic, instructing it to
always retain the Author factor. The result was:
Interval ~ Author + R I + Repair
In this model, Interval is the number of days from availability of code unit for inspection up to
the last collection meeting.
The analysis of variance for this model is in Table 3. For comparison, all the treatment factors were
added to the model. The model explains ~25% of the variance using just 7 degrees of freedom.
The low explanatory power of the model indicates the limited extent to which structure and inputs
affect interval and suggests that other factors (that were not observed in this study) are more
important in determining the interval. The presence of Repair confirms our earlier experimental
result stating that adding repair in between inspections increases the interval.
5.1 Model Checking
Figure 20 gives diagnostic plots of the model's goodness-of-fit. The left plot shows the values
estimated by the model compared to the original values. Because the model only explains 25% of
the variance, it has limited predictive capabilities. The right plot shows the values estimated by
the model compared to the residuals. The residuals appear to be independent of the fitted values.
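As a concrete illustration of the two diagnostics used above (fraction of variance explained, and checking that the residuals show no relation to the fitted values), here is a small sketch; the function names and data are ours, not from the study.

```python
def r_squared(y, fitted):
    """Fraction of variance explained: 1 - SS_res / SS_tot.
    The interval model in the text reports a value of about 0.25."""
    mean_y = sum(y) / len(y)
    ss_tot = sum((v - mean_y) ** 2 for v in y)
    ss_res = sum((v - f) ** 2 for v, f in zip(y, fitted))
    return 1.0 - ss_res / ss_tot

def correlation(xs, ys):
    """Pearson correlation; a value near zero between fitted values and
    residuals suggests the residuals are independent of the fit."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5
```

Note that a model with low r_squared can still pass the second check, which is exactly the situation described in the text: limited predictive power, but residuals independent of the fitted values.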
5.2 Answering the Questions
We are now in a position to answer the questions raised in Section 3.1, with respect to inspection
Figure 20: Examining the fit of the model. Both panels are plotted against the fitted value.
The left plot compares values estimated by the model with the original values (a perfect fit would
imply that everything is on the line); there is some correlation between the two (0.48). The right
plot shows the relation of the fitted values to the residuals. The residuals appear to be independent
of the fitted values.
5.2.1 Will previous results change when process inputs are accounted for?
In this analysis, we build a linear model, composed of the significant process input factors plus the
treatment factors and check if their contributions to the model are significant.
The effect of increasing team size is suggested by plotting the residuals of the interval model
consisting only of input factors, grouping them according to Team Size (Figure 21(a)). We observe
no significant difference in the distributions. When we included the Team Size factor into the
model, we saw that its contribution was not significant (p = 0.4, see Table 3).
The effect of increasing sessions is suggested by plotting the residuals of the interval model consisting
only of input factors, grouping them according to Session (Figure 21(b)). We observe no significant
difference in the distributions. When we included the Session factor into the model, we saw that
its contribution was not significant (p = 0.3).
The effect of adding repair is suggested by plotting the residuals of the interval model consisting
only of input factors (for those inspections that had 2 sessions), grouping them according to Repair
policy (Figure 21(c)). We have already seen that Repair has a significant contribution
to the model in the previous section and this is supported by the plot.
5.2.2 Are differences due to process inputs larger than differences due to process
structure?
Table 3 shows the factors affecting inspection interval and the amount of variance in the interval
that they explain. We can see that some treatment factors and some process input factors
contribute significantly to the interval. Among treatment factors Repair contributes significantly
to the interval. This shows that while changes in process structure do not seem to affect defect
detection, they do affect interval.
Figure 21: Examining the Significance of the Experimental Treatment Factors. These
three panels depict the distribution of the residual interval data grouped according to (a) Team
Size, (b) Number of Sessions, and (c) Repair Policy.
5.2.3 What factors affecting process inputs have the greatest influence?
The results of modeling interval show that process inputs explain only ~25% of the variance in
inspection interval even after accounting for process structure factors. Clearly, other factors, apart
from the process structure and inputs affect the inspection interval. Some of these factors may
stem from interactions between multiple inspections, developer and reviewer calendars, and project
schedule and may reveal a whole new class of external variation which we will call the process
environment. These are beyond the scope of the data we observed for this study but they deserve
further investigation.
6 Conclusions
6.1 Intentions and Cautions
Our intention has been to empirically determine the influence upon defect detection effectiveness
and inspection interval resulting from changes in the structure of the software inspection process
(team size, number of sessions, and repair between multiple sessions). We have extended the
analysis to study as well the influence of process inputs.
All our results were obtained from one project, in one application domain, using one language
and environment, within one software organization. Therefore we cannot claim that our conclusions
have general applicability until our work has been replicated. We encourage anyone
interested to do so, and to facilitate their efforts we have described the experimental conditions
as carefully and thoroughly as possible and have provided the instrumentation online. (See
http://www.cs.umd.edu/users/harvey/variance.html.)
6.2 The Ratio of Signal to Noise in the Experimental Data
Our proposed models of the inspection process proved useful in explaining the variance in the data
gathered from our previous experiment. From them we could show that the variance was caused
mainly by factors other than the treatment variables. When the effects of these other factors were
removed, the result was a data set with significantly reduced variance across all of the treatments,
which improved the resolution of our experiment. After accounting for the variance (noise) caused
by the process inputs, we showed that the results of our previous experiment do not change (we
see the same signal).
This has several implications for the design and analysis of industrial experiments. Past studies
have cautioned that wide variation in the abilities of individual developers may mask effects due to
experimental treatments[8]. However, even with our relatively crude models, we managed to devise
a suitable means of accounting for individual variation when analyzing the experimental results.
But ultimately, we will get better results only if we can identify and control for factors affecting
reviewer and author performance.
Note also that the overall drop in defect data over time (see Figure 7) underscores the fact that
researchers doing long term studies must be aware that some characteristics of the processes they
are examining may change during the study.
6.3 The Need for a New Approach to Software Inspection
When process inputs are accounted for, the results of the experiment show that differences in
process structure have little effect on defect detection. This reinforces the results of our previous
experiment. That work showed that single session inspection by a small team is the most efficient
structure for the software inspection process (fewest personnel and shortest interval, with no loss
of effectiveness; see summary in Section 2.4 above).
If this is the case, and we believe that it is, then further efforts to increase defect detection rates
by modifying the structure of the software inspection process will produce little improvement.
Researchers should therefore concentrate on improving the small-team-single-session process by
finding better techniques for reviewers to carry it out (e.g., systematic reading techniques[1] for the
preparation step, meetingless techniques[9, 17, 11] for the collection step, etc.).
7 Future Work
7.1 Framework For Further Study
Our study revealed a number of influences affecting variation in the data, some internal and some
external to the inspection process.
Internal sources included factors from the process structure (the manner in which the steps are
organized into a process, e.g., team sizes, number of sessions, etc.), and from the process techniques
(the manner in which each step is carried out, the amount of effort expended, and the methods
used, e.g., reading techniques, computer support, etc.).
External sources included factors from the process inputs (differences in reviewers' abilities and in
code unit quality) and from the process environment (changes in schedules, priorities, workload,
etc.).
7.2 Premise for Improving Inspection Effectiveness
We believe that to develop better inspection methods we no longer need to work on the way the
steps in the inspection process are organized (structure), but must now investigate and improve
the way they are carried out by reviewers (technique).
7.3 Need for Continued Study of Inspection Interval
We have not yet adequately studied the factors affecting interval data. Some of the factors are
found in process structure (specifically repairing in between sessions) and process inputs, but much
of its variance is still unaccounted for. To address this, we must examine the process environment,
including workloads, deadlines, and priorities.
Acknowledgments
We would like to recognize Stephen Eick and Graham Wills for their contributions to the statistical
analysis. Art Caso's editing is greatly appreciated.
--R
IEEE Trans.
A methodology for collecting valid software engineering data.
The New S Language.
Graphical Methods For Data Analysis.
Hastie, editors. Statistical Models in S.
Model uncertainty
Substantiating programmer variability.
Computer brainstorms: More heads are better than one.
Estimating software fault content before coding.
An instrumented approach to improving software quality through formal technical review.
Correlation and Causality.
Successful Industrial Experimentation
Software research and switch software.
Two application languages in software produc- tion
Generalized Linear Models.
Electronic meeting systems to support group work.
Experimental software engineer- ing: A report on the state of the art
Understanding and improving time usage in software development.
An experiment to assess the cost-benefits of code inspections in large scale software development
Assessing software designs using capture-recapture methods
--TR
A Two-Person Inspection Method to Improve Programming Productivity
Electronic meeting systems
An experimental study of fault detection in user requirements documents
Estimating software fault content before coding
An improved inspection technique
Assessing Software Designs Using Capture-Recapture Methods
An experiment to assess the cost-benefits of code inspections in large scale software development
Experimental software engineering
An instrumented approach to improving software quality through formal technical review
Active design reviews
Statistical Models in S
Comparing Detection Methods for Software Requirements Inspections
--CTR
Frank Padberg, Empirical interval estimates for the defect content after an inspection, Proceedings of the 24th International Conference on Software Engineering, May 19-25, 2002, Orlando, Florida
Miyoung Shin , Amrit L. Goel, Empirical Data Modeling in Software Engineering Using Radial Basis Functions, IEEE Transactions on Software Engineering, v.26 n.6, p.567-576, June 2000
Dewayne E. Perry , Adam Porter , Michael W. Wade , Lawrence G. Votta , James Perpich, Reducing inspection interval in large-scale software development, IEEE Transactions on Software Engineering, v.28 n.7, p.695-705, July 2002
Trevor Cockram, Gaining Confidence in Software Inspection Using a Bayesian Belief Model, Software Quality Control, v.9 n.1, p.31-42, January 2001
Stefan Biffl , Michael Halling, Investigating the Defect Detection Effectiveness and Cost Benefit of Nominal Inspection Teams, IEEE Transactions on Software Engineering, v.29 n.5, p.385-397, May
Oliver Laitenberger , Colin Atkinson, Generalizing perspective-based inspection to handle object-oriented development artifacts, Proceedings of the 21st international conference on Software engineering, p.494-503, May 16-22, 1999, Los Angeles, California, United States
Oliver Laitenberger , Thomas Beil , Thilo Schwinn, An Industrial Case Study to Examine a Non-Traditional Inspection Implementation for Requirements Specifications, Empirical Software Engineering, v.7 n.4, p.345-374, December 2002
James Miller , Fraser Macdonald , John Ferguson, ASSISTing Management Decisions in the Software Inspection Process, Information Technology and Management, v.3 n.1-2, p.67-83, January 2002
Lionel C. Briand , Khaled El Emam , Bernd G. Freimut , Oliver Laitenberger, A Comprehensive Evaluation of Capture-Recapture Models for Estimating Software Defect Content, IEEE Transactions on Software Engineering, v.26 n.6, p.518-540, June 2000
Bruce C. Hungerford , Alan R. Hevner , Rosann W. Collins, Reviewing Software Diagrams: A Cognitive Study, IEEE Transactions on Software Engineering, v.30 n.2, p.82-96, February 2004
Andreas Zendler, A Preliminary Software Engineering Theory as Investigated by Published Experiments, Empirical Software Engineering, v.6 n.2, p.161-180, June 2001 | software process;software inspection;empirical studies;statistical models |
268898 | Asynchronous parallel algorithms for test set partitioned fault simulation. | We propose two new asynchronous parallel algorithms for test set partitioned fault simulation. The algorithms are based on a new two-stage approach to parallelizing fault simulation for sequential VLSI circuits in which the test set is partitioned among the available processors. These algorithms provide the same result as the previous synchronous two stage approach. However, due to the dynamic characteristics of these algorithms and due to the fact that there is very minimal redundant work, they run faster than the previous synchronous approach. A theoretical analysis comparing the various algorithms is also given to provide an insight into these algorithms. The implementations were done in MPI and are therefore portable to many parallel platforms. Results are shown for a shared memory multiprocessor. | Introduction
Fault simulation is an important step in the electronic design
process and is used to identify faults that cause erroneous
responses at the outputs of a circuit for a given test set. The
objective of a fault simulation algorithm is to find the fraction
of total faults in a sequential circuit that is detected by a given
set of input vectors (also referred to as fault coverage).
In its simplest form, a fault is injected into a logic circuit by
setting a line or a gate to a faulty value (1 or 0), and then the effects
of the fault are simulated using zero-delay logic simula-
tion. Most fault simulation algorithms are typically of O(n 2 )
time complexity, where n is the number of lines in the circuit.
Studies have shown that there is little hope of finding a linear-time
fault simulation algorithm [1].
In a typical fault simulator, the good circuit (fault-free cir-
cuit) and the faulty circuits are simulated for each test vec-
tor. If the output responses of a faulty circuit differ from those
of the good circuit, then the corresponding fault is detected,
and the fault can be dropped from the fault list, speeding up
simulation of subsequent test vectors. A fault simulator can
This research was supported in part by the Semiconductor Research Corporation
under Contract SRC 95-DP-109 and the Advanced Research Projects
Agency under contract DAA-H04-94-G-0273 and DABT63-95-C-0069 administered
by the Army Research Office.
be run in stand-alone mode to grade an existing test set, or it
can be interfaced with a test generator to reduce the number
of faults that must be explicitly targeted by the test generator.
In a random pattern environment, the fault simulator helps in
evaluating the fault coverage of a set of random patterns. In
either of the two environments, fault simulation can consume
a significant amount of time, especially in random pattern test-
ing, for which millions of vectors may have to be simulated.
Thus, parallel processing can be used to reduce the fault simulation
time significantly. We propose in this paper two scalable
asynchronous parallel fault simulation algorithms with
the test vector set partitioned across processors. This paper is
organized as follows. In Section 2, we describe the various
existing approaches to parallel fault simulation and we motivate
the need for a test set partitioned approach to parallel fault
simulation. In Section 3, we discuss our approach to test sequence
partitioning. In Section 4, we present the various algorithms
that have been implemented including the two proposed
asynchronous algorithms. A theoretical analysis of the
sequential and parallel algorithms proposed is given in Section
5 to provide a deeper insight into the algorithms. The results
are presented in Section 6, and all algorithms are compared.
Section 7 is the conclusion.
Parallel Fault Simulation
Due to the long execution times for large circuits, several
algorithms have been proposed for parallelizing sequential
circuit fault simulation [2]. A circuit partitioning approach to
parallel sequential circuit fault simulation is described in [3].
The algorithm was implemented on a shared-memory multi-
processor. The circuit is partitioned among the processors,
and since the circuit is evaluated level-by-level with barrier
synchronization at each level, the gates at each level should be
evenly distributed among the processors to balance the work-
loads. An average speedup of 2.16 was obtained for 8 proces-
sors, and the speedup for the ISCAS89 circuit s5378 was 3.29.
This approach is most suitable for a shared-memory architecture
for circuits with many levels of logic.
Algorithmic partitioning was proposed for concurrent fault
simulation in [4][5]. A pipelined algorithm was developed,
and specific functions were assigned to each processor. An
estimated speedup of 4 to 5 was reported for 14 processors,
based on software emulation of a message-passing multicomputer
[5]. The limitation of this approach is that it cannot take
advantage of a larger number of processors.
Fault partitioning is a more straightforward approach to
parallelizing fault simulation. With this approach [6][7], the
fault list is statically partitioned among all processors, and
each processor must simulate the good circuit and the faulty
circuits in its partition. Good circuit simulation on more than
one processor is obviously redundant computation. Alterna-
tively, if a shared-memory multiprocessor is used, the good
circuit may be simulated by just one processor, but the remaining
processors will lie idle while this processing is performed,
at least for the first time frame. Fault partitioning may also
be performed dynamically during fault simulation to even out
the workloads of the processors, at the expense of extra inter-processor
communication [6]. Speedups in the range 2.4-3.8
were obtained for static fault partitioning over 8 processors
for the larger ISCAS89 circuits having reasonably high fault
coverages (e.g., s5378 improvements
were obtained for these circuits with dynamic fault partitioning
due to the overheads of load redistribution [6]. However,
in both the static and dynamic fault partitioning approaches,
the shortest execution time will be bounded by the time to perform
good circuit logic simulation on a single processor.
One observation that can be made about the fault partitioning
experiments is that larger speedups are obtained for circuits
having lower fault coverages [6][7]. These results highlight
the fact that the potential speedup drops as the number of
faults simulated drops, since the good circuit evaluation takes
up a larger fraction of the computation time. The good circuit
evaluation is not parallelized in the fault partitioning ap-
proach, and therefore, speedups are limited. For example, if
good circuit logic simulation takes about 20 percent of the total
fault simulation time on a single processor, then by Am-
dahl's law, one cannot expect a speedup of more than 5 on any
number of processors.
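The Amdahl's-law bound quoted above can be verified with a one-line sketch (illustrative code, not from the paper):

```python
def amdahl_speedup(serial_fraction, processors):
    """Upper bound on speedup when serial_fraction of the work
    cannot be parallelized (Amdahl's law)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)
```

With good-circuit simulation at 20 percent of the total time and left unparallelized, the bound approaches 1/0.2 = 5 as the processor count grows; on 8 processors it is already below 3.4.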
Parallelization of good circuit logic simulation, or simply
logic simulation, is therefore very important and it is known to
be a difficult problem. Most implementations have not shown
an appreciable speedup. Parallelizing logic simulation based
on partitioning the circuit has been suggested but has not been
successful due to the high level of communication required between
parallel processors.
Recently, a new algorithm was proposed, where the test
vector set was partitioned among the processors [8]. We will
call this algorithm, SPITFIRE1. Fault simulation proceeds
in two stages. In the first stage, the fault list is partitioned
among the processors, and each processor performs fault simulation
using the fault list and test vectors in its partition. In
the second stage, the undetected fault lists from the first stage
are combined, and each processor simulates all faults in this
list using test vectors in its partition. Obviously, the test set
partitioning strategy provides a more scalable implementa-
tion, since the good circuit logic simulation is also distributed
over the processors. Test set partitioning is also used in the
parallel fault simulator Zamlog [9], but Zamlog assumes that
independent test sequences are provided which form the par-
tition. If only one test sequence is given, Zamlog does not
partition it. If, for example, only 4 independent sequences
are given, it cannot use more that 4 processors. Our work
does not make any assumption on the independence of test sequences
and hence is scalable to any number of processors.
It was shown in [8] [10] that the synchronous two-stage al-
gorithm, SPITFIRE1, performs better than fault partitioned
parallel approaches. Other synchronous algorithms, SPIT-
FIRE2 and SPITFIRE3, which are extensions of the SPIT-
FIRE1 algorithm, were presented in [10]. SPITFIRE3, in
particular, is a synchronous pipelined approach which helps
in overcoming any pessimism that may exist in a single or
two stage approach. We propose in this paper, two new asynchronous
algorithms, based on the test set partitioning strategy
for parallel fault simulation. We will demonstrate that
the asynchronous algorithms perform better than their synchronous
counterparts and shall provide reasons for the same.
The first algorithm, SPITFIRE4, is a two stage algorithm, and
it is a modification of the SPITFIRE1 algorithm described
above. It leaves the first stage unchanged, but the second stage
is implemented with asynchronous communication between
processors. The second algorithm, SPITFIRE5, obviates the
need for two stages. The entire parallel fault simulation strategy
is accomplished in one stage with asynchronous communication
between processors.
3 Test Sequence Partitioning
Parallel fault simulation through test sequence partitioning
is illustrated in Figure 1. We use the terms test set and
test sequence interchangeably here, and both are assumed to
be an ordered set of test vectors. The test set is partitioned
Figure 1. Test Sequence Partitioning (example: a test sequence of 5n vectors partitioned
across 5 processors, with a small overlap of vectors prepended to each segment).
among the available processors, and each processor performs
the good and faulty circuit simulations for vectors in its partition
only, starting from an all-unknown (X) state. Of course,
the state would not really be unknown for segments other than
the first if we did not partition the vectors. Since the unknown
state is a superset of the known state, the simulation will be
correct but may have more X values at the outputs than the serial
simulation. This is considered pessimistic simulation in
the sense that the parallel implementation produces an X at
some outputs which in fact are known 0 or 1. From a pure
logic simulation perspective, this pessimism may or may not
be acceptable. However, in the context of fault simulation,
the effect of the unknown values is that a few faults which are
detected in the serial simulation are not detected in the parallel
simulation. Rather than accept this small degree of pes-
simism, the test set partioning algorithm tries to correct it as
much as possible.
To compute the starting state for each test segment, a few
vectors are prepended to the segment from the preceding seg-
ment. This process creates an overlap of vectors between successive
segments, as shown in Figure 1. Our hypothesis is that
a few vectors can act as initializing vectors to bring the machine
to a state very close to the correct state, if not exactly the
same state. Even if the computed state is not close to the actual
state, it still has far fewer unknown values than exist when
starting from an all-unknown state. Results in [8] showed that
this approach indeed reduces the pessimism in the number of
fault detections. The number of initializing vectors required
depends on the circuit and how easy it is to initialize. If the
overlap is larger than necessary, redundant computations will
be performed in adjacent processors, and efficiency will be
lost. However, if the overlap is too small, some faults that are
detected by the test set may not be identified, and thus the fault
coverage reported may be overly pessimistic.
4 Parallel Test Partitioned Algorithms
We now describe four different algorithms for test set partitioned
parallel fault simulation. The first two algorithms are
parallel single-stage and two-stage synchronous approaches
which have been proposed earlier[8][10]. The third and
fourth algorithms are parallel two-stage and single-stage asynchronous
approaches.
4.1 SPITFIRE0: Single Stage Synchronous Algorithm
In this approach, the test set is partitioned across the processors
as described in the previous section. This algorithm is
presented as a base of reference for the various test set partitioning
approaches to be described later. The entire fault list is
allocated to each processor. Thus, each processor targets the
entire list of faults using a subset of the test vectors. Each processor
proceeds independently and drops the faults that it can
detect. The results are merged in the end.
4.2 SPITFIRE1: Synchronous Two Stage Algorithm
The simple algorithm described above is somewhat inefficient
in that many faults are very testable and are detected by
most if not all of the test segments. Simulating these faults on
all processors is a waste of time. Therefore, one can filter out
these easy-to-detect faults in an initial stage in which both the
fault list and the test set are partitioned among the processors.
This results in a two stage algorithm. In the first stage, each
processor targets a subset of the faults using a subset of the
test vectors, as illustrated in Figure 2. A large fraction of the
Figure 2. Partitioning in SPITFIRE1 (Stage 1: the fault list is partitioned into
F1, ..., F5, one per processor; Stage 2: each processor targets the remaining
undetected-fault lists U1, ..., U5).
detected faults are identified in this initial stage, and only the
remaining faults have to be simulated by all processors in the
second stage. This algorithm was proposed in [8]. The overall
algorithm is outlined below.
1. Partition test set T among p processors: T = {T 1 , T 2 , ..., T p }.
2. Partition fault list F among p processors: F = {F 1 , F 2 , ..., F p }.
3. Each processor P i performs the first stage of fault simulation
by applying T i to F i . Let the list of detected faults
and undetected faults in processor P i after fault simulation
be C i and U i respectively.
4. Each processor P i sends the detected fault list C i to processor P 1 .
5. Processor P 1 combines the detected fault lists from the
other processors by computing C = C 1 ∪ C 2 ∪ ... ∪ C p .
6. Processor P 1 now broadcasts the total detected fault list
C to all other processors.
7. Each processor P i finds the list of faults it needs to target
in the second stage: G i = (F − C) − U i .
8. Reset the circuit.
9. Each processor P i performs the second stage of fault
simulation by applying test segment T i to fault list G i .
10. Each processor P i sends the detected fault list D i to processor P 1 .
11. Processor P 1 combines the detected fault lists from the
other processors by computing D = D 1 ∪ D 2 ∪ ... ∪ D p . The result
after parallel fault simulation is the list of detected
faults C ∪ D, and it is now available in processor P 1 .
Note that G i = U 1 ∪ ... ∪ U i−1 ∪ U i+1 ∪ ... ∪ U p is an
equivalent expression for G i . The reason that a second stage is necessary
is because every test vector must eventually target every undetected
fault if it has not already been detected on some other
processor. Thus, the initial fault partitioning phase is used to
reduce redundant work that may arise in detecting easy-to-
detect faults. It can be observed though that one has to perform
two stages of good circuit simulation with the test segment
on any processor. However, the first stage eliminates
a lot of redundant work that might have been otherwise per-
formed. Hence, the two-stage approach is preferred. The test
set partitioning approach for parallel fault simulation is subject
to inaccuracies in the fault coverages reported only when
the circuit cannot be initialized quickly from an unknown state
at the beginning of each test segment. This problem can be
avoided if the test set is partitioned such that each segment
starts with an initialization sequence.
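The set manipulations of the two stages can be sketched as follows (an illustrative Python sketch; the fault-simulation kernel is abstracted as a function argument, and all names here are ours, not the authors'):

```python
def spitfire1(fault_partitions, simulate):
    """Two-stage test set partitioned fault simulation (set algebra only).

    fault_partitions: list of disjoint fault sets F1..Fp, one per processor.
    simulate(i, faults): returns the subset of `faults` that processor i's
    test segment Ti detects (stands in for the actual fault simulator).
    """
    p = len(fault_partitions)
    # Stage 1: processor i applies Ti to its own partition Fi.
    stage1 = [simulate(i, fault_partitions[i]) for i in range(p)]
    detected = set().union(*stage1)                      # C = C1 U ... U Cp
    all_faults = set().union(*fault_partitions)          # F
    # Stage 2: processor i targets the remaining faults it has not yet tried.
    for i in range(p):
        undetected_i = fault_partitions[i] - stage1[i]   # Ui
        gi = (all_faults - detected) - undetected_i      # Gi
        detected |= simulate(i, gi)                      # Di folded into result
    return detected
```

The sketch folds each Di into the detected set sequentially, whereas the parallel algorithm unions them at the end; the resulting detected-fault set is the same.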
The definitive redundant computation in the above approach
is the overlap of test segments for good circuit simu-
lation. However, if the overlap is small compared to the size
of the test segment assigned to a processor, then this redundant
computation will be negligible. Another source of redundant
computation is in the second stage when each processor
has to target the entire list of faults that remains (excluding the
faults that were left undetected in that processor). In this situ-
ation, when one of the processors detects a fault, it may drop
the fault from its fault list, but the other processors may continue
targeting the fault until they detect the fault or until they
complete the simulation (i.e., until the second stage of fault
simulation ends). This redundant computation overhead could
be reduced by broadcasting the fault identifier, corresponding
to a fault, to other processors as soon as the fault is detected.
However, the savings in computation might be offset by the
overhead in communication costs.
4.3 SPITFIRE4: A Two Stage Asynchronous Algorithm
We will now describe an asynchronous version of the Algorithm
SPITFIRE1. Consider the second stage of fault simulation
in Algorithm SPITFIRE1. All processors have to work
on almost the same list of undetected faults that was available
at the end of the first stage (except faults that it could not detect
in Stage 1). It would therefore be advantageous for each
processor to periodically communicate to all other processors
a list of any faults that it detects. Thus, each processor asynchronously
sends a list of new detected faults to all other processors
provided that it has detected at least MinFaultLimit
new faults. Each processor periodically probes for messages
from other processors and drops any faults that may be received
through messages. This helps in reducing the load on
a processor if it has not detected these faults yet. Thus, by
allowing each processor to asynchronously communicate detected
faults to all other processors, we dynamically reduce the
load on each processor.
It should be observed that in the first stage of Algorithm
SPITFIRE1, all processors are working on different sets of
faults. Hence, there is no need to communicate detected faults
during Stage 1, since this will not have any effect on the work-load
on each processor. It would make sense therefore to communicate
all detected faults only at the end of Stage 1. The
asynchronous algorithm used for fault simulation in Stage 2
by any processor P i is outlined below.
For each vector k in the test set T i
    FaultSimulate vector k
    if (NumberOfNewFaultsDetected >= MinFaultLimit) then
        Send the list of newly detected faults to all
        processors using a buffered asynchronous send
    end if
    while (CheckForAnyMessages())
        Receive new message using a blocking receive
        Drop newly received faults (if not dropped earlier)
    end while
end for
The routine CheckForAnyMessages() is a non-blocking
probe which returns a 1 only if there is a message pending to
be received.
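The Stage-2 loop above can be sketched in plain Python; here queue.Queue objects stand in for MPI's buffered asynchronous sends and non-blocking probes, and FaultSimulate is stubbed with a random detector, so the function names, the 0.05 detection rate, and the queue-based transport are all illustrative assumptions rather than the paper's implementation:

```python
import queue
import random

MIN_FAULT_LIMIT = 5   # the empirically chosen threshold from the paper

def stage2_worker(rank, vectors, fault_list, inboxes):
    """One LP's Stage-2 loop. inboxes[r] is a queue.Queue standing in
    for processor r's message channel; fault_list is this LP's set of
    still-undetected fault ids, mutated in place."""
    pending = []                              # detected but not yet broadcast
    for _vector in vectors:
        # "FaultSimulate vector k" -- stubbed here as a random detector
        detected = {f for f in fault_list if random.random() < 0.05}
        fault_list -= detected
        pending.extend(detected)
        # buffered asynchronous send, batched by MinFaultLimit
        if len(pending) >= MIN_FAULT_LIMIT:
            for r, box in enumerate(inboxes):
                if r != rank:
                    box.put(list(pending))
            pending.clear()
        # CheckForAnyMessages(): non-blocking probe, then drain
        while not inboxes[rank].empty():
            faults = inboxes[rank].get()      # blocking receive
            fault_list -= set(faults)         # drop, if not dropped earlier
```

In a real MPI implementation the box.put calls would be buffered asynchronous sends and the empty() check a non-blocking probe, exactly as the pseudocode prescribes.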
4.4 SPITFIRE5: A Single Stage Asynchronous Algorithm
It is possible to employ the same asynchronous communication
strategy used in the algorithm SPITFIRE4 for the algorithm
SPITFIRE0. In the latter algorithm, all processors start
with the same list of undetected faults, which is the entire list
of faults F . Only faults which each processor may detect get
dropped, and each processor continues to work on a large set
of undetected faults. Once again, it would make sense for each
processor to communicate detected faults periodically to other
processors provided that it has detected at least MinFaultLimit
new faults.
The value of MinFaultLimit is circuit dependent. It also depends
on the parallel platform that may be used for parallel
fault simulation. For a very small circuit with mostly easy to
detect faults, it may not make sense to set MinFaultLimit too
small, as this may result in too many messages being commu-
nicated. On the other hand, if the circuit is reasonably large,
or if faults are hard to detect, the granularity of computation
between two successive communication steps will be large.
Therefore, it may make sense to have a small value of Min-
FaultLimit. Similarly, it may be more expensive to communicate
often on a distributed parallel platform such as a network
of workstations. However, this factor may not matter as much
on a shared memory machine. Our results were obtained on a
shared memory multiprocessor where the value of MinFault-
Limit was empirically chosen to be 5 as we will show. This
means that whenever any processor detects at least 5 faults, it
will communicate the new faults detected over to other processors
to possibly reduce the load on other processors that may
still be working on these faults.
It is therefore important to ensure that the computation to
communication ratio be kept high and hence depending on
the parallel platform used, one needs to arrive at a compromise
at the frequency at which faults are communicated between
processors. One may also use the number of vectors
in the test set that have been simulated, say MinVectorLimit,
as a control parameter to regulate the frequency of synchro-
nization. This may be useful towards the end of fault simulation
when faults are detected very slowly. One can also
use both parameters, MinFaultLimit and MinVectorLimit,
simultaneously and communicate faults if either control parameter
is exceeded. As long as the granularity of the computation is
large enough compared to the communication costs involved,
one can expect a good performance with an asynchronous ap-
proach. If we assume that communication costs are zero, then
one would ideally communicate faults as soon as they are detected
to other processors. If the frequency of communication
is reduced, then one may have to perform more redundant
computation.
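One concrete reading of this dual-threshold policy is a single flush predicate; the MinFaultLimit default of 5 follows the paper's empirical choice, while the MinVectorLimit default of 50 is purely a hypothetical placeholder:

```python
def should_communicate(new_faults, vectors_since_last_send,
                       min_fault_limit=5, min_vector_limit=50):
    """Flush the buffer of newly detected faults when either control
    parameter is exceeded: enough new faults, or enough vectors since
    the last send (useful late in the run, when detections are rare)."""
    return (new_faults >= min_fault_limit
            or vectors_since_last_send >= min_vector_limit)
```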
There is a tradeoff between algorithms SPITFIRE4 and
SPITFIRE5. As we can see in SPITFIRE4, we have a completely
communication independent phase in Stage 1 followed
by an asynchronous communication intensive phase. However
in SPITFIRE5, we have only one stage of fault simula-
tion. This means that the good circuit simulation with test set
T i on processor P i needs to be performed only once. Thus, although
we may have continuous communication in algorithm
SPITFIRE5, we may obtain substantial savings by performing
only one stage of fault simulation. We will see in the next
section that this is indeed the case.
The same approach for asynchronous communication that
was discussed in the previous section is used for this algo-
rithm. However, the asynchronous communication is applied
to the first and only stage of fault simulation that is used for
this algorithm.
5 Analysis of Algorithms
A theoretical analysis of the various algorithms is now pre-
sented. We first provide an analysis of serial fault simulation
and then extend the analysis for various test set partitioning
approaches and for a fault partitioning approach.
5.1 Analysis of Sequential Fault Simulation
We first provide an analysis for a uniprocessor and then
proceed to an analysis for a multiprocessor situation.
Let us assume that there are N test vectors in the test
set $\{t_1, t_2, \ldots, t_N\}$. Usually in fault simulation, many faults are
detected early, and then the remaining faults are detected more
slowly. Let us assume that the fraction of faults detected by
vector k in the test set is given by $\alpha e^{-\lambda(k-1)}$, i.e., the fraction
of faults detected at each step falls exponentially. (Traditionally
one assumes that the fraction of faults left undetected after
the k-th vector has been simulated is given by $\alpha_1 e^{-\lambda k}$ [3] [11].
Hence, the fraction of faults detected by vector k is given by
$\alpha_1 e^{-\lambda(k-1)} - \alpha_1 e^{-\lambda k} = \alpha_1 (1 - e^{-\lambda}) e^{-\lambda(k-1)}$, which is of
the form $\alpha e^{-\lambda(k-1)}$.) Then the fraction of faults detected
after (n-1) vectors have been simulated is given by

$\sum_{k=1}^{n-1} \alpha e^{-\lambda(k-1)} = r(1 - e^{-\lambda(n-1)})$, where $r = \alpha/(1 - e^{-\lambda})$.

Hence, the number of undetected faults remaining, U(n), when
the n-th vector has to be simulated is given by

$U(n) = F(1 - r(1 - e^{-\lambda(n-1)}))$,

where F is the total number of faults in the circuit. Let us assume
that $\gamma$ is the unit of cost for execution in seconds per gate evaluation.
Assume that a fraction $\delta$ of the total number of gates
G in the circuit is being simulated for each fault. Then the
cost for simulating all the faulty circuits left with the n-th vector
is $\gamma \delta G\, U(n)$. Assume that a fraction $\beta$ of the gates G are
simulated for the good circuit logic simulation for each vector.
(Usually $\delta \ll \beta$. This is because, for fault simulation, only
the events that are triggered in the fanout cone starting from
the node where the fault was inserted need to be processed.)
Then the fault simulation cost for simulation of the n-th vector
is given by $\gamma(\beta G + \delta G\, U(n))$. Thus the total fault simulation
cost on a uniprocessor, $T_1(N, F, G)$, for simulating N vectors
is given by

$T_1(N, F, G) = \sum_{n=1}^{N} \gamma(\beta G + \delta G\, U(n)) = \gamma \beta G N + \gamma \delta G F \left[ (1-r)N + r\, \frac{1 - e^{-\lambda N}}{1 - e^{-\lambda}} \right]$.

Since N is large, we may approximate $1 - e^{-\lambda N}$ by 1. We then
find that $T_1(N, F, G) \approx \gamma \beta G N + \gamma \delta G F [(1-r)N + r/(1 - e^{-\lambda})]$.
Neglecting the r term in relation to N, we obtain

$T_1(N, F, G) \approx \gamma G N (\beta + \delta F (1-r))$.
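The closed form can be checked numerically against the vector-by-vector sum; the sketch below uses alpha, lam, beta, delta, gamma for the model parameters of this section, and the values in the final comment are arbitrary test inputs, not measurements from the paper:

```python
import math

def T1_exact(N, F, G, alpha, lam, beta, delta, gamma):
    """Uniprocessor cost summed vector by vector:
    U(n) = F*(1 - r*(1 - exp(-lam*(n-1)))) faults remain undetected
    when vector n is simulated, with r = alpha/(1 - exp(-lam))."""
    r = alpha / (1.0 - math.exp(-lam))
    total = 0.0
    for n in range(1, N + 1):
        U = F * (1.0 - r * (1.0 - math.exp(-lam * (n - 1))))
        total += gamma * (beta * G + delta * G * U)
    return total

def T1_approx(N, F, G, alpha, lam, beta, delta, gamma):
    """Closed form gamma*G*N*(beta + delta*F*(1-r)): the result of
    approximating 1 - exp(-lam*N) by 1 and dropping r relative to N."""
    r = alpha / (1.0 - math.exp(-lam))
    return gamma * G * N * (beta + delta * F * (1.0 - r))

# e.g. N=10000, F=1000, G=5000, alpha=0.05, lam=0.1,
# beta=0.3, delta=0.01, gamma=1e-7: the two agree within 1%.
```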
5.2 Analysis of Algorithm SPITFIRE0
In a single stage test set partitioned parallel algorithm, each
processor simulates $N/p + o$ vectors, where o is the vector overlap
between processors. Each processor also starts with the
same number of faults F. In a single stage synchronous algorithm,
communication occurs only at the end, when all processors
exchange all detected faults. This can be neglected
in comparison to the total execution time. Therefore, the total
execution cost $T_{p,sync,1stage}(N, F, G, o)$ can be approximated
as

$T_{p,sync,1stage}(N, F, G, o) \approx T_1(N/p + o, F, G) = \gamma G (N/p + o)(\beta + \delta F(1-r))$.

The above formula shows that this approach is scalable, but
that one has to pay for the redundant computation performed
with the vector overlap factor o.
5.3 Analysis of a Fault Partitioning Algorithm
In a fault partitioning algorithm, each processor simulates
N vectors and targets F/p faults. Hence, the execution cost in
this case is of the form

$T_{p,fault}(N, F, G) \approx \gamma G N (\beta + \delta (F/p)(1-r)) = \gamma \beta G N + \gamma \delta G (F/p) N (1-r)$.

This formula demonstrates that the fault partitioned approach
is not scalable, since the first term, which corresponds to the
good circuit logic simulation performed, does not scale across
the processors. Eventually, this factor will bound the speedup
as the number of processors is increased.
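This non-scalability is Amdahl's law applied to the replicated good-circuit term; writing good_cost for the per-run good circuit cost and faulty_cost for the faulty circuit cost (both in the notation of Section 5.1), a two-line model makes the saturation visible; the function name and the 1:9 ratio in the note are illustrative:

```python
def speedup_fault_partition(p, good_cost, faulty_cost):
    """Speedup on p processors when only the faulty-circuit term is
    divided by p while the good-circuit term is replicated everywhere.
    The speedup saturates at (good_cost + faulty_cost) / good_cost."""
    return (good_cost + faulty_cost) / (good_cost + faulty_cost / p)
```

With good_cost : faulty_cost = 1 : 9, eight processors yield a speedup below 5, and no processor count can push it past 10.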
Table 1. Uniprocessor Execution Times
(Columns: Circuit, Faults, Gates, Primary Inputs, Primary Outputs, Flip Flops; for the random test set of size 10000: Time (secs) and Faults Detected; for the actual test set from an ATPG tool: Test Set Size, Time (secs), and Faults Detected.)
Table 2. Execution Time on 8 processors of the SUN-SparcCenter 1000E shared memory multiprocessor

                            Random Test Set                      ATPG Test Set
                  Faults   Execution Time (secs)       Faults   Execution Time (secs)
Circuit   Faults  Detected  SPF0   SPF1   SPF4   SPF5  Detected  SPF0   SPF1   SPF4   SPF5
s526         555       52   16.7   10.4   13.3    8.2       445    8.5    5.1    3.0    2.3
div16       2141     1640   19.2   26.0   24.8   13.4      1801   53.7   28.3   21.1   13.2
pcont2     11300     6829  113.8  137.7  140.7  104.4      6837  116.3  110.3   75.5   52.6
piir8      19920    15004  285.8  271.3  265.0  230.9     15069   51.1   48.4   44.3   33.6
5.4 Analysis of Algorithm SPITFIRE5
For a single stage asynchronous algorithm, we may assume
that, due to communication, all processors are aware of all detected
faults before a test vector is input. We assume for the
purposes of analysis that MinFaultLimit = 1. This means
that a processor broadcasts the new faults that it has dropped
every time it detects at least 1 fault. If the faults detected by all
processors at each stage are all different, then the fraction of
faults detected after (n-1) vectors have been simulated is given
by $pr(1 - e^{-\lambda(n-1)})$. In this case,

$T_{p,async,1stage}(N, F, G) \approx \gamma G (N/p)(\beta + \delta F(1 - pr)) + C_{p,async,1stage}(N, F, G)$,

where $C_{p,async,1stage}(N, F, G)$ is the communication cost
involved. In reality, some faults are multiply detected by
more than one processor, and the factor $pr$ in the above formula
will be smaller. Also, since there is some delay in
each processor obtaining information about faults dropped on
other processors, this factor may be even smaller. Also, if
MinFaultLimit is large, this delay may be even longer, and
the factor $pr$ would have to be scaled down further. A smaller
value of $pr$ indicates a longer execution time. Let us assume
that the communication cost is of the form $\tau_1 + \tau_2 l$, where $\tau_1$ is
the startup cost in seconds, $\tau_2$ is the cost in seconds per computer
word, and $l$ is the length of the message. Since roughly $F r(1 - e^{-\lambda n})$
faults have been detected after the n-th vector has been simulated
by each of the p processors, the total cost of communication
is of the form

$C_{p,async,1stage}(N, F, G) \approx \tau_1 (N/p) + \tau_2 F$.

Note that if MinFaultLimit is large, then the number of
messages is smaller, and the term $\tau_1 (N/p)$ would be
scaled down. However, the term $\tau_2 F$ would remain
unchanged, since that is the total amount of data that is
communicated. Clearly, there is a tradeoff involved in increasing
the value of MinFaultLimit.
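The startup/data split behind this tradeoff can be made concrete with a toy batching model: ceil(F/MinFaultLimit) messages carrying F fault identifiers in total, with tau1 the per-message startup cost and tau2 the per-word cost as in the model above. The function and its parameter values are illustrative:

```python
def comm_cost(num_faults, min_fault_limit, tau1, tau2):
    """Cost of broadcasting num_faults detected-fault ids in batches of
    min_fault_limit: the startup term scales with the number of
    messages, the per-word term with the total amount of data."""
    num_messages = -(-num_faults // min_fault_limit)   # ceiling division
    return tau1 * num_messages + tau2 * num_faults
```

Doubling MinFaultLimit roughly halves the startup term while leaving the data term untouched, which is exactly the tradeoff described above.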
5.5 Analysis of SPITFIRE1 and SPITFIRE4
It is easy to show, using similar analysis, that the total
execution cost for the two stage algorithms, SPITFIRE1
and SPITFIRE4, can be obtained by replacing F by the number
of faults left undetected after Stage 1, of the form
$F(1 - r(1 - e^{-\lambda(N/p + o)}))$, and by counting the good circuit
simulation cost twice, in the formulas for the execution cost
for the algorithms SPITFIRE0 and SPITFIRE5, respectively.
It is apparent that in the two stage algorithms, we have
effectively reduced the faulty circuit simulation term but could
pay a small price in having to perform two stages of good
circuit logic simulation. In addition, the $-pr$ term helps in
reducing the execution time for the asynchronous two stage
algorithm.
We see from the above discussion that the asynchronous algorithms
would possibly have the lowest execution time over
Table 3. Execution Time and Speedups on SUN-SparcCenter 1000E with Algorithm SPITFIRE5 on Random Test Sets

            Random Test Set Execution Times (seconds) and Speedups
         Uniprocessor    2 Processors       4 Processors       8 Processors
Circuit      Time        Time   Speedup     Time   Speedup     Time   Speedup
s526        57.39       28.50    2.01      14.76    3.88       8.25    6.95
mult16      143.3       64.36    2.23      33.21    4.31      20.37    7.03
div16       110.6       48.95    2.25      24.76    4.46      13.38    8.26
Table 4. Execution Time and Speedups on SUN-SparcCenter 1000E with Algorithm SPITFIRE5 on ATPG Test Sets

             ATPG Test Set Execution Times (seconds) and Speedups
         Uniprocessor    2 Processors       4 Processors       8 Processors
Circuit      Time        Time   Speedup     Time   Speedup     Time   Speedup
s526        10.92        5.70    1.91       4.03    2.70       2.26    4.83
pcont2      324.4      168.27    1.93      96.15    3.37      52.56    6.17
piir8       127.4       74.10    1.72      45.1     2.82      33.6     3.79
their synchronous counterparts. There are two factors contributing
to this. The first is the $N/p$ term, which corresponds
to the partitioning of the test set across processors. The second
is the $-pr$ term, corresponding to the fact that a processor
now has information about the faults that the other processors
have dropped due to asynchronous communication.
Between the two asynchronous algorithms, the single stage
asynchronous algorithm may win over the two stage algorithm
simply because only one stage of good circuit logic simulation
is performed in this case. The communication cost factor
will depend on the platform being used and may have a
different impact on different parallel platforms. However, as
long as the overall communication cost is small compared to
the overall execution cost, the single stage asynchronous algorithm
should run faster than all other approaches.
6 Experimental Results
The four algorithms described in the paper were implemented
using the MPI [12] library. The implementation is
portable to any parallel platform which provides support for
the MPI communication library. Results were obtained on a
SUN-SparcCenter 1000E shared memory multiprocessor with
8 processors and 512 MB of memory. Results are provided for
8 circuits, viz., s5378, s526, s1423, am2910, pcont2, piir8,
mult16, and div16. The circuits were chosen because the test
sets available for them were reasonably large. The mult16
circuit is a 16-bit two's complement multiplier; div16 is a
16-bit divider; am2910 is a 12-bit microprogram sequencer;
pcont2 is an 8-bit parallel controller used in DSP applica-
tions; and piir8 is an 8-point infinite impulse response filter.
s5378, s526, and s1423 are circuits taken from the ISCAS89
benchmark suite. Parallel fault simulation was done with a
random test set of size 10,000 (i.e. a sequence of 10,000 randomly
generated input test vectors) and with actual test sets
obtained from an Automatic Test Pattern Generation (ATPG)
tool [13]. This shows the performance of the parallel fault
simulator both in a random test pattern environment and in a
fault grading environment. Table 1 shows the characteristics
of the circuits used, and the timings on a single processor for
both types of input test sets. The number of faults detected are
also shown.
Table 2 shows the execution times in seconds on 8 processors,
on the SUN-SparcCenter 1000E, obtained using all four
algorithms discussed in the previous section. SPF0, SPF1,
SPF4 and SPF5 refer to the algorithms SPITFIRE0, SPIT-
FIRE1, SPITFIRE4, and SPITFIRE5, respectively. The
same number of faults is detected by all algorithms. However,
there may be some pessimism in the number of faults detected,
which can be eliminated by using the pipelined approach
of the algorithm SPITFIRE3 [10].
Table 5. Execution Times with SPITFIRE5 on 8 processors
for varying MinFaultLimit

              Execution Times (secs)
MinFaultLimit:      2       5       10
s526              2.43    2.26    2.33
div16            12.88   13.20   13.32

A value of 5 was used for MinFaultLimit in SPF4 and SPF5,
as will be explained in the next paragraph. It can be seen from
the table that the execution times get progressively smaller, in
general, as we proceed from SPF0 to SPF1 to SPF4 to SPF5.
Sometimes SPF0 is better than SPF1, and sometimes SPF1 is
better than SPF4. The lowest execution time is shown in bold.
It can be seen that SPF5 always has the lowest execution time
among all the algorithms. This shows that the algorithm SPITFIRE5
provides the best performance. We thus see that performing
the single stage of fault simulation and simultaneously
allowing asynchronous communication between processors
provides substantial savings in terms of execution time.
Tables 3 and 4 show the execution times and speedups obtained
with the random test sets and the ATPG test sets, respectively,
on 1, 2, 4, and 8 processors for the algorithm SPIT-
FIRE5. It can be seen that the algorithm is highly scalable
and provides excellent speedups. The combined effect of test
set partitioning coupled with asynchronous communication of
detected faults has resulted in superlinear speedups in some
cases. The results from an experiment to determine the value
of MinFaultLimit are shown in Table 5. The experiment was
performed with 3 values of MinFaultLimit, viz., 2, 5, and 10,
using the actual test sets obtained from a test generator. It was
observed that the best performance was obtained with a value
of 5 for most circuits on 8 processors.
7 Conclusion
Parallel fault simulation has been a difficult problem due
to the limited scalability and parallelism that previous algorithms
could extract. Parallelization in fault simulation is
limited by the serial logic simulation of the fault-free ma-
chine. By partitioning the test sets across processors, we have
achieved a scalable parallel implementation and have thus
avoided the serial logic simulation bottleneck. By performing
asynchronous communication, we have dynamically reduced
the load on all processors, and the redundant computation
that may have otherwise occurred. We have thus presented
two asynchronous algorithms for parallel test set partitioned
fault simulation. Both asynchronous algorithms provide
better performance than their synchronous counterparts
on a shared memory multiprocessor. The single stage asynchronous
test set partitioned parallel algorithm was shown to
provide better performance than the two stage test set partitioned
asynchronous parallel algorithm, although the two
stage synchronous algorithm was better than the single stage
synchronous algorithm.
--R
"Is there hope for linear time fault simulation,"
Parallel Algorithms for VLSI Computer-Aided De- sign
"Parallel test generation for sequential circuits on general-purpose multiprocessors,"
"Fault simulation in a pipelined multiprocessor system,"
"Concurrent fault simulation of logic gates and memory blocks on message passing multicomput- ers,"
"A parallel algorithm for fault simulation based on PROOFS,"
"ZAMBEZI: A parallel pattern parallel fault sequential circuit fault simulator,"
"Overcoming the serial logic simulation bottleneck in parallel fault simulation,"
"Zamlog: A parallel algorithm for fault simulation based on Zambezi,"
"SPITFIRE: Scalable Parallel Algorithms for Test Partitioned Fault Simulation,"
"Distributed fault simulation with vector set partitioning,"
Portable Parallel Programming with the Message Passing Interface.
"Automatic test generation using genetically-engineered distinguishing se- quences,"
--TR
Parallel test generation for sequential circuits on general-purpose multiprocessors
Concurrent fault simulation of logic gates and memory blocks on message passing multicomputers
Parallel algorithms for VLSI computer-aided design
Using MPI
<italic>Zamlog</italic>
A parallel algorithm for fault simulation based on PROOFS
Overcoming the Serial Logic Simulation Bottleneck in Parallel Fault Simulation
Automatic test generation using genetically-engineered distinguishing sequences
ZAMBEZI
--CTR
Victor Kim , Prithviraj Banerjee, Parallel algorithms for power estimation, Proceedings of the 35th annual conference on Design automation, p.672-677, June 15-19, 1998, San Francisco, California, United States | circuit analysis computing;synchronous two stage approach;test set partitioned fault simulation;MPI;dynamic characteristics;shared memory multiprocessor;redundant work;sequential VLSI circuits;circuit CAD;software portability;Message Passing Interface;asynchronous parallel algorithms |
268906 | Optimistic distributed simulation based on transitive dependency tracking. | In traditional optimistic distributed simulation protocols, a logical process (LP) receiving a straggler rolls back and sends out anti-messages. The receiver of an anti-message may also roll back and send out more anti-messages. So a single straggler may result in a large number of anti-messages and multiple rollbacks of some LPs. In the authors' protocol, an LP receiving a straggler broadcasts its rollback. On receiving this announcement, other LPs may roll back but they do not announce their rollbacks. So each LP rolls back at most once in response to each straggler. Anti-messages are not used. This eliminates the need for output queues and results in simple memory management. It also eliminates the problem of cascading rollbacks and echoing, and results in faster simulation. All this is achieved by a scheme for maintaining transitive dependency information. The cost incurred includes the tagging of each message with extra dependency information and the increased processing time upon receiving a message. They also present the similarities between the two areas of distributed simulation and distributed recovery. They show how the solutions for one area can be applied to the other area. | Introduction
We modify the time warp algorithm to quickly stop
the spread of erroneous computation. Our scheme
does not require output queues and anti-messages.
This results in less memory overhead and simple memory
management algorithms. It also eliminates the
problem of cascading rollbacks and echoing [15], resulting
in faster simulation. We use aggressive cancellation.
Our protocol is an adaptation of a similar protocol
for the problem of distributed recovery [4, 21]. (This work was
supported in part by the NSF Grants CCR-9520540 and
ECS-9414780, a TRW faculty assistantship award, a General
Motors Fellowship, and an IBM grant.) We
illustrate the main concept behind this scheme with
the help of Figure 1. In the figure, horizontal arrows
show the direction of the simulation time. Messages
are shown by the inter-process directed arrows. Circles
represent states. State transition is caused by acting
on the message associated with the incoming arrow.
For example, the state transition of P1 from s10 to
happened when P1 acted on m0. In the time
warp scheme, when a logical process (LP) P2 receives
a straggler (i.e., a message which schedules an event in
its past), it rolls back the state s20 and sends an anti-message
corresponding to message m2. On receiving
this anti-message, P1 rolls back state s10 and sends
an anti-message corresponding to m1. It then acts on
the next message in its message queue, which happens
to be m0. On receiving the anti-message for m1, P0
rolls back s00 and sends an anti-message for m0. On
receiving this anti-message, P1 rolls back s11.
In our scheme, transitive dependency information is
maintained with all states and messages. After rolling
back s20 due to a straggler, P2 broadcasts that s20
has been rolled back. On receiving this announce-
ment, P1 rolls back s10 as it finds that s10 is transitively
dependent on s20. P1 also finds that m0 is
transitively dependent on s20 and discards it. Similarly
P0 rolls back s00 on receiving the broadcast.
We see that P1 was able to discard m0 faster compared
to the previous scheme. Even P0 would likely
receive the broadcast faster than receiving the anti-message
for m1 as that can be sent only after P1 has
rolled back s10. Therefore, simulation should proceed
faster. As explained later, we use incarnation number
to distinguish between two states with the same
timestamp, one of which is committed and the other
is rolled back.
We only need the LP that receives a straggler to
broadcast the timestamp of the straggler. Every other
LP can determine whether they need to roll back or
not by comparing their local dependency information
with the broadcast timestamp. Other LPs that roll
back in response to a rollback announcement do not
send any announcement or anti-messages. Hence, each
rolls back at most once in response to a strag-
gler, and the problem of multiple rollbacks is avoided.
Several schemes have been proposed to minimize the
Figure
1: A Distributed Simulation.
spread of erroneous computations. A survey of these
schemes can be found in [7]. The Filter protocol by
Prakash and Subramanian [17] is most closely related
to our work. It maintains a list of assumptions with
each message, which describes the class of straggler
events that could cause this message to be canceled.
It maintains one assumption per channel, whereas our
protocol can be viewed as maintaining one assumption
per LP. In the worst case, Filter tags each message
with O(n 2 ) integers whereas our protocol tags O(n)
integers, where n is the number of LPs in the sys-
tem. Since for some applications even O(n)-tagging
may not be acceptable, we also describe techniques to
further reduce this overhead. If a subset of LPs interact
mostly with each other, then, for most of the time,
the tag size of their messages will be bounded by the
size of the subset.
The paper is organized as follows. Section 2 describes
the basic model of simulation; Section 3 introduces
the happen before relation between states
and the simulation vector which serves as the basis of
our optimistic simulation protocol; Section 4 describes
the protocol and gives a correctness proof; Section 5
presents optimization techniques to reduce the overhead
of the protocol; Section 6 compares distributed
simulation with distributed recovery.
2 Model of Simulation
We consider event-driven optimistic simulation.
The execution of an LP consists of a sequence of states
where each state transition is caused by the execution
of an event. If there are multiple events scheduled
at the same time, it can execute those events in an
arbitrary order. In addition to causing a state transi-
tion, executing an event may also schedule new events
for other LPs (or the local LP) by sending messages.
When LP P1 acts on a message from P 2, P1 becomes
dependent on P 2. This dependency relation is transitive
The arrival of a straggler causes an LP to roll back.
A state that is rolled back, or is transitively dependent
on a rolled back state is called an orphan state. A
message sent from an orphan state is called an orphan
message. For correctness of a simulation, all orphan
states must be rolled back and all orphan messages
must be discarded.
An example of a distributed simulation is shown
in Figure 2. Numbers shown in parentheses are either
the virtual times of states or the virtual times of
scheduled events carried by messages. Solid lines indicate
useful computations, while dashed lines indicate
rolled back computations. In Figure 2(a), s00 schedules
an event for P1 at time 5 by sending message
m0. P1 optimistically executes this event, resulting
in a state transition from s10 to s11, and schedules an
event for P2 at time 7 by sending message m1. Then
receives message m2 which schedules an event at
time 2 and is detected as a straggler. Execution after
the arrival of this straggler is shown in Figure 2(b).
P1 rolls back, restores s10, takes actions needed for
maintaining the correctness of the simulation (to be
described later) and restarts from state r10. Then it
broadcasts a rollback announcement (shown by dotted
arrows), acts on m2, and then acts on m0. Upon
receiving the rollback announcement from P 1, P2 realizes
that it is dependent on a rolled back state and
so it also rolls back, restores state s20, takes actions
needed, and restarts from state r20. Finally, the orphan
message m1 is discarded by P 2.
3 Dependency Tracking
From here on, i, j refer to LP numbers; k refers to
incarnation numbers; s, u, w refer to states; P i refers
to logical process i; s:p refers to the number associated
with the LP to which s belongs, that is, s belongs to P s:p ;
m refers to a message and e refers to an event.
3.1 Happen Before Relation
Lamport defined the happen before(!) relation between
events in a rollback-free distributed computation
[12]. To take rollbacks into account, we extend
this relation. As in [4, 21], we define it for the states.
For any two states s and u, s ! u is the transitive
closure of the relation defined by the following three
conditions:
1. s:p = u:p and s immediately precedes u.
2. s is the state restored after a roll-back
and u is the state after P u:p has taken the
actions needed to maintain the correctness of simulation
despite the rollbacks. For example, in Figure 2(b), s10 ! r10.

Figure 2: Using Simulation Vector for Distributed Simulation. (a) Pre-straggler computation. (b) Post-straggler computation.
3. s is the sender of a message m and u is the re-
ceiver's state after the event scheduled by m is
executed.
For example, in Figure 2(a), s10 ! s11 and s00 !
s21, and in Figure 2(b) s11 6! r10. Saying s happened
before u is equivalent to saying that u is transitively
dependent on s.
For our protocol, "actions needed to maintain the
correctness of simulation" include broadcasting a roll-back
announcement and incrementing the incarnation
number. For other protocols, the actions may be dif-
ferent. For example, in time warp, these actions include
the sending of anti-messages. Our definition of
happen before is independent of such actions. The
terms "rollback announcements" and "tokens" will be
used interchangeably. Tokens do not contribute to the
happen before relation. So if u receives a token from
s, u does not become transitively dependent on s due
to this token.
3.2 Simulation Vector
A vector clock is a vector of size n where n is the
number of processes in the system [16]. Each vector
entry is a timestamp that usually counts the number
of send and receive events of a process. In the
context of distributed simulation, we modify and extend
the notion of vector clock, and define a Simulation
Vector (SV) as follows. To maintain dependency
in the presence of rollbacks, we extend each entry to
contain both a timestamp and an incarnation number
[19]. The timestamp in the i th entry of the SV of P i
corresponds to the virtual time of P i . The timestamp
in the j th entry corresponds to the virtual time
of the latest state of P j on which P i depends. The
incarnation number in the i th entry is equal to the
number of times P i has rolled back. The incarnation
number in the j th entry is equal to the highest incarnation
number of P j on which P i depends. Let entry
en be a tuple (incarnation v, timestamp t). We define
a lexicographical ordering between entries as follows:
en1 < en2 iff (v1 < v2) or ((v1 = v2) and (t1 < t2)).
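This entry order is exactly lexicographic comparison of (incarnation, timestamp) pairs, which coincides with Python's built-in tuple ordering; a throwaway check:

```python
def entry_less(en1, en2):
    """en1 < en2 iff v1 < v2, or v1 == v2 and t1 < t2: the
    lexicographic order on (incarnation, timestamp) entries."""
    (v1, t1), (v2, t2) = en1, en2
    return v1 < v2 or (v1 == v2 and t1 < t2)
```

So an entry from a newer incarnation dominates regardless of timestamps, e.g. (1, 2) is greater than (0, 1000).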
Simulation vectors are used to maintain transitive
dependency information. Suppose P i schedules an
event e for P j at time t by sending a message m. P i
attaches its current SV to m. By "virtual time of m",
we mean the scheduled time of the event e. If m is
neither an orphan nor a straggler, it is kept in the incoming
queue by P j . When the event corresponding
to m is executed, P j updates its SV with m's SV by
taking the componentwise lexicographical maximum.
It then sets its virtual time (denoted by the j th
timestamp in its SV) to the virtual time of m. A formal
description of the SV protocol is given in Figure
3. Examples of SV are shown in Figure 2 where the
SV of each state is shown in the box near it.
The SV has properties similar to a vector clock.
It can be used to detect the transitive dependencies
between states. The following theorem shows the relationship
between virtual time and SV.
Theorem 1 The timestamp in the i th entry of P i 's
SV corresponds to the virtual time of P i .
var sv : array [0..n-1] of entry   /* entry = (incarnation, timestamp) */

Send_message(data, t) :            /* t : time at which m should be executed */
    m.ts := t ;
    m.sv := sv ;
    send (m.data, m.ts, m.sv) ;

Execute_message(m.data, m.ts, m.sv) :
    forall j : sv[j] := max(sv[j], m.sv[j]) ;   /* componentwise lexicographic max */
    sv[i].ts := m.ts ;             /* execute the event scheduled by m */

Rollback(s) :                      /* s : the restored state */
    sv := s.sv ;
    sv[i].inc := sv[i].inc + 1 ;

Figure 3: Formal description of the Simulation Vector protocol
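The update rules of Figure 3 can be exercised with a small self-contained class; this is a sketch in which on_rollback only restores the timestamp and bumps the incarnation (the full protocol also restores the saved SV of the restored state), and the method names are invented for illustration:

```python
class SimulationVector:
    """Per-LP simulation vector: entry j is the (incarnation, timestamp)
    of the latest state of LP j that this LP transitively depends on."""

    def __init__(self, i, n):
        self.i = i                  # index of the owning LP
        self.sv = [(0, 0)] * n      # n = number of LPs

    def on_send(self, ts):
        """Tag an outgoing message that schedules an event at time ts."""
        return ts, list(self.sv)

    def on_execute(self, m_ts, m_sv):
        """Execute the event scheduled by a received message:
        componentwise lexicographic max, then own timestamp := m_ts."""
        self.sv = [max(a, b) for a, b in zip(self.sv, m_sv)]
        inc, _ = self.sv[self.i]
        self.sv[self.i] = (inc, m_ts)

    def on_rollback(self, restored_ts):
        """Roll back to virtual time restored_ts in a new incarnation."""
        inc, _ = self.sv[self.i]
        self.sv[self.i] = (inc + 1, restored_ts)
```

For instance, if an LP P0 at virtual time 1 sends a message scheduling an event at time 5 on P1, then after executing it P1's SV reads [(0,1), (0,5)], and a subsequent rollback to time 0 leaves P1 at [(0,1), (1,0)].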
Proof. By Induction. The above claim is true for
the initial state of P i . While executing a message, the
virtual time of P i is correctly set. After a rollback,
virtual time of the restored state remains unchanged.
Let s:sv denote the SV of P s:p in state s. We define
the ordering between two SV's c and d as follows: c <= d iff,
for all j, c[j] <= d[j], where entries are compared lexicographically.
In P i 's SV, the j th timestamp denotes the maximum
virtual time of P j on which P i depends. This timestamp
should not be greater than P i 's own virtual time.
Lemma 1 formalizes the above notion.
Lemma 1 The timestamp in the i th entry of the SV
of a state of P i has the highest value among all the
timestamps in this SV.
Proof. By induction. The lemma is true for the initial
state of P i . Assume that state s of P j sent a message
m to P i . State u of P i executed m, resulting in
state w. By the induction hypothesis, s:sv[j]:ts and
u:sv[i]:ts are the highest timestamps in their respective
SV's. So the maximum of these two timestamps is not
less than any timestamp in w:sv after the max operation
in Execute message. Now m:ts, the virtual time of message
m, is not less than the virtual time of the state s
sending the message. It is also not less than the virtual
time of the state u acting on the message; otherwise,
it would have caused a rollback. So by Theorem 1,
m:ts is not less than the maximum of s:sv[j]:ts and
u:sv[i]:ts. Hence setting w:sv[i]:ts to m:ts preserves
the above property. No other routine changes the
timestamps.
The following two lemmas give the relationship between
the SV and the happen before relation.
Lemma 2 If state s happens before state u, then s:sv
is less than or equal to u:sv.
Proof. By induction. Consider any two states s and u
such that s happens before u by applying one of the
three rules in the definition of happen before. In case
of rule 1, state s is changed to state u by acting on
a message m. The update of the SV by taking the
maximum in the routine Execute message maintains
the above property. Now consider the next action in
which u:sv[u:p]:ts is set to m:ts. Since virtual time
of m cannot be less than the virtual time of state s
executing it, this operation also maintains the above
property.
In case of rule 2, in routine Rollback, the update of
the SV by incrementing the incarnation number preserves
the above property. The case of rule 3 is similar
to that of the rule 1. Let state w change to state u by
acting on the message m sent by state s. By lemma 1,
in m's SV, s:p th timestamp is not less than the u:p th
timestamp. Also the virtual time of m is not less than
the s:p th timestamp in its SV. Hence setting the i th
timestamp to the virtual time of m, after taking max,
preserves the above property.
The following lemma shows that LPs acquire timestamps
by becoming dependent on other LPs. This
property is later used to detect orphans.
Lemma 3 If the j th timestamp in state w's SV is not
minus one (an impossible virtual time), then w is
dependent on a state u of P j whose virtual time is
w:sv[j]:ts .
Proof. By induction. Initialize trivially satisfies the
above property. In Execute message, let x be the state
that sends m and let state s change to state w by
acting on m. By induction hypothesis, x and s satisfy
the lemma.
In taking the maximum, suppose the j th entry from x is
selected. If j is x:p then x itself plays the role of u.
Else, by the induction hypothesis, either x:sv[j]:ts is -1
or there is a state u of P j with virtual time x:sv[j]:ts
such that u happens before x. Hence either
w:sv[j]:ts is -1 or, by transitivity, u happens before w.
The same argument also applies to the case where the
j th entry comes from s.
In case of Rollback, let s be the state restored and
let w be the state resulting from s by taking the actions
needed for the correct simulation. By induction
hypothesis, s satisfies the lemma. Now s:sv and w:sv
differ only in w:p th entry and all states that happened
before s also happened before w. Hence w satisfies the
lemma.
var sv : array [0..n-1] of entry ; /* simulation vector */
iet : array [0..n-1] of set of entry ; /* incarnation end table */

Receive message (m:data, m:ts, m:sv) :
if m is an orphan (detected via the iet) then discard m
else if m:ts < sv[i]:ts then /* m is a straggler */
Broadcast token ; /* rollback announcement; P i also
receives its own broadcast and rolls back. */
Block till all LPs acknowledge broadcast ;

Execute message :
m := one of the messages with the lowest value of m:ts ;
Act on the event scheduled by m ;

Receive token (v; t) from P j :
if not (C1) then Rollback(j; (v; t)) ;

Rollback (j; (v; t)) :
Save the iet ;
Restore the latest state s such that s:sv[j] <= (v; t) ; /* (C1) */
Discard the states that follow s ;
Restore the saved iet ; sv[i]:inc ++ ;

Figure 4: Our protocol for distributed simulation
4 The Protocol
Our protocol for distributed simulation is shown
in Figure 4. To keep the presentation and correctness
proof clear, optimization techniques for reducing
overhead are not included in this protocol. They are
described in the next section. Besides a simulation
vector, each LP P i also maintains an incarnation end
table (iet). The j th component of iet is a set of entries
of the form (k; ts), where ts is the timestamp of the
straggler that caused the rollback of the k th incarnation
of P j . All states of the k th incarnation of P j with
timestamp greater than ts have been rolled back. The
iet allows an LP to detect orphan messages.
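As a concrete illustration, orphan detection from the iet can be sketched as follows (a hedged sketch with our own data layout: iet[j] is a list of (k, ts) pairs, and a message's SV is a list of (incarnation, timestamp) tuples):

```python
def is_orphan(m_sv, iet):
    """m is an orphan iff, for some LP j, its j-th entry records a
    dependency on a state of incarnation k of P_j with a timestamp
    greater than the straggler time ts recorded in iet[j]."""
    for j, (inc, ts) in enumerate(m_sv):
        for (k, cut) in iet[j]:
            if inc == k and ts > cut:
                return True
    return False
```

A message whose j th entry is (k, 7) while iet[j] contains (k, 5) depends on a rolled-back state and is discarded.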
When P i is ready for the next event, it acts on the
message with the lowest virtual time. As explained in
Section 3, P i updates its SV and the internal state, and
possibly schedules events for itself and for the other
LPs by sending messages.
Upon receiving a message m, P i first checks whether m is
an orphan. This is the case when, for some j, P i 's iet
and the j th entry of m's SV indicate that m is dependent
on a rolled back state of P j . If P i detects that m
is a straggler with virtual time t, it broadcasts a token
containing t and its current incarnation number k. It
rolls back all states with virtual time greater than t
and increments its incarnation number, as shown in
Rollback. Thus, the token basically indicates that all
states of incarnation k with virtual time greater than t
are orphans. States dependent on any of these orphan
states are also orphans.
When an LP receives a token containing virtual
time t from P j , it rolls back all states with the j th
timestamp greater than t, discards all orphan messages
in its input queue, and increments its incarnation
number. It does not broadcast a token, which
is an important property of our protocol. This works
because transitive dependencies are maintained. Suppose
state w of P i is dependent on a rolled back state
u of P j . Then any state x dependent on w must also
be dependent on u. So x can be detected as an orphan
state when the token from P j arrives at P x:p , without
the need of an additional token from P i . The argument
for the detection of orphan messages is similar.
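The receiver-side test can be sketched as follows (a hedged sketch in our own representation; the incarnation check reflects that the token announces the rollback of incarnation v of P j only):

```python
def orphaned_by_token(state_sv, j, v, t):
    """A state is orphaned by token (v, t) from P_j iff its j-th SV entry
    records a dependency on incarnation v of P_j later than time t."""
    inc, ts = state_sv[j]
    return inc == v and ts > t
```

Entries from other incarnations of P j are untouched; if they refer to rolled-back states, a separate token (already broadcast for that incarnation) identifies them.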
We require an LP to block its execution after broadcasting
a token until it receives acknowledgments from
all the other LPs. This ensures that a token for a
lower incarnation of P j reaches all LPs before they
can become dependent on any higher incarnation of
P j . This greatly simplifies the design because, when
a dependency entry is overwritten by an entry from
a higher incarnation in the lexicographical maximum
operation, it is guaranteed that no future rollback can
occur due to the overwritten entry (as the corresponding
token must have arrived). While blocked, an LP
acknowledges the received broadcasts.
4.1 Proof of Correctness
Suppose state u of P j is rolled back due to the arrival
of a straggler. The simulation is correct if all the
states that are dependent on u are also rolled back.
The following theorem proves that our protocol correctly
implements the simulation.
Theorem 2 A state is rolled back due to either a
straggler or a token. A state is rolled back due to a
token if and only if it is dependent on a state that has
been rolled back due to a straggler.
Proof. The routine Rollback is called from two places:
Receive message and Receive token. States that are
rolled back in a call from Receive message are rolled
back due to a straggler. Suppose P j receives a straggler.
Let u be one of the states of P j that are rolled
back due to this straggler. In the call from routine Receive
token, any state w not satisfying condition (C1)
is rolled back. Since the virtual time of u is greater
than the virtual time of the straggler, by Lemma 2,
any state w dependent on u will not satisfy condition
(C1). In the future, no state can become dependent
on u because any message causing such dependency is
discarded: if it arrives after the token, it is discarded
by the first test in the routine Receive message; if it
arrives before the token, it is discarded by the first
test in the routine Receive token. So all orphan states
are rolled back.
From Lemma 3, for any state w not satisfying condition
(C1) and thus rolled back, there exists a state u
which is rolled back due to the straggler, and u ! w.
That means no state is unnecessarily rolled back.
5 Reducing the Overhead
For systems with a large number of LP's, the overhead
of SV and the delay due to the blocking can be
substantial. In this section, we describe several optimization
techniques for reducing the overhead and
blocking.
5.1 Reducing the blocking
For simplicity, the protocol description in Figure 4
increments the incarnation number upon a rollback
due to a token (although it does not broadcast another
token). We next argue that the protocol works
even if the incarnation number is not incremented.
This modification then allows an optimization to reduce
the blocking. We use the example in Figure 2(b)
to illustrate this modification. Suppose P 2 executes
an event and makes a state transition from r20 to s22
with virtual time 7 (not shown in the figure). If P2
does not increment its incarnation number on rolling
back due to the token from P 1, then s22 will have
(0; 7) as the 3rd entry of its SV, which is the same as
s21's 3rd entry in Figure 2(a). Now suppose the 3rd
entry of a state w of another LP P3 is (0; 7). How
does P3 decide whether w is dependent on s21 which
is rolled back or s22 which is not rolled back? The
answer is that, if w is dependent on s21, then it is
also dependent on s11. Therefore, its orphan status
will be identified by its 2nd entry, without relying on
the 3rd entry.
The above modification ensures that, for every new
incarnation, a token is broadcast and so every LP will
have an iet entry for it. This allows the following optimization
technique for reducing the blocking. Suppose
P i receives a straggler and broadcasts a token. Instead
of requiring P i to block until it receives all acknowledgements,
we allow P i to continue its execution in
the new incarnation. One problem that needs to be
solved is that dependencies on the new incarnation of
P i may reach an LP P j (through a chain of messages)
before the corresponding token does. If P j has a dependency
entry on any rolled back state of the old
incarnation then it should be identified as an orphan
when the token arrives. Overwriting the old entry
with the new entry via the lexicographical maximum
operation results in undetected orphans and hence incorrect
simulation. The solution is to force P j to block
for the token before acquiring any dependency on the
new incarnation. We conjecture that this blocking at
the token receiver's side would be an improvement over
the original blocking at the token sender's side if the
number of LPs (and hence acknowledgements) is large.
5.2 Reducing the size of simulation vector
The Global Virtual Time (GVT) is the virtual time
at a given point in simulation such that no state with
virtual time less than GVT will ever be rolled back. It
is the minimum of the virtual times of all LPs and all
the messages in transit at the given instant. Several
algorithms have been developed for computing GVT
[2, 20]. To reduce the size of simulation vectors, any
entry that has a timestamp less than the GVT can be
set to NULL, and NULL entries need not be transmitted
with the message. This does not affect the correctness
of simulation because: (1) the virtual time of any
message must be greater than or equal to the GVT,
and so timestamps less than the GVT are never useful
for detecting stragglers; (2) the virtual time contained
in any token must be greater than or equal to
the GVT, and so timestamps less than the GVT are
never useful for detecting orphans. Since most of the
SV entries are initialized to -1 (see Figure 3) which
must be less than the GVT, this optimization allows a
simulation to start with very small vectors, and is particularly
effective if there is high locality in message
activities.
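A sketch of this pruning, assuming NULL is modeled as None in our list-of-tuples representation (names are ours):

```python
def prune_sv(sv, gvt):
    """Entries with timestamps below the GVT can never be needed to
    detect a straggler or an orphan, so they are replaced by NULL
    (None) and need not be transmitted with messages."""
    return [None if e is not None and e[1] < gvt else e for e in sv]
```

For example, with a GVT of 4, prune_sv([(0, -1), (1, 7), (0, 3)], 4) keeps only the entry (1, 7).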
Following [21], we can also use a K-optimistic protocol.
In this scheme, an LP is allowed to act on a
message only if that will not result in more than K
non-NULL entries in its SV. Otherwise it blocks. This
ensures that an LP can be rolled back by at most K
other LPs. In this sense, optimistic protocols are
N-optimistic and pessimistic protocols are 0-optimistic.
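The admission test of such a K-optimistic scheme might look as follows (a sketch under the same None-for-NULL representation; the function name is ours):

```python
def may_act(sv, m_sv, k):
    """Acting on m merges m's SV into ours; allow it only if the merged
    vector would have at most k non-NULL entries, so that at most k
    other LPs can ever roll this LP back. Otherwise the LP blocks."""
    non_null = {j for j, e in enumerate(sv) if e is not None}
    non_null |= {j for j, e in enumerate(m_sv) if e is not None}
    return len(non_null) <= k
```

Blocking on the admission test trades some optimism for a hard bound on rollback exposure.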
Another approach to reducing the size of simulation
vectors is to divide the LPs into clusters. Several
designs are possible. If the interaction inside a cluster
is optimistic while inter-cluster messages are sent
conservatively [18], independent SV's can be used inside
each cluster, involving only the LPs in the cluster.
If intra-cluster execution is sequential while inter-cluster
execution is optimistic [1], SV's can be used for
inter-cluster messages with one entry per cluster. Similarly
one can devise a scheme where inter-cluster and
intra-cluster executions are both optimistic but employ
different simulation vectors. This can be further
generalized to a hierarchy of clusters and simulation
vectors. In general, however, inter-cluster simulation
vectors introduce false dependencies [14] which may
result in unnecessary rollbacks. So there is a trade-off
between the size of simulation vectors and unnecessary
rollbacks. But it does not affect the correctness
of the simulation.
6 Distributed Simulation and Distributed
Recovery
The problem of failure recovery in distributed systems
[6] is very similar to the problem of distributed
simulation. Upon a failure, a process typically restores
its last checkpoint and starts execution from there.
However, process states that were lost upon the failure
may create orphans and cause the system state
to become inconsistent. A consistent system state is
one where the send of a message must be recorded if
its receive is recorded [6]. In pessimistic logging [6],
every message is logged before the receiver acts on it.
When a process fails, it restores its last checkpoint
and replays the logged messages in the original order.
This ensures that the pre-failure state is recreated
and no other process needs to be rolled back. But the
synchronization between message logging and message
processing reduces the speed of computation. In optimistic
logging [19], messages are stored in a volatile
memory buffer and logged asynchronously to the stable
storage. Since the content of volatile memory is
lost upon a failure, some of the messages are no longer
available for replay after the failure. Thus, some of the
process states are lost in the failure. States in other
processes that are dependent on these lost states then
become orphan states. Any optimistic logging protocol
must roll back all orphan states in order to bring
the system back to a consistent state.
There are many parallels between the issues in distributed
recovery and distributed simulation. A survey
of different approaches to distributed recovery can
be found in [6]. In Table 1, we list the equivalent terms
from these two domains. References are omitted for
those terms that are widely used. The equivalence is
exact in many cases, but only approximate in other
cases.
Stragglers trigger rollbacks in distributed simulation,
while failures trigger rollbacks in distributed recovery.
Conservative simulation [7] ensures that the
current state will never need to roll back. Similarly,
pessimistic logging [6] ensures that the current state
is always recoverable after a failure. In other words,
although a rollback does occur, the rolled back states
can always be reconstructed.
The time warp optimistic approach [10] inspired the
seminal work on optimistic message logging [19]. The
optimistic protocol presented in this paper is based on
the optimistic recovery protocol presented in [4, 21].
In the simulation scheme by Dickens and Reynolds [5],
any results of an optimistically processed event are not
sent to other processes until they become definite [3].
In the recovery scheme by Jalote [11], any messages
originating from an unstable state interval are not sent
to other processes until the interval becomes stable [6].
Both schemes confine the loss of computation, either
due to a straggler or a failure, to the local process.
Distributed Simulation Distributed Recovery
Logical Process Recovery Unit [19]
Virtual Time State Interval Index
Sim. Vector (this paper) Trans. Dep. Vector [19]
Straggler Failure
Anti-Message Rollback Announcement
Fossil Collection [10] Garbage Collection [6]
Global Virtual Time [2] Global Recovery Line [6]
Conservative Schemes Pessimistic Schemes
Optimistic Schemes Optimistic Schemes
Causality Error Orphan Detection
Cascading Rollback [15] Domino Effect [6]
Echoing [15] Livelock [6]
Conditional Event [3] Unstable State [6]
Event [3] Stable State [6]
Table 1: Parallel terms from Distributed Simulation and Recovery
Conservative and optimistic simulations are combined
in [1, 18] by dividing LPs into clusters and
having different schemes for inter-cluster and intra-cluster
executions. In distributed recovery, the paper
by Lowry et al. [14] describes an idea similar to the
conservative time windows in the simulation literature.
Now we list some of the main differences between
the two areas. While the arrival of a straggler can
be prevented, the occurrence of a failure cannot. But
pessimistic logging can cancel the effect of a failure
through message logging and replaying. The arrival
of a straggler in optimistic simulation does not cause
any loss of information, while the occurrence of a failure
in optimistic logging causes volatile message logs
to be lost. So some recovery protocols have to deal
with "lost in-transit message" problem [6] which is not
present in distributed simulation protocols. Incoming
messages from different channels can be processed in
an arbitrary order, while event messages in distributed
simulation must be executed in the order of increasing
timestamps. Due to these differences, some of the
protocols presented in one area may not be applicable
to the other area.
Distributed recovery can potentially benefit from
the advances in distributed simulation in the areas of
memory management [13], analytical modeling to determine
checkpoint frequency [8], checkpointing mechanisms
[22], and time constrained systems [9]. Similarly,
research work on coordinated checkpointing, optimal
checkpoint garbage collection, and dependency
tracking [6] can potentially be applied to distributed
simulation.
References
Clustered Time Warp and Logic Simulation.
Global Virtual Time Algorithms.
The Conditional Event Approach to Distributed Simulation.
How to Recover Efficiently and Asynchronously when Optimism Fails.
A Survey of Rollback-Recovery Protocols in Message-Passing Systems
Parallel Discrete Event Simulation.
Comparative Analysis of Periodic State Saving Techniques in Time Warp Simulators.
Time Warp Simulation in Time Constrained Systems.
Virtual Time.
Fault Tolerant Processes.
Memory Management Algorithms for Optimistic Parallel Simulation.
Optimistic Failure Recovery for Very Large Networks.
Virtual Time and Global States of Distributed Systems.
An Efficient Optimistic Distributed Simulation Scheme Based on Conditional Knowledge.
The Local Time Warp Approach to Parallel Simulation.
Optimistic Recovery in Distributed Systems.
An Algorithm for Minimally Latent Global Virtual Time.
Distributed Recovery with K-Optimistic Logging
Automatic Incremental State Saving.
Om. P. Damani, Vijay K. Garg, Fault-tolerant distributed simulation, ACM SIGSIM Simulation Digest, v.28 n.1, p.38-45, July 1998 | message tagging; optimistic distributed simulation; transitive dependency information; transitive dependency tracking; time warp simulation; process rollback; rollback broadcasting; straggler; memory management; dependency information; distributed recovery; anti-messages; optimistic distributed simulation protocols; logical process
268910 | Breadth-first rollback in spatially explicit simulations. | The efficiency of parallel discrete event simulations that use the optimistic protocol is strongly dependent on the overhead incurred by rollbacks. The paper introduces a novel approach to rollback processing which limits the number of events rolled back as a result of a straggler or antimessage. The method, called breadth-first rollback (BFR), is suitable for spatially explicit problems where the space is discretized and distributed among processes and simulation objects move freely in the space. BFR uses incremental state saving, allowing the recovery of causal relationships between events during rollback. These relationships are then used to determine which events need to be rolled back. The results demonstrate an almost linear speedup-a dramatic improvement over the traditional approach to rollback processing. | Introduction
One of the major challenges of Parallel Discrete
Event Simulation (PDES) is to achieve good performance.
This goal is difficult to attain, because, by its
very nature, discrete event simulation organizes events
in a priority queue based on the timestamp of events,
and processes them in that order. When porting a
simulation to a parallel platform, this priority queue
is distributed among logical processes (LPs) that correspond
to the physical processes that are being modeled.
Because the LPs interact with each other by
sending event messages, it is costly to maintain the
causality between events. Two basic protocols have
been developed to ensure that causality constraints are
satisfied [9]: conservative [5] and optimistic. In Time
Warp (TW) [11], the best known optimistic protocol,
causality errors are allowed to occur, but when such an
error is detected, the erroneous computation is rolled
back. The research described in this paper utilizes the
optimistic protocol and focuses on optimizing rollback
processing.
The method of rollback processing we will present is
applicable to simulations that consist of a space with
objects moving freely in it; the space is discretized into
a multi-dimensional lattice and divided among LPs.
We make use of incremental state saving techniques
[18] to detect dependencies between events. The typical
implementation of a rollback in such a setting (used in
our previous implementation [7]) is to roll back the entire
area assigned to the LP. In this paper, we present a
novel approach, termed Breadth-First Rollback (BFR),
in which the rollback is contained to the area that has
been directly affected by the straggler (event message
with a timestamp smaller than the current simulation
time) or antimessage (cancellation of an event). We
also present the improved simulation speedup and performance
resulting from the use of this approach.
The application that motivated this work is a Lyme
disease simulation in which the two-dimensional space
is discretized into a two-dimensional lattice. The most
important characteristics of the simulation are: the
mobile objects moving freely in space (mice) and the
stationary objects present at the lattice nodes (ticks).
The two main groups of events are: (i) local to a
node (such as tick bites, mouse deaths, etc.) and (ii)
non-local (such as a mouse moving from one node to
another, the Move Event).
The simulation currently runs on an IBM SP2 (we
show results for up to 16 processors). The model
was designed in an object oriented fashion and implemented
in C++. The communications between processes
use the MPI [10] message passing library.
2 Related Work
There are two inter-related issues that have arisen
in optimizing optimistic protocols for PDES. One is
the need to reduce the overhead of rollbacks, and the
other is to limit the administrative overhead of partitioning
a problem into many "small" LPs (as happens,
for example, in digital logic simulations). To address
both of these issues, clustering of LPs is often used.
Lazy re-evaluation [9] has been used to determine
if a straggler or antimessage had any effect on
the state of the simulation. If, after processing the
straggler or canceling an event, the state of the simulation
remains the same as before, then there is no
need to re-execute any events from the time of the
rollback to the current time. The problem with this
approach is that it is hard to compare the state vectors
in order to determine if the state has changed. It is
also not applicable to the protocols using incremental
state saving.
The Local Time Warp (LTW) [15] approach combines
two simulation protocols by using the optimistic
protocol between LPs belonging to the same cluster
and by maintaining a conservative protocol between
clusters. LTW minimizes the impact of any rollback
to the LPs in a given cluster.
Clustered Time Warp (CTW) [1, 2] takes the opposite
view. It uses conservative synchronization within
the clusters and an optimistic protocol between them.
The reason given for such a choice is that, since LPs
in a cluster share the same memory space, their tight
synchronization can be performed efficiently. Two algorithms
for rollback are presented: clustered and local.
In the first case, when a rollback reaches a cluster,
all the LPs in that cluster are rolled back. This way
the memory usage is efficient, because events that are
present in input queues and that were scheduled after
the time of the rollback, can be removed. In the local
algorithm, only the affected LPs are rolled back. Restricting
the rollback speeds up the computation, but
increases the size of memory needed, because entire
input queues have to be kept.
The Multi-Cluster Simulator [16], in which digital
circuits are modeled, takes a somewhat different view
of clustering. First, the cluster is not composed of a
set of LPs; rather, it consists of one LP composed of
a set of logical gates. These LPs (clusters) are then
assigned to a simulation process.
In the case of spatially explicit problems, the issue
of partitioning the space between LPs is also of
importance. Discretizing the space results in a multi-dimensional
lattice for which the following question
arises: Should one LP be assigned to each lattice node
(which results in high simulation overhead) or should
the lattice nodes be "clustered" and the resulting clusters
be assigned to LPs? Our first implementation of
the Lyme disease simulation used the latter approach and
assigned spatially close nodes to a single LP, with TW
used between the LPs. This was similar to CTW, except
that, to simulate space more efficiently, our implementation
did not have multiple LPs within a cluster.
Unfortunately, this approach did not perform as well
as we had hoped, especially when the problem size
grew larger, because when a rollback occurred in a
cluster, the entire cluster had to roll back.
To improve performance, the nodes of the lattice
belonging to an LP (cluster) are allowed to progress independently
in simulation time; however, all the nodes
in a cluster are under the supervision of one LP. When
a rollback occurs in an LP/cluster, only the affected
lattice nodes are rolled back, thanks to a breadth-first
rollback strategy, explained in Section 3. This approach
can be classified as an inter-cluster and intra-cluster
time warp (TW).
The main innovation in BFR is that all future information
is global, and information about the past is
distributed among the nodes of the spatial lattice. The
future information is centralized to facilitate scheduling
of events, and the past information is distributed
to limit the effects of a rollback. One could say that,
from the point of view of the future, we treat a partition
as a single LP, whereas, from the point of view
of the past, we treat the partition as a set of LPs (one
LP per lattice node). The performance of the new
method yields a speedup which is close to linear.
3 Breadth-First Rollback Approach
Breadth-First Rollback is designed for spatially explicit,
optimistic PDES. The space is discretized and
divided among LPs, so each LP is responsible for a set
of interconnected lattice nodes. The speed of the simulation
is dictated by the efficiency of two steps: forward event
processing and rollback processing. The forward
computation is facilitated when the event queue
is global to the executing LP, so that the choice of the
next event is quick. The impact of a rollback is reduced
when the depth of the rollback is kept to a minimum:
the rollback should not reach further into the
past than necessary, and the number of events affected
at a given time has to be minimized. For the latter,
we can rely on a property of spatially explicit problems:
if two events are located sufficiently far apart
in space, one cannot affect the other (for certain values
of the current logical virtual time (lvt) of the LP
and the time of the rollback), so at most one of these
events needs to be rolled back when a causality error
occurs.
Events can be classified as local or non-local. A
local event affects only the state of one lattice node.
A non-local event, for example the Move Event, which
moves an object from one location to the next, affects
at least two nodes of the lattice. Local events are easy
to roll back. Assume that a local event e at location
[Figure 1: Waves of Rollback. The figure shows the original impact point of a rollback at location x and the potential 1st, 2nd and 3rd waves of the rollback.]
x and time t triggers an event e 1 at time t 1 and the
same location x (by definition of a local event). If a
rollback then occurs which impacts event e, only the
state of location x has to be restored to the time just
prior to time t. While restoring the state, e 1 will be
automatically "undone". If, however, the triggering
event e is non-local and triggers an event e 1 at location
x 1 , then restoring the state of location x is not
sufficient; it is also necessary to restore the state of
location x 1 just prior to the occurrence of event e 1 .
Regardless of whether an event is local or non-local,
the state information can be restored on a node-by-
node basis.
To show the impact of a rollback on an LP, consider
a straggler or an antimessage arriving at a location x,
marked in the darkest shade in Figure 1. The rollback
will proceed as follows. The events at x will be
rolled back to time t r , the time of the straggler or
antimessage. Since incremental state saving is used,
events have to be undone in decreasing time order to
enable the recovery of state information. The rollback
involves undoing events that happened at x. Each
event e processed at that node will be examined to
determine if e caused another event (let's call it e 1 ) to
occur at a different location x 1 (non-local event).
In such a case, location x 1 has to be rolled back to
the time prior to the occurrence of e 1 . Only then is
e undone (this breadth-first wave gave the name to the
new approach).
In our simulation, objects can move only from one
lattice node to a neighboring one, so that a rollback
can spread from one site only to its neighbors. The
time of the rollback at the new site must be strictly
greater than the one at site x, because there is a non-zero
delay between causally-dependent events. In general,
the breadth of the rollback is bounded by the
speed with which simulated objects move around in
space.
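The wave-like spread described above can be sketched as a breadth-first traversal. In this hedged sketch (our structures, not the paper's implementation), each node's history is a time-ordered list of records (time, target, target_time), where target names the neighbouring node affected by a non-local event, or is None for a local event; incremental state restoration is abstracted into popping records in decreasing time order:

```python
from collections import deque

def bfr(history, x0, t0):
    """Roll node x0 back to time t0; spread only where causality demands."""
    work = deque([(x0, t0)])
    while work:
        x, t = work.popleft()
        events = history[x]                     # oldest first, newest last
        while events and events[-1][0] >= t:
            _, target, target_t = events.pop()  # undo newest event at x
            if target is not None and target != x:
                work.append((target, target_t)) # wave reaches a neighbour
    return history
```

For example, rolling node 'a' back to time 3 in the history {'a': [(1, None, None), (3, 'b', 4), (5, None, None)], 'b': [(2, None, None), (4, None, None)]} undoes a's events at times 5 and 3 and, because the latter is non-local, b's event at time 4.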
Figure 1 shows potential waves of a rollback, from
the initial impact point through three more layers of
processing. In practice, the size of the affected area
is usually smaller than the shaded area in Figure 1,
because events at one site will most likely not affect
all their neighboring nodes. Obviously, if an event
at location x triggered events on a neighboring LP,
antimessages have to be sent.
It is interesting to note that each location belonging
to a given LP can be at a different logical time. In
fact, we do not necessarily process events in a given
LP in an increasing-timestamp order. If two events are
independent, an event with a higher timestamp can be
processed ahead of an event with a lower timestamp.
A similar type of processing was mentioned briefly in
[17] as CO-OP (Conservative-Optimistic) processing.
The justification is that the requirement of processing
events in timestamp order is not necessary for provably
correct simulations. It is only required that the
events for each simulation object be processed in a
correct time order.
Due to this type of processing, when we process an
event (in the forward execution), we have to check the
logical time of the node where the event is scheduled.
If the logical time is greater than the time of the event,
the node has to roll back.
4 Comparison With The Traditional Approach
To demonstrate improvements in performance, we
present below the model used in our initial simula-
tion, which did not use the BFR method. The space,
as previously mentioned, is discretized into a two-dimensional
lattice. Similar discretization is used,
for example, in personal communication services [4],
where the space is discretized by representing the net-work
as hexagonal or square cells. In these simula-
tions, each cell is modeled by an LP. In our research,
we have developed a simulation system for spatially
explicit problems. The particular application we describe
in this paper is the simulation of the spread of Lyme disease.
In Lyme disease simulation, it would be prohibitively
expensive to assign one LP to each lattice
node, so we "cluster" lattice nodes into a single LP.
Currently, the space is divided strip-wise among the
available processors. Of course, other spatial decompositions
can be used. To achieve better performance,
the space can also be divided into more LPs than there
are available processors [8].
The LPs in this simulation are called Space Man-
agers, because they are responsible for all the events
that happen in a given region of space.
Figure 2: Speedup For Small Data Set (about 2,400 nodes).
If the Space Manager determines that an object moves out of local
space to another partition, the object and all its future
events are sent to the appropriate Space Manager. As
previously mentioned, the optimistic approach is used
to allow concurrent processing of events happening at
the same time at different locations.
Because the state information is large, we use incremental
state saving of information necessary for
rollback. When an event is processed, the state information
that it changes is placed into its local data
structure. The event is then placed on a processed
event list. Events that move an object from one LP to
another are also placed in a message list (only pointers
to the events are actually placed on the lists; the
resulting duplication is not costly and speeds up sending
of antimessages). If an object moves to another
LP, the sending LP saves the object and the corresponding
events in a ghost list to be able to restore
this information upon rollback.
When a rollback occurs, messages on the message
list are removed and corresponding antimessages are
sent out (we use aggressive cancellation). Then, the
events from the processed event list are removed and
undone. Undoing an event which involved sending an
object to another process entails restoring the objects
from the ghost list and restoring future events of the
object to the event queue. For other events, the parts
of the state that have been changed by the events have
to be restored. During fossil collection, the obsolete
information is removed and discarded from the three
lists: the processed event list, the message list, and the
ghost list.
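A minimal sketch of this bookkeeping (class and field names are illustrative assumptions; the real system also keeps a ghost list for objects migrated to other LPs):

```python
# Sketch of incremental state saving with aggressive cancellation.
# Only the state entries an event overwrites are saved alongside it,
# so a rollback restores exactly what was changed, newest first.

class LP:
    def __init__(self, state):
        self.state = dict(state)
        self.processed = []   # (event, saved_old_values), in processing order
        self.messages = []    # events that sent an object to another LP

    def process(self, event):
        old = {k: self.state.get(k) for k in event["writes"]}  # incremental save
        self.state.update(event["writes"])
        self.processed.append((event, old))
        if event.get("remote"):
            self.messages.append(event)

    def rollback(self, t):
        # drop remote messages with timestamp >= t and return them as
        # antimessages to be sent (aggressive cancellation)
        antimessages = [m for m in self.messages if m["time"] >= t]
        self.messages = [m for m in self.messages if m["time"] < t]
        # undo events in decreasing time order, restoring the saved entries
        while self.processed and self.processed[-1][0]["time"] >= t:
            _, old = self.processed.pop()
            self.state.update(old)
        return antimessages
```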
Initial results obtained for a small-size simulation
were encouraging (Figure 2); however, the speedup
was not impressive for larger simulations (Figure 3).
Figure 3: Speedup For Large Data Set (about 32,000 nodes).
Figure 4: Running Time for Large Data Set and Multiple LPs per Processor.
Figure 5: Speedup with Large Data Set and 16 LPs.
The performance degradation is caused by the large
space allocation to individual processes resulting from
the increased problem size. When a rollback occurs,
the entire space allocated to an LP is rolled back. To
minimize the impact of the rollback, we divided the
space into more LPs, while keeping the same number
of processors. Figure 4 shows the runtime improvement
achieved with this approach. For the given
problem size, the ultimate number of LPs was 16 (Figure 5),
and the best efficiency was achieved with 8 processors.
5 Challenges Of The New Approach
In order to implement BFR, some changes had to
be made not only to the simulation engine, but also
to the model. A major change was made to the Move
Event. The question arose: if an object is moving
from location (x, y) to location (x1, y1), where should
the object be placed as "processed"? If it is placed
in location (x, y), and location (x1, y1) is rolled back,
there would be no way of finding out that the event
affected location (x1, y1). If it is placed at location
(x1, y1), and location (x, y) is rolled back, a similar
difficulty arises. Placing the Move Event in both processed
lists is also not a good solution, because, in
one case, the object is moving out of the location,
and, in the other case, it is moving into the location.
This dilemma motivated us to split the Move Event
into two: the MoveOut and MoveIn Events. Hence,
when an object moves from location (x, y) to location
(x1, y1), the MoveOut is placed in the processed event
list at (x, y) and the MoveIn at location (x1, y1). The
only exception is when location (x1, y1) belongs to another
LP. In that case, the MoveIn is placed in the
processed event list at location (x, y) (it will be placed
on top of the corresponding MoveOut event), to indicate
that a message was sent out.
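The placement rule can be summarised in a small helper (a sketch; the function and map names are assumptions for illustration):

```python
# Sketch of the MoveOut/MoveIn placement rule for a move src -> dst.
# lp_of maps a lattice node to the LP that owns it; the result says which
# records go on each node's processed event list.

def place_move(src, dst, lp_of):
    placement = {src: ["MoveOut"]}
    if lp_of[src] == lp_of[dst]:
        placement[dst] = ["MoveIn"]     # both nodes owned by the same LP
    else:
        # destination belongs to another LP: keep the MoveIn at the source,
        # on top of the MoveOut, to record that a message was sent out
        placement[src].append("MoveIn")
    return placement
```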
Upon rollback, if a MoveIn to another LP is en-
countered, an antimessage is sent. The result of such a
treatment of antimessages, coupled with the breadth-first
processing of rollbacks, gives us an effect of lazy
cancellation [12]. An antimessage is sent together with
a location (x; y) to which the original message was ad-
dressed, to avoid searching the lattice nodes for this
information.
Since the MoveIn Event indicates when a message
has been sent, no message list is necessary. Another
affected structure is the ghost list. In the original ap-
proach, objects and their events were placed on the
list in the order that objects left the partition. Now
the time order is not preserved, objects are placed on
the list in any timestamp order, because the nodes of
the lattice can be at different times. The non-ordered
aspect of the ghost list poses problems during fossil
collection. The list cannot merely be truncated to remove
obsolete objects. The solution, again, is to distribute
that list among the nodes. This is useful for
load balancing, as described in the final section. How-
ever, the ghost list is relatively small (compared to the
processed event list), so it might not be necessary to
distribute the list if no load balancing is performed. It
is sufficient to maintain an order in the list based on
the virtual time at which the object is removed from
the simulation.
Additionally, event triggering information must be
preserved. In the original implementation, when an
event was created, the identity of the event that caused
it was saved in one of the tags (the trigger) of the new
event. When an event was undone, the dependent future
events were removed by their trigger tags from
the event queue. In BFR, it is possible that the future
event is already processed, and its assigned location
has not been rolled back yet. It is prohibitively expensive
to traverse the future event list and then each
processed event list in the neighborhood in search of
the events whose triggers match the given event tag.
The solution is to create dependency pointers from the
trigger event to the newly created events. This way,
a dependent event is easily accessed, and the location
where it resides can be rolled back. Pointer tracking
has been previously implemented for shared memory
[9] to decide whether an event should be canceled or
not. In our approach, we also need to know if a dependent
event has been processed or not, in order to
be able to quickly locate it either in the event queue
or in a processed event list.
One more change was required for the random number
generation. In the original simulation, a single
random number stream was used for an LP. These
numbers are used, for example, in calculating the time
of occurrence of new events. Now, since the sequence
of events executed on a single LP can differ from run
to run, the same random number sequence can yield
two different results! Obviously, result repeatability is
important, so we chose to distribute the random number
sequence among the nodes of the lattice. Initially,
a single random number sequence is used to seed the
sequences at each node. From there, each node generates
a new sequence.
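A sketch of this seeding scheme using Python's standard library (the helper name is an assumption): a master sequence seeds one stream per lattice node, so each node's draws are repeatable regardless of processing order.

```python
import random

def make_node_streams(master_seed, n_nodes):
    # a single master sequence is used only to seed the per-node generators
    master = random.Random(master_seed)
    return [random.Random(master.randrange(2**32)) for _ in range(n_nodes)]
```

Draws made at one node are then unaffected by how many numbers other nodes have consumed, which is what makes the results repeatable across runs with different event orderings.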
6 Examples
To demonstrate the behavior of the BFR algorithm,
let's consider the example in Figure 6. The figure
shows processed lists at three different lattice nodes:
(0,0), (0,1), and (0,2). The event MO is a MoveOut
event, MI a MoveIn event, and X can be any local event.
Figure 6: View of Processed Lists at Three Nodes of the Lattice (arrows denote causality relations).
If we have a rollback for location (0,1) at time T0,
the following will happen: first MI3 is undone and
placed on the event queue. The same is done to X2.
When MO2 is being considered, the dependence between
it and MI4 is detected, and a rollback for location
(0,2) and time T2 is performed. As a result, X3
is undone and MI4 is undone. Both are placed on the
event queue. Next MO2 is undone, which causes MI4
to be removed from the event queue. MO1 is examined,
and (0,0) is rolled back to time T1. MI2 and X1
are undone and placed on the event queue. MO1
is undone and MI1 is removed from the event queue.
If the rollback occurred at location (0,0) for time T1,
then the three most recent events at location (0,0)
will be undone and placed on the event queue, and
no other location will be affected during the rollback.
It is possible that the other locations will be affected
when the simulation progresses forward. If, for example,
an event MOz was scheduled for time T2 on (0,0)
and triggered an event MIz on (0,1) for time T3, then
location (0,1) would have to roll back to time T3.
Interesting aside: we can have location (x, y) at
simulation time t. The next event in the future list
is scheduled for time t1 and location (x1, y1), and has
not yet been processed. If an event comes in from another
process, we do not necessarily incur a rollback. If the
event is to occur at location (x, y), then no rollback
will happen. If, however, it is destined for location
(x1, y1), a localized rollback will occur. As a result,
comparing the timestamp of an incoming event
to the local virtual time is not enough to determine if
a rollback is necessary.
Figure 7: The Cycle of Lyme Disease.
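The aside reduces to a per-node check (a sketch; the names are illustrative): the decision compares the incoming timestamp against the destination node's own clock, not against the LP's nominal time.

```python
# Per-node rollback test: within one LP, every lattice node carries its own
# logical clock, and only the destination node's clock matters.

def needs_rollback(node_times, dest, timestamp):
    """node_times: logical clock of each lattice node within one LP."""
    return node_times[dest] > timestamp
```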
7 Application Description
Before we present the results obtained with BFR,
it is important to sketch our application: the simulation
of Lyme disease. This disease is prevalent in
the Northeastern United States [3, 13]. People can
acquire the disease by coming in contact with a tick
infected with the spirochete, which may transfer into
the human's blood, causing an infection. Since the
ticks are practically immobile, the spread of the disease
is driven by the ticks' mobile hosts, such as mice
and deer. Even though the most visible cases of Lyme
disease involve humans, the main infection cycle consists
of ticks and mice (Fig. 7). If an infected tick
bites a mouse, the animal becomes infected. The disease
can also be transmitted from an infected mouse
to an uninfected, feeding tick.
The seasonal cycle of the disease, and the duration
of the simulation, is 180 days, starting in the late
spring[6]. This time is the most active for the ticks and
mice. Mice, during that time, are looking for nesting
sites and may carry ticks a considerable distance [14].
The mice are modeled as individuals, and ticks,
because of their sheer number (as many as 1,200
larvae per 400 m^2 [14]), are treated as "background". The
space is discretized into nodes of size 20 m x 20 m, which
represent the size of the home range for a mouse. Each
node may contain any number of ticks in various stages
of development and various infection status. Mice can
move around in space in search of empty nesting sites.
The initiation of such a search is described by the Disperse
Event, and the moves by the Move Event. Mice
can die (Kill Event) if they cannot find a nesting site
or by other natural causes, such as old age, attacks by
predators, and disease. Mice can be bitten by ticks
(Tick Bite) or have ticks drop off (Tick Drop). From
the above list of events, only the Move Event is non-local.
Figure 8: Results: Comparison of Runs With BFR and the Traditional Approach.
Figure 8 shows the performance of BFR and illustrates
almost linear speedup. The running time of the
BFR is considerably shorter than that of the traditional
approach. Looking at the new algorithm, we
observe several benefits. The most important benefit
is that, when a rollback occurs, we do not need to roll
back all the events belonging to a given LP. Only the
necessary events are undone. In the traditional ap-
proach, the number of events that needed to be rolled
back was ultimately proportional to the number of
lattice nodes assigned to a given LP. When a rollback
occurred, all the events that happened in that space
had to be undone. On the other hand, when a rollback
occurs in the BFR version, the number of events being
affected by a rollback is proportional to the length of
the edges of the space that interface with other LPs.
In the case of the space divided into strips, the number
of events affected by a given rollback is proportional
to the length of the two communicating edges. There-
fore, when the size of the space assigned to a given LP
increases (when the number of LPs for a given problem
size decreases), the number of events affected by
a rollback in the case of BFR remains roughly con-
stant. In the traditional approach, that number increases
proportionally to the increased length of the
non-communicating edges. Consequently, we observe
that the number of events rolled back using BFR is an
order of magnitude smaller than that in the traditional approach.
Figure 9: Speedup for Balanced and Unbalanced Computations.
We also get fewer antimessages being sent as a result
of the automatic lazy cancellation. In general,
having one LP per processor eliminates on-processor
communication delays. There are, of course, some
drawbacks to the new method. Fossil collection is
much more expensive (because lists are distributed);
therefore, it is done only when the Global Virtual
Time has increased by a certain amount from the last
fossil collection. It is harder to maintain dependency
pointers than triggers, because, when an event is un-
done, its pointers have to be reset. The pointers have
to be maintained when events are created, deleted,
and undone, whereas triggers are set only once. There
must be code to deal with multiple dependents. There
is no aggressive cancellation, but, as can be seen from
the results, that does not seem to have an adverse
impact on performance.
9 Conclusions and Future Work
We have described a new algorithm for rollback
processing in spatially explicit problems. The algorithm
is based on the optimistic protocol and relies on
the space being partitioned into a multi-dimensional
lattice. Rollbacks are minimized by examining the
processed event list of each lattice node during roll-
back, in search of causal dependencies between events
which span the lattice nodes. The rollback impacts
the minimum number of sites, making the simulation
very efficient. As a result, an almost linear speedup
is achieved. Obviously this performance is attainable
thanks to a large amount of parallelism existing in the
application.
Up to now, we did not address the issue of load
distribution. If the simulation's load per LP is uneven
(for example, when the odd LPs have more load than
the even ones), the performance degrades, as shown
in Figure 9. Another advantage of BFR is that it
lends itself well to load balancing, since the local (at
the node level) history tracking facilitates load balanc-
ing. An overloaded LP can "shed" layers of space in
order to balance the load. Nothing special needs to
happen on the receiving side. If messages were sent
to the space that just arrived, they are simply discarded
by the sender of the space and reconstructed
from the ghost list by the receiver (we assume that
load can only be exchanged between neighboring pro-
cesses). On the sending side, however, the priority
queue has to be filtered in order to extract the future
events for the area sent to the new process. In order
to decide if there is a need to migrate the load, the
event queue can be scanned to determine the event
density. Since there is a large number of events in the
queue at any given time, this quantity might prove to
be a good measure of load. If the density is too high
at some process, some of the space can be sent to the
neighboring processes.
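As a sketch of the suggested event-density measure (the threshold and all names are assumptions for illustration):

```python
# An LP migrates a strip of space to a neighbor when its event-queue
# density is well above the average density of its neighboring LPs.

def should_migrate(queue_len, area, neighbor_densities, threshold=1.5):
    density = queue_len / area
    avg = sum(neighbor_densities) / len(neighbor_densities)
    return density > threshold * avg
```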
Acknowledgments
This work was supported by the National Science
Foundation under Grants BIR-9320264 and CCR-
9527151. The content of this paper does not necessarily
reflect the position or policy of the U.S. Government;
no official endorsement should be inferred or implied.
--R
The Dynamic Load Balancing of Clustered Time Warp for Logic Simulations
Time Warp and logic simulation.
The biological and social phenomenon of Lyme disease.
A Case Study in Simulating PCS Networks Using Time Warp.
Distributed Simulation: A Case Study in Design and Verification of Distributed Programs
Parallel Discrete Event Simulation of Lyme Disease.
Continuously Monitored Global Virtual Time in Parallel Discrete Event Simulation.
Simulating Lyme Disease Using Parallel Discrete Event Simulation.
Parallel Discrete Event Simulation
Anthony Skjellum
Virtual Time.
A Study of Time Warp Rollback Mechanisms.
The epidemiology of Lyme disease in the United States 1987-1998
Temporal and Spatial Dynamics of Ixodes scapularis (Acari: Ixodidae) in a rural landscape
The Local Time Warp Approach to Parallel Simulation.
Dynamic Load Balancing of a Multi-Cluster Simulation of a Network of Workstations
SPEEDES: A Unified Approach to Parallel Simulation.
Incremental State Saving in SPEEDES using C
--TR
--CTR
Jing Lei Zhang , Carl Tropper, The dependence list in time warp, Proceedings of the fifteenth workshop on Parallel and distributed simulation, p.35-45, May 15-18, 2001, Lake Arrowhead, California, United States
Malolan Chetlur , Philip A. Wilsey, Causality representation and cancellation mechanism in time warp simulations, Proceedings of the fifteenth workshop on Parallel and distributed simulation, p.165-172, May 15-18, 2001, Lake Arrowhead, California, United States
Ewa Deelman , Boleslaw K. Szymanski, Dynamic load balancing in parallel discrete event simulation for spatially explicit problems, ACM SIGSIM Simulation Digest, v.28 n.1, p.46-53, July 1998
M. Rao , Philip A. Wilsey, Accelerating Spatially Explicit Simulations of Spread of Lyme Disease, Proceedings of the 38th annual Symposium on Simulation, p.251-258, April 04-06, 2005
Boleslaw K. Szymanski , Gilbert Chen, Simulation using software agents I: linking spatially explicit parallel continuous and discrete models, Proceedings of the 32nd conference on Winter simulation, December 10-13, 2000, Orlando, Florida
Christopher D. Carothers , David Bauer , Shawn Pearce, ROSS: a high-performance, low memory, modular time warp system, Proceedings of the fourteenth workshop on Parallel and distributed simulation, p.53-60, May 28-31, 2000, Bologna, Italy
A. Maniatty , Mohammed J. Zaki, Systems support for scalable data mining, ACM SIGKDD Explorations Newsletter, v.2 n.2, p.56-65, Dec. 2000 | speedup;simulation objects;spatially explicit simulations;discrete event simulation;causal relationship recovery;rollback overhead;incremental state saving;optimistic protocol;straggler;antimessage;breadth-first rollback;parallel discrete event simulations;rollback processing |
268916 | Billiards and related systems on the bulk-synchronous parallel model. | With two examples we show the suitability of the bulk-synchronous parallel (BSP) model for discrete-event simulation of homogeneous large-scale systems. This model provides a unifying approach for general purpose parallel computing which in addition to efficient and scalable computation, ensures portability across different parallel architectures. A valuable feature of this approach is a simple cost model that enables precise performance prediction of BSP algorithms. We show both theoretically and empirically that systems with uniform event occurrence among their components, such as colliding hard-spheres and ising-spin models, can be efficiently simulated in practice on current parallel computers supporting the BSP model. | Introduction
Parallel discrete-event simulation of billiards and
related systems is considered a non-obvious algorithmic
problem, and has deserved attention in the literature
[1, 5, 7, 8, 9, 11, 13, 18, 24, 23, 25]. Currently an
important class of applications for these simulations
is in computational physics [6, 7, 10, 14, 15, 20, 21]
(e.g. hard-particle fluids, ising-spin models, disk-
packing problems). However this kind of systems
viewed through the more general setting of "many
moving objects" [3, 16], are present everywhere in
real life (e.g. big cities, transport problems, navigation
systems, computer games, and combat models!).
On the other hand, these systems have been considered
sufficiently general and computationally intensive
enough to be used as a sort of benchmark for Time
Warp simulation [5, 23, 25], whereas different simulation
techniques have been shown to be more efficient
when dealing with large systems [8, 9, 11, 13, 18].
Similar to most of the parallel software development
in the last few decades, the prevalent approach to the
simulation of these systems has followed a machine
dependent exploitation of the inherent parallelism associated
with the problem. Currently, however, one of
the greatest challenges in parallel computing is: "to
establish a solid foundation to guide the rapid process
of convergence observed in the field of parallel computer
systems and to enable architecture independent
software to be developed for the emerging range of
scalable parallel systems" [17]. The bulk synchronous
parallel (BSP) model has been proposed to provide
such a foundation [22] and, for a wide range of applications,
this model has already been shown to be
successful in this bridging role (i.e. a bridge between
hardware and software, in direct analogy with the role
played by the von Neumann model in sequential computing
over the last fifty years). At present, the BSP
model has been implemented on different parallel architectures
(shared memory multi-processors, distributed
memory systems, and networks of workstations),
enabling portable, efficient and scalable parallel
software to be developed for those machines [19, 4].
A first step in the BSP implementation of conservative
and optimistic parallel simulation algorithms
has so far been given in [12]. In this paper we follow
a different approach by using conservative algorithms
designed on purely BSP concepts, and evaluating their
performance under two examples: an ising-spin model
and a hard-particle fluid. Note that these potentially
large-scale systems have the property of a very random
and even distribution of events among their constituent
elements. We believe, however, that these two
examples exhibit sufficient generality and complexity
as to be representative of a wide range of other related
asynchronous systems (e.g. some instances of
the multiple-loop networks described in [8] and the
systems there mentioned). Note that because of the
synchronous nature of the BSP model, our algorithms
are reminiscent to those proposed in [8, 18].
2 The BSP model
For a detailed description of the BSP model the
reader is referred to [22, 17]. A bulk-synchronous
parallel (BSP) computer consists of: (i) a set of
processor-memory pairs, (ii) a communication net-work
that delivers messages in a point-to-point man-
ner, and (iii) a mechanism for the efficient barrier
synchronization of all, or a subset, of the processors.
There are no specialized broadcasting or combining
facilities.
If we define a time step to be the time required for
a single local operation, i.e. a basic operation such as
addition or multiplication on locally held data values,
then the performance of any BSP computer can be
characterized by the following four parameters: (i) p
the number of processors, (ii) s the processor speed,
i.e. the number of time steps per second, (iii) l the synchronization
periodicity, i.e. the minimal number of time steps
elapsed between two successive barrier synchronizations
of the processors, and (iv) g the ratio of the total number of
local operations performed by all processors in one second
to the total number of words delivered by the communication
network in one second, i.e. g is a normalized
measure of the time steps required to send/receive a
one-word message in a situation of continuous traffic
in the communication network. See Table 1, taken
from [17], which shows bounds for the values of g and
l for different communication networks.
Network     l          g
Ring        O(p)       O(p)
2D Array    O(sqrt(p)) O(sqrt(p))
Butterfly   O(log p)   O(log p)
Hypercube   O(log p)   O(1)
Table 1: BSP parameters for some parallel computers.
A BSP computer operates in the following way. A
computation consists of a sequence of parallel super-
steps, where each superstep is a sequence of steps, followed
by a barrier synchronization of processors at
which point any remote memory accesses take effect.
During a superstep each processor has to carry out a
set of programs or threads, and it can do the following:
(i) perform a number of computation steps, from its
set of threads, on values held locally at the start of the
superstep, and (ii) send and receive a number of messages
corresponding to non-local read and write requests.
The complexity of a superstep S in a BSP algorithm
is determined as follows. Let the work w be the
maximum number of local computation steps executed
by any processor during S. Let h_s be the maximum
number of messages sent by any processor during S,
and h_r be the maximum number of messages received
by any processor during S. Then the cost of S is given
by max(w, l) + (h_s + h_r) g time steps (or alternatively
max(w, l) + max(h_s, h_r) g). The cost of a
BSP algorithm is simply the sum of the costs of its
supersteps.
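As a sketch (the charging formula max(w, l) + (h_s + h_r) g is one standard BSP form and an assumption here; the function names are illustrative):

```python
# Sketch of BSP cost accounting: one superstep is charged for its local
# work / synchronization and for the messages it sends and receives.

def superstep_cost(w, h_s, h_r, g, l):
    """Time steps charged to one superstep."""
    return max(w, l) + (h_s + h_r) * g

def algorithm_cost(supersteps, g, l):
    """Total BSP cost: the sum of the costs of the individual supersteps."""
    return sum(superstep_cost(w, hs, hr, g, l) for (w, hs, hr) in supersteps)
```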
The architecture independence is achieved in the
BSP model by designing algorithms which are parameterized
not only by n, the size of the problem, and p,
the number of processors, but also by l and g. The
resulting algorithms can then be efficiently implemented
on a range of BSP architectures with widely differing l
and g values. For example, on a machine with large g
we must provide an algorithm with sufficient parallel
slackness (i.e. a v-processor algorithm implemented on
a p-processor machine with v > p) to ensure that for
every non-local memory access at least g operations
on local data are performed.
3 Basic BSP simulation algorithms
The kind of systems relevant to this paper (i.e.
statistically homogeneous steady state systems with
event occurrences randomly and evenly distributed
among their constituent elements) can be simulated
on a BSP computer using two-phase conservative algorithms
as follows.
On a p-processor BSP computer the whole system
is divided into p equal-sized regions that are owned
by a unique processor. Events involving elements located
on the boundaries are called border zone events
(BZ events), and are used to synchronize the parallel
operation of the processors. The most conservative
(but less efficient) version of this algorithm works doing
iterations composed of two phases: (i) the parallel
phase where each processor is simultaneously allowed
to simulate sequentially and asynchronously its own
region, and (ii) the synchronization phase where the
occurrence of one border zone event is simulated by
only one processor while the other ones remain in an
idle state. [We further improve the efficiency of this
algorithm by exploiting opportunities to simulate at
most p border zone events in parallel during the synchronization
phase.]
The synchronization phase is used to cause the barrier
synchronization of the processors in the simulated
time, and to exchange state information among
neighboring regions. [In the system examples studied
below, this state information refers to the states of
particular atoms and particles located in neighboring
regions.] During the parallel phase every processor
simulates events whose times are less than the current
global next BZ event (i.e. the BZ event with the
least time among all of the local next BZ events held
in each region or processor). Thus, global processor
synchronization is issued periodically at variable time
intervals which are driven by the chronological occurrence
of the BZ events. See pseudo-code in Figure 1.
We assume there are n elements evenly distributed
throughout the whole system, with regions made up
of (a = sqrt(n/p)) x (a = sqrt(n/p)) elements. The goal
is to simulate the occurrence of M events which on
average are assumed to occur randomly and evenly
distributed among the elements (i.e. M=n per ele-
ment). This goal is achieved by the BSP algorithm in
I iterations, wherein each iteration simulates a total
of NPE events in the parallel phase plus one event in
the synchronization phase, namely M = I (NPE + 1).
We define f = I/M to be the fraction of BZ events
that occur during the simulation. As we show below,
for the kind of systems we are interested in we have
f = O(1/a), leading to NPE = O(a),
which shows that by choosing regions sufficiently large
it is always possible to achieve some degree of parallelism
with this strategy. However, the actual gain
in running time due to the parallel phase, where each
processor simulates about O(a=p) events sequentially,
crucially depends on the cost of the communication
and synchronization among the processors during the
synchronization phase (this cost depends on the particular
parallel computer).
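An illustrative back-of-the-envelope for this analysis (the border constant c and all names are assumptions): with square regions of side a = sqrt(n/p), the border-zone fraction scales like 1/a, so the number of events per parallel phase grows with the region size.

```python
import math

def bz_fraction(n, p, c=4.0):
    """Fraction of border-zone events, assumed ~ perimeter/area = c/a."""
    a = math.sqrt(n / p)          # side of each square region, in elements
    return min(1.0, c / a)

def events_per_parallel_phase(n, p):
    f = bz_fraction(n, p)         # f = I/M
    return 1.0 / f - 1.0          # NPE = M/I - 1
```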
The parallel prefix operation in Figure 1 is realized
as follows. A virtual t-ary tree is constructed among
the p processors. [Let k be the processor that owns the region where
the next border zone event (NBZE) is about to take
place. This event is scheduled to occur at time T bz .
The parallel prefix operation calculates the minimum
among a set of p local NBZEs distributed in the p
processors (the minimum is stored in each processor).]
Parallel Simulation [processor i]
Initialisation;
while( not end condition )
begin-superstep
Simulate events with time less than T_bz;
Processor k reads the state of neighboring regions;
end-superstep
Processor k simulates the occurrence of the NBZE;
endwhile
Figure 1: Hyper-conservative simulation algorithm.
the p processors: from the leaves to the root the partial
are calculated, and then the absolute minimum
is distributed among the processors going from
the root to the leaves. The cost of this operation is
where the value of t depends on the parameters g and
l (e.g. for a small number of processors p it could be
more convenient to set
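To make the control flow concrete, the two-phase loop of Figure 1 can be sketched sequentially in Python. The Region class, the (time, is_bz, id) event encoding, and the plain min standing in for the parallel prefix are illustrative assumptions of this sketch, not the paper's implementation.

```python
import heapq

class Region:
    """A region's pending events as a heap of (time, is_bz, event_id)."""
    def __init__(self, events):
        self.pending = list(events)
        heapq.heapify(self.pending)

    def next_bz_time(self):
        """Time of this region's local next BZ event (inf if none left)."""
        bz = [t for (t, is_bz, _) in self.pending if is_bz]
        return min(bz) if bz else float("inf")

def hyper_conservative(regions):
    """Run the two-phase loop until no BZ events remain; return the log."""
    log = []
    while any(r.next_bz_time() < float("inf") for r in regions):
        # "Parallel prefix": global minimum of the local next-BZ times.
        t_bz = min(r.next_bz_time() for r in regions)
        # Parallel phase: each region simulates events with time < T_bz.
        for r in regions:
            while r.pending and r.pending[0][0] < t_bz:
                log.append(heapq.heappop(r.pending))
        # Synchronization phase: the owner of the global NBZE simulates it.
        owner = min(regions, key=Region.next_bz_time)
        nbze = min(e for e in owner.pending if e[1])
        owner.pending.remove(nbze)
        heapq.heapify(owner.pending)
        log.append(nbze)
    for r in regions:                      # drain remaining non-BZ events
        while r.pending:
            log.append(heapq.heappop(r.pending))
    return log

# Demo: two regions, one BZ event each.
demo = hyper_conservative([
    Region([(1.0, False, "a1"), (3.0, True, "aB"), (4.0, False, "a2")]),
    Region([(2.0, False, "b1"), (5.0, True, "bB")]),
])
```

In a real BSP run each region lives on its own processor and the min is realized by the O(log_t p (t g + l)) parallel prefix; here everything runs in one address space.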
The efficiency of the algorithm in Figure 1 is improved
by attempting to simulate in parallel at most p
border zone events per iteration. We explain this
procedure with an example. Let us assume a situation
with next BZ events e_a and e_b scheduled in the
regions R_a and R_b respectively. In our
notation, i in {a, b} is the identifier of the element in
region R_i which has scheduled the next BZ event e_i
to occur at time t_i. In addition, we define t_i' to be
the time at which an element i' (i' not in R_i)
has scheduled a BZ event. We assume that
the elements i and i' are related due to the topology
of the system being simulated (e.g. neighboring atoms
in the ising-spin model described below). Note that t_i'
is not necessarily the time of the next BZ event in region
R_i'. However, the simulation of both e_i and e_i'
is restricted by the order relation between their respective
scheduled times t_i and t_i'. If t_i < t_i' we
must simulate e_i before e_i'; otherwise we first simulate
e_i'. Thus we simulate in parallel the two next
BZ events e_a and e_b only if t_a <= t_a' and t_b <= t_b'.
Otherwise, we must process sequentially
more BZ events in the region with lesser t_i until the
above condition is reached. For each new BZ event
processed in a region R_i the non-BZ events in the time
interval between two consecutive BZ events have to
Parallel Simulation [region R_a]
Initialisation;
while( not end condition )
begin-superstep
Read the time t_a' of element a' from region R_b;
end-superstep
begin-superstep
Simulate events e_k with time t_k
such that t_k <= t_a and t_k <= t_a';
if (t_a <= t_a') then
Simulate next BZ event e_a;
endif
end-superstep
endwhile

Figure 2: Conservative BSP simulation algorithm.
be simulated as well. This is described in the pseudo-code
for region R_a shown in Figure 2. The operation in the
first superstep reads the value t_a' of the element
a' stored in region R_b.
4 Ising-spin systems
The ising-spin system is modeled as a √n × √n
toroidal network. Every node i of the network is an
atom with magnetic spin value -1 or +1. Each atom
i attempts to change its spin value at discrete times
given by t_i(k+1) = t_i(k) + X, where t_i(k) is the time at
which the atom i has been currently simulated, and X
is a random variable with negative exponential distribution.
The new spin value of i is decided considering
the current spin values of its four neighbors. The goal
of the simulation is to process the occurrence of M
spin changes (events).
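A sequential event-driven version of this model can be sketched as follows. The majority flip rule (random on ties) and the unit-rate exponential are assumptions of this sketch, since the text does not fix them.

```python
import heapq
import random

def simulate_ising(n_side, n_events, seed=0):
    """Sequential event-driven sketch of the n_side x n_side toroidal
    ising-spin system.  Atom i re-schedules itself at t + Exp(1); the
    flip rule (majority of the four neighbors, random on a tie) is an
    illustrative assumption -- the paper leaves it unspecified."""
    rng = random.Random(seed)
    n = n_side * n_side
    spin = [rng.choice((-1, 1)) for _ in range(n)]
    events = [(rng.expovariate(1.0), i) for i in range(n)]
    heapq.heapify(events)                  # the event-list (O(log n) ops)
    for _ in range(n_events):
        t, i = heapq.heappop(events)       # next spin-change attempt
        x, y = i % n_side, i // n_side
        nbrs = (((x + 1) % n_side) + y * n_side,
                ((x - 1) % n_side) + y * n_side,
                x + ((y + 1) % n_side) * n_side,
                x + ((y - 1) % n_side) * n_side)
        s = sum(spin[j] for j in nbrs)
        spin[i] = 1 if s > 0 else (-1 if s < 0 else rng.choice((-1, 1)))
        heapq.heappush(events, (t + rng.expovariate(1.0), i))
    return spin
```

The heap plays the role of the O(log n) event-list; a calendar queue would replace it for O(1) amortized cost.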
The sequential simulation of this system is trivial
since it is only necessary to deal with one type of
event and to use an efficient event-list to administer
the times t_i(k+1). The cost C_1 of processing each
event that takes place in the sequential algorithm is
then O(log n), or even O(1) if a calendar queue were used [in
[2] it has been conjectured that the calendar queue has
O(1) cost under a work-load very similar to the one
produced by the ising-spin system]. The cost of the
sequential simulation of the whole system of n atoms
is then T_S = M C_1.
In the case of the parallel simulation, the √n × √n
toroidal network is divided into √p × √p regions with
a × a = √(n/p) × √(n/p) atoms each. For each
region there are a total of 4(a - 1) atoms in the border
zone, i.e., f_bz = 4(a - 1)/a² = O(1/a). In each
region the same sequential event-list algorithm is applied
during the parallel phase, although it is executed
on a smaller number of atoms (n/p). The cost C_p
of processing every event during the parallel phase is
then O(log(n/p)), since the same event-list is used.
For each iteration, the cost of the parallel phase is
determined by the maximum number of events simulated
in any processor during that period. This number
is hard to determine. We optimistically assume
that on average a very similar number of events is
simulated by each processor. We are going to assume
that, from the total of M (1 - f_bz) events simulated
in all of the parallel phases executed during the simulation,
a total of M (1 - f_bz)/p are simulated by each
processor (this introduces a constant error since the
average maximum per iteration should be considered
here). Also we assume in our analysis that the M f_bz
BZ events that take place are simulated sequentially
(hyper-conservative algorithm of Figure 1). Thus the
cost T_P of the parallel simulation is given by

T_P = (M (1 - f_bz)/p) C_p + M f_bz (C_1 + T_CS(p, g, l)),

where T_CS(p, g, l) is the cost in communication (g) and
synchronization (l) among the p processors generated
in each iteration.
To predict the performance of a BSP algorithm we
need to compare it with the fastest sequential algorithm
for the same problem. With this aim we define the
speed-up S to be S = T_S / T_P. Then S > 1 shows that
there exists a BSP algorithm
with total cost T_P smaller than the
cost T_S of the sequential alternative (i.e. in a BSP
algorithm we not only consider its computation cost
but also its cost in communication and synchronization
among processors).
For the case of the ising-spin model we have T_S = M C_1.
Since C_p <= C_1, we can replace C_p by C_1 to obtain
an upper bound for T_CS required to achieve S > 1,
which expressed as

T_CS < C_1 (1 - f_bz)(1 - 1/p) / f_bz

shows that the effect of the cost T_CS is essentially absorbed
with f_bz and C_1. That is, given a particular
machine (characterized by its parameters g and l) we
can always achieve a speedup S > 1 for a sufficiently
large problem (characterized by its parameters f_bz and
C_1). For example, in an extreme situation of a system
with very low C_1 to be simulated on an inefficient machine,
say very high g and l, the only way to achieve
S > 1 is by increasing the parallel slackness (by
increasing a and/or reducing p) enough to reduce the
effect of T_CS(p, g, l).
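The trade-off between slackness and T_CS can be explored numerically. The sketch below assumes the cost model T_S = M C_1 and T_P = (M (1 - f_bz)/p) C_p + M f_bz (C_1 + T_CS), which is the reading of the analysis used in this section, not a formula taken verbatim from the paper.

```python
def speedup(m_events, p, f_bz, c1, cp, t_cs):
    """S = T_S / T_P for the sketched cost model:
    T_S = M * C_1
    T_P = (M * (1 - f_bz) / p) * C_p + M * f_bz * (C_1 + T_CS)."""
    t_s = m_events * c1
    t_p = (m_events * (1.0 - f_bz) / p) * cp + m_events * f_bz * (c1 + t_cs)
    return t_s / t_p
```

With f_bz = 0 and C_p = C_1 the model degenerates to perfect speedup p; a growing T_CS pulls S below 1, which is exactly the regime the bounds above rule out.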
For the hyper-conservative algorithm given in Figure
1 the cost T_CS is dominated by the parallel prefix
operation, i.e., T_CS = O(log_t p (t g + l)). For
the more efficient algorithm shown in Figure 2 this
cost depends on the number q of different processors
with which every processor has to communicate in
order to decide whether or not to simulate its next BZ
event, namely T_CS = O(q g + l) (for the
case of the 2D ising-spin model we have q <= 2).

Table 2: Results for the ising-spin system on an 8-processor IBM/SP2.
For the ising-spin model we can estimate bounds for
T_CS which ensure S > 1. To this end we substitute
f_bz = O(1/a) and C_1 = O(log n) in the above bound to obtain

T_CS = O(a log n)

for the hyper-conservative algorithm. On the other
hand, if for the less conservative algorithm we assume
that p BZ events are processed in each iteration,
namely that only M f_bz / p iterations are needed,
then we obtain the better bound

T_CS = O(a p log n).

Note that the more restricted case C_1 = O(1) leads to
the bounds O(a) and O(a p) respectively. Given the
bounds for g and l shown in Table 1 we can see that
the restriction (upper bound) for T_CS is possible to
satisfy in practice. For example, running the hyper-conservative
algorithm on a 2D array computer would
require one to adjust a so that a = O(√p).
In Table 2 we show empirical results for S. We
obtained S using the running time of the O(log n) sequential
algorithm and the less conservative parallel
algorithm in Figure 2. In column 4 (t) we show the
fraction of BZ events per iteration, where a value 1.0
means that one BZ event is simulated in each processor
(the best case). This column shows the average
among all the iterations.
5 Hard-disk fluids
Our second example is more complex: it consists of
a two-dimensional box of size L × L which contains
n hard-disks evenly distributed in it. After a random
assignment of velocities, the (non-obvious) problem
consists of simulating a total of N_DDC elastic disk-disk
collisions (DDC events) in a running time as small as
possible. In this section we show that similar bounds
for T_CS (although with much higher constant factors)
are required to simulate these systems efficiently on a BSP
computer.

To achieve an efficient sequential running time, the
whole box is divided into √n_c × √n_c cells of size σ × σ
with σ >= d, where d is the diameter of each
disk. The box is periodical in the sense that every
time a disk runs out of the box through a boundary
wall, it re-enters the box at the opposite point. The
neighborhood of a disk i whose center is located in the
cell c is composed of the cell c itself and the eight cells
immediately (periodically) adjacent to c. We define m to
be the average number of disks per cell. Since σ >= d,
a disk i can only collide with the 9 m disks located
in the neighborhood of i. This reduces from O(n)
to O(log n) the cost associated with the simulation of
every DDC event that takes place, since m can be
regarded as a constant and we use an O(log n) event-list
to administer the pending events (collisions).
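The 9-cell neighborhood on the periodic cell grid can be enumerated directly; the row-wise numbering of cells is an assumption of this sketch.

```python
def neighborhood(c, n_side):
    """Cell c plus its 8 periodically adjacent cells on an
    n_side x n_side toroidal cell grid, cells numbered row-wise."""
    x, y = c % n_side, c // n_side
    return [((x + dx) % n_side) + ((y + dy) % n_side) * n_side
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
```

For a corner cell the modular arithmetic wraps around both axes, which is what makes the box periodical.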
As the disks move freely between DDC events they
will eventually cross into neighboring cells. We regard
the instant when a disk i crosses from a cell c to a
neighboring cell c' as a virtual wall collision (VWC)
event. Then each time a VWC event takes place it is
necessary to consider the possible collisions among i
and the disks located in the new cells that become part
of the neighborhood of i, i.e., the cells immediately
adjacent to c' which are not adjacent to c (3 m disks
should be considered here). To consider the effect of
these events, we define ν to be the average number of
VWC events that take place between two consecutive
DDC events. So the goal of simulating N_DDC DDC
events actually involves processing the occurrence of
N_DDC (1 + ν) events. Note that 1/(1 + ν) represents
the probability that the next event to take place, in
a given instant of the simulated time, is a DDC event,
whereas ν/(1 + ν) is the probability that the next event is
a VWC event.
To perform the simulation it is necessary to maintain,
for each disk i, updated information of the times t
of all the possible collisions between i and the disks j
located in the neighborhood of i. It is also necessary
to periodically update the time when i crosses to a
neighboring cell through a virtual wall w. These computations
are done in a pair-wise manner by considering
only the positions and velocities of the two objects
involved in the event being calculated. The outcome
is a dynamic set of event-tuples E(t, i, j, e), where j
represents a disk or a virtual wall w, and e indicates
a DDC or VWC event. At initialization, the
first future events (event-tuples) are predicted for the
n disks of the system and then new future events are
successively calculated as the simulation advances to
the end, namely every time a disk suffers a DDC or
VWC event.
Notice that only a subset of all the events calculated
for each disk i are the ones that really occur during the
simulation, and it is not obvious how to identify these
events in advance. Different methods to cope with this
problem have been proposed in the literature [6, 10,
14, 20]. However the common principle is to use an
efficient data structure to maintain an event-list where
the future events are stored until they are removed to
take place or they are invalidated by earlier events;
a DDC event E(t_1, ...) stored in the event-list
becomes invalidated if another DDC event E(t_2, ...)
involving one of its disks takes place earlier during the simulation.

We assume here that only one event E is actually
maintained for each disk i in the event-list (the one
with minimal time), and if this event E becomes invalidated
a new event E' for i is calculated considering
the complete neighborhood of i. In other words, after
every DDC and VWC event, and when an invalid event is
retrieved from the event-list, new collisions are calculated
considering the 9 m disks located in the neighborhood
(this implies a fairly slower sequential simulation
but also simplifies its implementation and analysis).
Note that the fraction of invalid events which are retrieved
as the "next event" is less than 15% [14], so
we neglect this effect in our analysis as well.
After initialization, the simulation enters a basic
cycle essentially composed of the following operations:
(i) picking the chronological next event from the event-
list, (ii) updating the state(s) of the disk(s) involved
in the current event, (iii) calculating new events for
this (these) disk(s) (one VWC event and several DDC
events) and (iv) inserting one event in the event-list
per disk involved in the current event. These operations
are cyclically performed until some end condition
is reached (i.e., the occurrence of a border zone event
in the case of the parallel simulation).
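The basic cycle, the one-event-per-disk rule, and the lazy handling of invalidated entries can be sketched as follows. The per-disk version counter and the predict callback are illustrative devices of this sketch; the paper's actual data structures differ.

```python
import heapq

def run_cycle(initial, predict, n_events):
    """(i) pick the chronological next event, (ii)/(iii) update and predict
    anew, (iv) reinsert one event per disk.  `initial` holds
    (time, disk, partner) tuples; `predict(disk)` returns the disk's fresh
    (time, disk, partner).  A stale heap entry (old version) is discarded
    and re-predicted, mirroring the text's handling of invalidated events."""
    version = {i: 0 for (_, i, _) in initial}
    heap = [(t, i, j, 0) for (t, i, j) in initial]
    heapq.heapify(heap)
    processed = []
    while heap and len(processed) < n_events:
        t, i, j, v = heapq.heappop(heap)
        if v != version[i]:                    # invalidated entry: redo
            t2, _, j2 = predict(i)
            heapq.heappush(heap, (t2, i, j2, version[i]))
            continue
        processed.append((t, i, j))
        version[i] += 1                        # old entries become stale
        t2, _, j2 = predict(i)
        heapq.heappush(heap, (t2, i, j2, version[i]))
    return processed

# Demo (illustrative): two disks that always re-schedule against a wall "w".
_clock = {0: 1.0, 1: 1.1}
def _predict(d):
    _clock[d] += 1.0 + 0.1 * d
    return (_clock[d], d, "w")

demo_log = run_cycle([(1.0, 0, "w"), (1.1, 1, "w")], _predict, 4)
```

The heap plus version counters keeps retrieval chronological while avoiding an explicit delete operation on the event-list.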
The running time of the sequential algorithm can be
estimated as follows. Constant factors are neglected
by considering that each of the operations of updating
the position or velocity of a disk i, and calculating
one DDC event for i, are all single operations with
cost O(1). Also, for every disk i it is necessary to consider
the 9 m disks located in the neighborhood of i
while calculating new DDC events for i. Calculating
a VWC takes time O(1) as well. The cost associated
with the event-list is log n per event insertion whereas
retrieving the next event is negligible. Selecting the
event with minimal time for a disk i is also negligible
since this can be done as the new events are calculated.
This gives the costs 2 (9 m + log n) and
3 m + log n for the simulation of each DDC and
VWC event that takes place respectively. Then the
overall cost of the simulation of each DDC that takes
place is

C_1 = 2 (9 m + log n) + ν (3 m + log n).

Notice that C_1 includes the cost of the VWC events
that take place between two consecutive DDC events.
The total running time T_S of the sequential algorithm is
then T_S = N_DDC C_1.
Using the theory of hard-disk fluids, we have derived
in [13] an expression for ν as a function of m and of
the disk-area density ρ of the system. Setting
d T_DDC / d m = 0 we obtained an expression for the
optimal number of disks per cell m_opt, which remains small
even for extreme conditions (e.g., ρ = 0.01). On the other
hand, the restriction σ >= d imposes a lower bound for
m_opt. In practice, choosing m close to this lower bound leads
to an efficient simulation in terms of the total running
time and the space used by the cells.
During the parallel phase every processor simulates
the evolution of the disks located in its own region. If
there are a total of p processors and an average of n/p
disks in every region, then by the logarithmic property
log(n/p) = log n - log p we have C_p given by the
expression for C_1 with log n replaced by log(n/p);
this is the average running time spent by each processor
computing the occurrence of two consecutive
DDC events in its region. We emphasize here that
hard-disk systems are by far more difficult to simulate
in parallel than ising-spin models. In particular,
it is necessary to cope with the problem that an event
scheduled for a particular disk may not occur at the
predicted time; this disk can be hit by a neighboring
disk at an earlier simulated time. This necessarily
leads one to deal with the possibility of "rollbacks"
where the whole simulation is re-started at some check
point passed without error.
For example, in the algorithm of Figure 2, after the
processor simulating region R_a fetches the time t_a'
from region R_b in superstep s, it might occur that the processor
simulating region R_b changes the value t_a' in the next
superstep s + 1, invalidating in this way all the work
made by the processor simulating R_a during its parallel
phase (i.e., superstep s + 1). To cope with this problem,
we maintain an additional copy of the whole state
of the simulation. This state is an array with one entry
per disk. Each entry keeps the disk's information such
as position, velocity and local simulation time. We
also use a singly linked list to register each position of
the main state array that is modified during a complete
iteration (i.e., parallel phase plus synchronization
phase). If the above described problem occurs,
then we use the linked list to make the two state arrays
identical and repeat the iteration (s, s + 1) with
a better estimation of t_a'. See [13] for specific details
on the BSP simulation of hard-disk fluids (e.g.,
since a BSP computer is a distributed memory system
we maintain in each region a copy of the disks
located in the border zone of its neighboring regions;
thus the synchronization phase also involves the transference
of information among neighboring regions to
properly update the states of these disk copies).
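The state copy plus the list of modified positions behaves like an undo log; the sketch below records previous values instead of keeping a full second array, an equivalent but simplified scheme.

```python
class CheckpointedState:
    """Main state array (one entry per disk) plus an undo log of the
    writes made since the last checkpoint."""
    def __init__(self, entries):
        self.state = list(entries)
        self.log = []                        # (index, previous value)

    def write(self, i, value):
        self.log.append((i, self.state[i]))  # register the modified position
        self.state[i] = value

    def commit(self):
        """Iteration verified: discard the log (advance the checkpoint)."""
        self.log.clear()

    def rollback(self):
        """Mis-speculation detected: restore every modified entry, so the
        iteration can be repeated with a better estimation of t_a'."""
        while self.log:
            i, old = self.log.pop()
            self.state[i] = old
```

Restoring in reverse order handles entries written more than once within the same iteration.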
The regions to be simulated by each processor are
made up of a × a cells with a = √(n_c / p). Also we
define the BZ cells to be the 4 (a - 1) cells located
in the boundaries of every region. By studying the
probabilities of all the cases in which a BZ event takes
place we can calculate f_bz, which is a function of a
and other parameters of the hard-disk system. The
general expression for f_bz is given by

f_bz = (1/(1 + ν)) P_DDCBZ + (ν/(1 + ν)) P_VWCBZ,

where P_DDCBZ and P_VWCBZ are the probabilities of a
DDC and VWC event taking place in the border zone
respectively. These probabilities can be calculated considering
that a given disk has the same probability of being in an
arbitrary cell and that its direction is uniformly distributed
as well. These calculations are a bit involved because of the
many cases to be considered.
Briefly, the expressions given below were obtained
by studying the two types of BZ events (DDC and
VWC) and the positions of the disk involved
in them. With probability 4(a - 1)/a² the
disk is located in a BZ cell, whereas with probability
4(a - 3)/a² the disk is located in a cell
neighboring a BZ cell. For a VWC event the disk
crosses to a given cell with probability 1/4, whereas for
a DDC event the disk collides with a disk located in
a given neighboring cell with probability 1/8. If the disk
is located in a cell neighboring a BZ cell, it may in
particular be located in a corner of the region;
in this case the probability of a BZ event is
5/8 for a DDC and 2/4 for a VWC. When the disk is
located out of the corner these last probabilities are
3/8 and 1/4 respectively. Similar considerations are
used when the disk is located inside a BZ cell for a
VWC; for a DDC the corresponding probability is obtained
directly. By doing the weighted sum of all the cases we
have obtained expressions for P_DDCBZ and P_VWCBZ in
terms of a and P_m, where P_m represents the probability
of a DDC between two disks located in the same cell.
This probability depends on the size of the cells σ, but
for the purpose of our analysis it is enough to note that
the weighted sum yields

f_bz = O(1/a).
The running time T_P of the parallel algorithm is
given by T_P = T_PP + T_SP,
where T_PP is the time spent simulating N_PE events
(DDC and VWC) during the parallel phase, and T_SP
is the time spent in the synchronization phase simulating
one event plus the cost T_CS associated with the
communication and synchronization among the processors.
Note that from the N_PE events a total of
(1/(1 + ν)) N_PE are DDC events, and these events are
evenly distributed among the p processors, namely the p
processors simultaneously simulate (1/(1 + ν)) N_PE / p
DDC events each. Requiring S = T_S / T_P > 1 then yields,
as before, an upper bound for T_CS. This bound can be
made more exigent by assuming C_p = C_1, which gives
T_CS = O(a log n) and shows that for a practical simulation
with f_bz = O(1/a) the bound for T_CS is similar
to those of the ising-spin system. On the other hand, if
we assume that p border zone events are simulated in
each iteration of the less conservative algorithm, then
we obtain a bound similar to the corresponding one for the
ising-spin model as well. It is important to note that in the
calculations involved in the derivation of the speedup
S we have been very conservative, in the sense that
we are mixing BSP cost units with the ones defined
by ourselves. Our basic unit of cost (updating a disk
state or calculating a new event for a disk) is much
higher than the cost of each time step assumed for g
and l in the BSP model.

Table 3: Empirical results for the hard-disk fluid on the IBM/SP2.
In Table 3 we show empirical results for the hard-disk
fluid simulated with the less conservative algorithm
running on an IBM/SP2 parallel computer.
6 Final comments
In this paper we have derived upper bounds for
the cost of communication and synchronization among
processors in order to perform the efficient conservative
simulation of two system examples. We conclude
that it is possible to satisfy such bounds on current
parallel computers. Our empirical results confirm this
conclusion. We believe that the examples analyzed in
this paper exhibit sufficient generality and complexity
to be considered as representatives of a wide class of
systems where the events take place randomly and
evenly distributed among their constituent elements.
The first example is a very simple system where
the links among neighboring regions (processors) are
maintained fixed during the whole simulation. However,
the cost of each event processed in this system is
extremely low. This imposes harder requirements on
the cost of communication and synchronization (upper
bounds with much lower constant factors). The second
example is noticeably more complex because of the
dynamic nature of the system. Here the links among
regions change randomly during the simulation. Thus,
even for the hyper-conservative algorithm of Figure 1,
it is necessary to cope with the problem of roll-backs
since the scheduled events associated with each disk do
not necessarily occur at the predicted time. However
the simulated time progresses statistically at the same
rate in each region and the upper bounds for communication
and synchronization are similar to those of the
first simpler example (notice that the constant factors
are much higher in this second system example which
relaxes the requirements of these bounds).
All of our empirical results were obtained with the
less conservative algorithm shown in Figure 2 running
on an IBM/SP2. We could not obtain speedup S ? 1
with the hyper-conservative algorithm of Figure 1 running
under similar conditions. We emphasize, however,
that these results were obtained for a particular
machine with fairly high g and l values. Only by increasing
the slackness to the order of 10^5 disks per processor
did we obtain S > 1 with the algorithm of
Figure 1 under the experiments described in Table 3.
Acknowledgements
The author has been supported by University of
Magallanes (Chile) and a Chilean scholarship.
References
"Distributed simulation and time warp Part 1: Design of Colliding Pucks"
"Calendar queues: A fast O(1) priority queue implementation for the simulation event set problem"
"Discrete event simulation of object movement and interactions"
"The green BSP library"
"Per- formance of the colliding pucks simulation on the time warp operating system part 2: Asynchronous behavior & sectoring"
"An efficient algorithm for the hard-sphere problem"
"Efficient parallel simulations of dynamic ising spin systems"
"Efficient distributed event-driven simulations of multiple-loop networks"
"Simulating colliding rigid disks in parallel using bounded lag without time warp"
"How to simulate billiards and similars systems"
"Simulating billiards: Serially and in parallel"
"Direct BSP algorithms for parallel discrete-event simulation"
"Event-driven hard-particle molecular dynamics using bulk synchronous parallelism"
"Ef- ficient algorithms for many-body hard particle molecular dynamics"
"An empirical assessment of priority queues in event-driven molecular dynamics simulation"
"An object oriented C++ approach for discrete event simulation of complex and large systems of many moving ob- jects"
"General purpose parallel comput- ing"
"Parallel simulation of billiard balls using shared variables"
"The event scheduling problem in molecular dynamics simulation"
"Reduction of the event-list for molecular dynamic simulation"
"A bridging model for parallel com- putation"
"Distributed combat simulation and time warp: The model and its performance"
"Im- plementing a distributed combat simulation on the time warp operating system"
"Case studies in serial and parallel simulation"
Compositional refinement of interactive systems

Abstract. We introduce a method to describe systems and their components by functional specification techniques. We define notions of interface and interaction refinement for interactive systems and their components. These notions of refinement allow us to change both the syntactic interface (the number of channels and the sorts of messages at the channels) and the semantic interface (causality flow between messages and interaction granularity) of an interactive system component. We prove that these notions of refinement are compositional with respect to sequential and parallel composition of system components, communication feedback and recursive declarations of system components. According to these proofs, refinements of networks can be accomplished in a modular way by refining their components. We generalize the notions of refinement to refining contexts. Finally, full abstraction for specifications is defined, and compositionality with respect to this abstraction is shown, too.

1 Introduction
A distributed interactive system consists of a family of interacting components.
For reducing the complexity of the development of distributed interactive systems
they are developed by a number of successive development steps. By each
step the system is described in more detail and closer to an implementation
level. We speak of levels of abstraction and of stepwise refinement in system
development.
When describing the behavior of system components by logical specification
techniques a simple concept of stepwise refinement is logical implication. Then
a system component specification is a refinement of a component specification,
if it exhibits all specified properties and possibly more. In fact, then refinement
allows the replacement of system specifications by more refined ones exhibiting
more specific properties.
More sophisticated notions of refinement allow to refine a system component
to one exhibiting quite different properties than the original one. In this case,
however, we need a concept relating the behaviors of the refined system component
to behaviors of the original one such that behaviors of the refined system
component can be understood to represent behaviors of the original one. The
behavior of interactive system components is basically given by their interaction
with their environment. Therefore the refinement of system components
basically has to deal with the refinement of their interaction. Such a notion of
interaction refinement is introduced in the following.
Concepts of refinement for software systems have been investigated since
the early 1970s. One of the origins of refinement concepts is data structure
refinement as treated in Hoare's pioneering paper [Hoare 72]. The ideas of
data structure refinement given there were further explored and developed (see,
for instance, [Jones 86], [Broy et al. 86], [Sannella 88], see [Coenen et al. 91]
for a survey). Also the idea of refining interacting systems has been treated
in numerous papers (see, for instance, [Lamport 83], [Abadi, Lamport 90], and
[Back 90]).
Typically distributed interactive systems are composed of a number of components
that interact for instance by exchanging messages or by updating shared
memory. Forms of composition allow to compose systems from smaller ones.
Basic forms of composition for systems are parallel and sequential composition,
communication feedback and recursion.
For a set of forms of composition a method for specifying system components
is called compositional (sometimes also the word modular is used), if the specification
of composed systems can be derived from the specifications of the constituent
components. We call a refinement concept compositional, if refinements
of a composed system are obtained by giving refinements for the components.
Traditionally, compositional notions of specification and refinement for concurrent
systems are considered hard to obtain. For instance, the elegant approach
of [Chandy, Misra 88] is not compositional with respect to liveness properties
and does not provide a compositional notion of refinement. Note, it makes only
sense to talk about compositionality with respect to a set of forms of composi-
tion. Forms of composition of system components define an algebra of systems,
also called a process algebra. Not all approaches to system specifications emphasise
forms of composition for systems. For instance, in state machine oriented
system specifications systems are modelled by state transitions. No particular
forms of composition of system components are used. As a consequence compositionality
is rated less significant there. Approaches being in favor of describing
systems using forms of composition are called "algebraic". A discussion of the
advantages and disadvantages of algebraic versus nonalgebraic approaches can
be found, for instance, in [Janssen et al. 91].
Finding compositional specification methods and compositional interaction
refinement concepts is considered a difficult issue. Compositional refinement
seems especially difficult to achieve for programming languages with tightly
coupled parallelism as it is the case in a "rendezvous" concept (like in CCS
and CSP). In tightly coupled parallelism the actions are directly used for the
synchronization of parallel activities. Therefore the granularity of the actions
cannot be refined, in general, without changing the synchronization structure
(see, for instance, [Aceto, Hennessy 91] and [Vogler 91]).
The presentation of a compositional notion of refinement where the granularity
of interaction can be refined is the overall objective of the following
sections. We use functional, purely descriptive, "nonoperational" specification
techniques. The behavior of distributed systems interacting by communication
over channels is represented by functions processing streams of messages.
Streams of messages represent communication histories on channels. System
component specifications are predicates characterizing sets of stream processing
functions. System components described that way can be composed and decomposed
using the above mentioned forms of composition such as sequential
and parallel composition as well as communication feedback. With these forms
of composition all kinds of finite data processing nets can be described. Allowing
in addition recursive declarations even infinite data processing nets can be
described.
In the following concepts of refinement for interactive system components
are defined that allow one to change both the number of channels of a component
as well as the granularity of the messages sent by it. In particular,
basic theorems are proved that show that the introduced notion of refinement is
compositional for the basic compositional forms as well as for recursive declara-
tions. Accordingly for an arbitrary net of interacting components a refinement
is schematically obtained by giving refinements for its components. The correctness
of such a refinement follows according to the proved theorems schematically
from the correctness proofs for the refinements of the components.
We give examples for illustrating the compositionality of refinement. We
deliberately have chosen very simple examples to keep their specifications small
such that we can concentrate on the refinement aspects. The simplicity of these
examples does not mean that much more complex examples cannot be treated.
Finally we generalize our notion of refinement to refining contexts. Refining
contexts allow refinements of components where the refined presentation of the
input history may depend on the output history. This allows one, in particular,
to understand unreliable components as refinements of reliable components as
long as the refining context takes care of the unreliability. Refining contexts are
represented by predicate transformers with special properties. We give examples
for refining contexts.
In an appendix full abstraction of functional specifications for the considered
composing forms is treated.
2 Specification
In this section we introduce the basic notions for functional system models and
functional system specifications. In the following we study system components
that exchange messages asynchronously via channels. A stream represents a
communication history for a channel. A stream of messages over a given message
set M is a finite or infinite sequence of messages. We define
We briefly repeat the basic concepts from the theory of streams that we shall
use later. More comprehensive explanations can be found in [Broy 90].
• By x - y we denote the result of concatenating two streams x and y. We
assume that x - y = x for infinite streams x.
• By hi we denote the empty stream.
• If a stream x is a prefix of a stream y, we write x v y. The relation v is
called the prefix order. It is formally specified by
• By (M ! ) n we denote tuples of n streams. The prefix ordering on streams
as well as the concatenation of streams is extended to tuples of streams
by elementwise application.
A tuple of finite streams represents a partial communication history for a tuple
of channels. A tuple of infinite streams represents a total communication history
for a tuple of channels.
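For finite streams, the basic operations just introduced can be sketched directly. The following Python fragment is an illustration only: streams are modeled as Python tuples, so infinite streams and the cpo structure of the model are deliberately out of scope.

```python
# Sketch of the basic stream operations for *finite* streams only,
# modeling a stream over a message set M as a Python tuple.
# Infinite streams (and thus the full prefix-cpo structure) are not modeled.

def concat(x, y):
    """Concatenation x - y of two finite streams."""
    return x + y

def is_prefix(x, y):
    """Prefix order x v y: x is an initial segment of y."""
    return y[:len(x)] == x

def concat_tuples(xs, ys):
    """Elementwise extension of concatenation to tuples of streams."""
    return tuple(concat(x, y) for x, y in zip(xs, ys))

def is_prefix_tuples(xs, ys):
    """Elementwise extension of the prefix order to tuples of streams."""
    return all(is_prefix(x, y) for x, y in zip(xs, ys))
```

A tuple of finite streams then plays the role of a partial communication history, with the elementwise prefix order comparing histories channel by channel.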
The behavior of deterministic interactive systems with n input channels and
m output channels is modeled by (n; m)-ary stream processing functions
A stream processing function determines the output history for a given communication
history for the input channels in terms of tuples of streams.
Example 1 Stream processing function
Let a set D of data elements be given and let the set of messages M be specified
by:
Here the symbol ? is a signal representing a request. For data elements
a stream processing function
is specified by
The function (c:d) describes the behavior of a simple storage cell that can store
exactly one data element. Initially d is stored. The behavior of the component
modeled by (c:d) can be illustrated by an example input
The function (c:d) is a simple example of a stream processing function where
every input message triggers exactly one output message.
End of example
In the following we use some notions from domain and fixed point theory that
are briefly listed:
• A stream processing function is called prefix monotonic, if for all tuples of
streams
We denote the function application f(x) by f:x to avoid brackets.
• By tS we denote the least upper bound of a set S, if it exists.
• A set S is called directed, if for any pair of elements x and y in S there
exists an upper bound of x and y in S.
• A partially ordered set is called complete, if every directed subset has a
least upper bound.
• A stream processing function f is called prefix continuous, if f is prefix
monotonic and for every directed set S ' M ! we have:
The set of streams as well as the set of tuples of streams are complete. For
every directed set of streams there exists a least upper bound.
We model the behavior of interactive system components by sets of continuous
(and therefore by definition also monotonic) stream processing functions.
Monotonicity models causality between input and output. Continuity models
the fact that for every behavior the system's reaction to infinite input can be
predicted from the component's reactions to all finite prefixes of this input¹.
Monotonicity takes care of the fact that in an interactive system output already
produced cannot be changed when further input arrives. The empty stream is to
be seen as representing the information "further communication unspecified".
Note, in the example above by the preimposed monotonicity of the function
(c:d) we conclude (c:d):hi = hi; otherwise, we could construct a contradiction.
A specification describes a set of stream processing functions that represent
the behaviors of the specified systems. If this set is empty, the specification is
called inconsistent, otherwise it is called consistent. If the set contains exactly
one element, then the specification is called determined. If this set has more
than one element, then the specification is called underdetermined and we also
speak of underspecification. As we shall see, an underdetermined specification
may be refined into a determined one. An underdetermined specification can
also be used to describe hardware or software units that are nondeterministic.
An executable system is called nondeterministic, if it is underdetermined. Then
the underspecification in the description of the behaviors of a nondeterministic
system allows nondeterministic choices carried out during the execution of the
system. In the descriptive modeling of interactive systems there is no difference
in principle between underspecification and the operational notion of nondeter-
minism. In particular, it does not make any difference in such a framework,
whether these nondeterministic choices are taken before the execution starts or
step by step during the execution.
The set of all (n,m)-ary prefix continuous stream processing functions is
denoted by SPF n m .
The number and sorts of input channels as well as output channels of a specification
are called the component's syntactic interface. The behavior, represented
by the set of functions that fulfill a specification, is called the component's semantic
interface. The semantic interface includes in particular the granularity
of the interaction and the causality between input and output. For simplicity
we do not consider specific sort information for the individual channels of components
in the following and just assume M to be a set of messages. However,
all our results carry over straightforwardly to stream processing functions where
more specific sorts are attached to the individual channels.
¹ This does not exclude the specification of more elaborate liveness properties including
fairness. Note, fairness is, in general, a property that has to do with "fair" choices among
an infinite number of behaviors.
Figure 1: Graphical representation of a component Q
A specification of a possibly underdetermined interactive system component
with n input channels and m output channels is modeled by a predicate
characterizing prefix continuous stream processing functions. Q is called an
(n; m)-ary system's specification. A graphical representation of an (n; m)-ary
system component Q is given in Figure 1. The set of specifications of this form
is denoted by SPEC n m .
Example 2 Specification
A component called C (for storage Cell) with just one input channel and one
output channel is specified by the predicate C. The component C can be seen
as a simple store that can store exactly one data element. C specifies functions
f of the functionality:
Let the sets D and M be specified as in example 1. If C receives a data element
it sends a copy on its output channel. If it receives a request represented by
the signal ?, it repeats its last data output followed by the signal ? to indicate
that this is repeated output. The signal ? is this way used for indicating a "read
storage content request". The signal ? triggers the read operation. A data
element in the input stream changes the content of the store. The message d
triggers the write operation. Initially the cell carries an arbitrary data element.
This behavior is formalized by the following specification for C:
where the auxiliary function (c:d) is specified as in example 1. Notice that the
data element stored initially is not specified and thus component C is underdetermined.
End of example
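The behavior of C described above can be sketched operationally for finite input streams. The following Python fragment is our reading of the prose (the formal equations for (c:d) are not reproduced here), with '?' as the request signal:

```python
# Sketch of the storage-cell behavior (c.d) from Example 2, for finite streams.
# Streams are tuples; '?' is the request signal; anything else is a data element.
# On data e: output a copy of e and store e.  On '?': repeat the stored element
# followed by '?'.  This follows the prose description; the paper's equations
# are elided, so the exact output on '?' is our interpretation.

def cell(d, xs):
    out = []
    for m in xs:
        if m == '?':
            out += [d, '?']        # repeat last stored element, then the signal
        else:
            out.append(m)          # copy the data element to the output
            d = m                  # and store it
    return tuple(out)
```

Note that this reading is prefix monotonic in the sense of the definitions above: extending the input stream only extends the output stream, never revises output already produced.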
For a deterministic specification Q where for exactly one function q the predicate
Q is fulfilled, in other words where we have
we often write (by misuse of notation) simply q instead of Q. This way we
identify determined specifications and their behaviors.
By I m 2 SPF m
m we denote the identity function; that is we assume
We shall drop the index m for I m whenever it can be avoided without confusion.
m we denote the function that produces for every input just
the empty stream as output on all its output channels; that is we define
Similarly we write y m for the unique function in SPF m
0 , in other words the
function with m input channels, but with no output channels.
By / L n
m we denote the logically weakest specification, which is the
specification that is fulfilled by all stream processing functions. It is defined by
By Υ n we denote the function that produces two copies of its input. We have
Υ n 2 SPF n
2n and
Υ
By
n+m we denote the function that permutes its input streams as
Again we shall drop the index n as well as m in Υ whenever it
can be avoided without confusion.
3 Composition
In this section we introduce the basic forms of composition namely sequential
composition, parallel composition and feedback. These compositional forms are
introduced for functions first and then extended to component specifications.
3.1 Composition of Functions
Given functions
we write
for the sequential composition of the functions f and g which yields a function
in SPF n
Given functions
we write
fkg
for the parallel composition of the functions f and g which yields a function in
We assume that " ; " has higher priority than "k". Given a function
we write
-f
for the feedback of the output streams of function f to its input channels which
yields a function in SPF n
Here fix denotes the fixed point operator associating with any monotonic function
f its least fixed point fix:f . Thus -f :x = y means that y is, with respect to
the prefix ordering, the least solution of the equation y = f:(x; y). We assume
that "-" has higher priority than the binary operators ";" and "k". A graphical
representation for feedback is given in Figure 2.
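Restricted to finite streams, the three forms of composition can be sketched as follows. The fragment is a simplification of the definitions above: feedback is computed by Kleene iteration from the empty streams, which reaches the least fixed point after finitely many steps for the simple monotonic functions used here.

```python
# Sketch of sequential composition, parallel composition, and feedback
# for functions on tuples of finite streams (streams = Python tuples).
# Feedback mu(f) is approximated by Kleene iteration starting from the
# least element (empty output streams); for the monotonic, bounded
# functions used here this reaches the least fixed point in finitely
# many steps on finite inputs.

def seq(f, g):
    """(f ; g).x = g(f(x)) -- feed f's output streams into g."""
    return lambda xs: g(f(xs))

def par(f, g, n1):
    """(f || g): run f on the first n1 input streams, g on the rest."""
    return lambda xs: f(xs[:n1]) + g(xs[n1:])

def feedback(f, m, rounds=100):
    """mu(f): feed f's m output streams back as its last m inputs."""
    def mf(xs):
        ys = ((),) * m                 # start from the least element
        for _ in range(rounds):        # Kleene iteration
            new = f(xs + ys)
            if new == ys:
                return ys              # fixed point reached
            ys = new
        return ys
    return mf
```

The `rounds` bound is an artifact of this finite sketch; in the model the least fixed point is guaranteed by prefix continuity, without any such bound.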
We obtain a number of useful rules by the fixed point definition of -f . As a
simple consequence of the fixed point characterization, we get the unfold rules:
A graphical representation of the unfold rules for feedback is given in Figure 3.
Figure 2: Graphical representation of feedback
Figure 3: Graphical representation of the unfold rules for feedback
Figure 4: Graphical representation of semiunfold
A useful rule for feedback is semiunfold that allows one to move components
outside or inside the feedback loop (let g 2 SPF m
m ):
A graphical representation for semiunfold is given in Figure 4.
For reasoning about feedback loops and fixed points the following special
case of semiunfold is often useful:
The rule is an instance of semiunfold with y. The correctness of
this rule can also be seen by the following argument: if y is the least fixed point
of
and e y is the least fixed point of
then e
Semiunfold is a powerful rule when reasoning about results of feedback loops.
3.2 Composition of Specifications
We want to compose specifications of components to networks. The forms of
composition introduced for functions can be extended to component specifications
in a straightforward way. Given component specifications
we write
for the predicate in SPEC n
Trivially we have for all specifications Q 2 SPEC n
m the following equations:
Given specifications
we write
QkR
for the predicate in SPEC n1+n2
m1+m2 where
Given specification
we write
for the predicate in SPEC n
For feedback over underdetermined specifications we get the following rules²:
² For determined system specifications Q we get the stronger rules
and which do not hold for underdetermined systems, in general. The
erroneous assumption that these rules are valid also for underdetermined systems is the source
of the merge anomaly (see [Brock, Ackermann 81]).
A useful rule for feedback is fusion that allows one to move components that are
not affected by the feedback outside or inside the feedback operator application.
With the help of the basic functions and the forms of composition introduced
so far we can represent all kinds of finite networks of systems (data flow nets)³.
The introduced composing forms lead to an algebra of system descriptions.
4 Refinement, Representation, Abstraction
In this section we introduce concepts of refinement for system components both
with respect to the properties of their behaviors as well as with respect to their
syntactic interface and granularity of interaction.
We start by defining a straightforward notion of property refinement for
system component specifications. Then we introduce a notion of refinement
for communication histories. Based on this notion we define the concept of
interaction refinement for interactive components. This notion allows one to refine
a component by changing the number of input and output channels as well as
the granularity of the exchanged messages.
4.1 Property Refinement
Specifications are predicates characterizing functions. This leads to a simple
notion of refinement of component specifications by adding logical properties.
Given specifications
e
Q is called a (property) refinement of Q
if for all f 2 SPF n
e
Then we write
e
If e
Q is a property refinement for Q, then e
Q has all the properties Q has and
may be some more. Every behavior that e
Q shows is also a possible behavior of
Q.
³ Of course, the introduced combinatorial style for defining networks is not always very
useful in practice, since the combinatorial formulas are hard to read. However, we prefer
throughout this report to work with these combinatorial formulas, since this puts emphasis
on the compositional forms and the structure of composition. For practical purposes a notation
with named channels is often more adequate.
All considered composing forms are monotonic for the refinement relation as
indicated by the following theorem.
Theorem 1 (Compositionality of Refinement)
Proof: Straightforward, since all operators for specifications are defined point-wise
on the sets of functions that are specified. 2
A simple example of a property refinement is obtained for the component C as
described in Example 2 on page 8 if we add properties about the data element
initially stored in the cell. A property refinement does not allow one to change
the syntactic interface of a component, however.
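Property refinement can be illustrated directly: specifications are predicates on functions, and refinement is pointwise implication. The sketch below checks the implication only on a finite sample of behaviors (so it can refute, but never prove, a refinement); the storage-cell behavior is the reading from Example 2, and the initial-content property is an assumed example refinement in the spirit of the remark above.

```python
# Sketch: specifications as predicates on (stream-processing) functions,
# property refinement as pointwise implication.  Exhaustive checking is
# impossible; we test the implication on a finite sample of candidate
# behaviors, which can only refute, not establish, a refinement.

def refines(Q_tilde, Q, sample):
    """Check Q_tilde => Q on a finite sample of functions."""
    return all(Q(f) for f in sample if Q_tilde(f))

# Example cells (reading of Example 2), parameterized by initial content d0.
def make_cell(d0):
    def f(xs):
        out, d = [], d0
        for m in xs:
            if m == '?':
                out += [d, '?']
            else:
                out.append(m); d = m
        return tuple(out)
    return f

sample = [make_cell(d) for d in range(3)]
C = lambda f: True                      # underdetermined: any initial content
C0 = lambda f: f(('?',))[0] == 0        # determined: initially stores 0
assert refines(C0, C, sample)           # C0 is a property refinement of C
assert not refines(C, C0, sample)       # but not vice versa
```

This mirrors the remark that adding properties (fixing the initially stored element) turns the underdetermined specification C into a determined one.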
4.2 Interaction Refinement
Recall from section 2 that streams model communication histories on channels.
In more sophisticated development steps for a component the number of channels
and the sorts of messages on channels are changed. Such steps do not
represent property refinements. Therefore we introduce a more general notion
of refinement. To be able to do this we study concepts of representation of
communication histories on n channels modeled by a tuple of n streams by
communication histories on m channels modeled by a tuple of m streams.
Tuples of streams y can be seen as representations of tuples of
streams x. For this purpose we introduce a mapping ae 2 SPF n
m that associates
with every x its representation. ae is called a representation function. If ae is
injective then it is called a definite representation function. Note, a mapping ae
is injective, if and only if:
If a specification R 2 SPEC n
m is used for the specification of a set of representation
functions, R is called a representation specification.
Example 3 Representation Specification
We specify a representation specification R allowing the representation of streams
of data elements and requests by two separate streams, one of which carries the
requests and the other of which carries the data elements. The representation
functions are mappings ae of the following functionality:
Here the symbol
p is used as a separator signal. It can be understood as a time tick that
separates messages. Given streams x and y let [x; y] denote a pair of streams and
the elementwise concatenation of pairs of streams, in other words:
Let Ticks be defined as the set of pairs of streams of ticks that have equal
length. We specify the representation specification R explicitly as follows:
Note, by the monotonicity of the specified functions:
The computation of a representation is illustrated by the following example:
The example demonstrates how the time ticks are used to indicate in the streams
ae(x) the order of the requests relatively to the data messages in the original
stream x.
End of example
The elements in the images of the functions ae with R:ae are called representations.

Definition 1 (Definite representation specification) A representation
specification R is called definite, if
In other words R is definite, if different streams x are always differently represented
Obviously, if R is a definite representation specification, then all functions ae
with R:ae are definite. For definite representation specifications for elements x
and x with x 6= x the sets of representation elements
are disjoint. Note, the representation specification given in the example above
is definite.
For every injective function, and thus for every definite representation function
ae, there exists a function ff 2 SPF m
n such that:
The function ff is an inverse to ae on the image of ae. The function ff is called
an abstraction for ae. Notice that ff is not uniquely determined as long as ae is
not surjective. In other words, as long as not all elements in (M are used
as representations of elements in (M ! ) n there may be several functions ff with
A:ff.
The concept of abstractions for definite representation functions can be extended
to definite representation specifications.
Definition 2 (Abstraction function) Let R 2 SPEC n
m be a definite representation
specification; a function ff 2 SPF m
n with
is called an abstraction function for R.
The existence of abstractions follows from the definition of definite representation
specifications. Again, for definite representation specifications the abstraction
functions ff are uniquely determined only on the image of R, that is, on the
union of the images of functions ae with R:ae.
Definition 3 (Abstraction for a definite representation specification)
n be the specification with
Then A is called the abstraction for R.
For consistent definite representation specifications R with abstraction A we
have
If ae; contains all possible choices of representation functions
for the abstraction A.
Example 4 Abstraction
For the representation specification R described in example 3 the abstraction
functions ff are mappings of the functionality:
The specification of A reads as follows.
It is a straightforward rewriting proof that indeed:
The specification A shows a considerable amount of underspecification, since
not all pairs of streams in f?;
are used as representations.
End of example
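In the spirit of Examples 3 and 4, a definite representation ae and a matching abstraction ff can be sketched for finite streams. The concrete encoding below is an assumption, since the paper's equations are elided: each message is placed on its own channel and then a tick is appended to both channels, so the relative order of requests and data is recoverable and the representation is injective.

```python
# Sketch of a definite representation in the spirit of Examples 3 and 4:
# a stream over D + {'?'} is split into a request stream and a data stream.
# Encoding (our assumption -- the paper's equations are not reproduced):
# each message goes on its own channel, then the separator TICK is appended
# to BOTH channels, so the relative order of requests and data is recoverable.

TICK = 'tick'

def rho(x):
    """Representation function: one stream -> (request stream, data stream)."""
    req, dat = [], []
    for m in x:
        (req if m == '?' else dat).append(m)
        req.append(TICK); dat.append(TICK)     # close the slot on both channels
    return tuple(req), tuple(dat)

def alpha(pair):
    """Abstraction: inverse of rho on the image of rho."""
    req, dat = pair
    out, i, j = [], 0, 0
    while i < len(req) or j < len(dat):
        if i < len(req) and req[i] != TICK:    # a request came in this slot
            out.append(req[i]); i += 1
        elif j < len(dat) and dat[j] != TICK:  # a data element came in this slot
            out.append(dat[j]); j += 1
        i += 1; j += 1                         # skip the ticks closing the slot
    return tuple(out)
```

Since alpha(rho(x)) = x for every finite stream x, this rho is injective, i.e. a definite representation in the sense of Definition 1, and alpha plays the role of its abstraction.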
Parallel and sequential composition of definite representations leads to definite
representations again.
Theorem 2 Let R i be definite representation specifications for i = 1; 2; then
R 1 kR 2 and R 1 ; R 2
(assuming matching arities in the second formula) are definite representation specifications.
Proof: Sequential and parallel composition of injective functions leads to injective
functions. 2
Trivially we can obtain the abstractions of the composed representations by
composing the abstractions.
For many applications, representation specifications are neither required to
be determined nor even definite. For an indefinite representation specification
sets of representation elements for different elements are not necessarily disjoint.
Certain representation elements y do occur in several sets of representations for
elements. They ambiguously stand for ("represent") different elements. Such
an element may represent the streams x as well as x, if ae:x = ae:x for functions
ae and ae with R:ae and R:ae. For indefinite representation specifications the represented
elements are not uniquely determined by the representation elements.
A representation element y stands for the set
For a definite representation specification R this set contains exactly one element
while for an indefinite representation specification R this set may contain more
than one element. In the latter case, of course, abstraction functions ff with
I do not exist.
However, even for certain indefinite representations we can introduce the
concept of an abstraction.
Definition 4 (Uniform representation specifications) A consistent specification
m is called a uniform representation specification, if there
exists a specification A 2 SPEC m
n such that for all ae:
The specification A is called again the abstraction for R.
The formula expresses that (R; A) is a left-neutral element for every representation
function in R. Essentially the existence of an abstraction expresses the
following property of R: if for different elements x and x the same representations
are possible, then every representation function maps these elements onto
equal representations. More formally stated, if there exist functions e
ae and ae
with R:eae and R:ae such that
e
then for all functions ae with R:ae:
Thus if elements are identified by some representation functions, this identification
is present in all representation functions. The same amount of information
is "forgotten" by all the representations. The representation functions then are
indefinite in a uniform way. Definite representations are always uniform.
A function ae is injective, if for all x and x we have:
A function ae that is not injective defines a nontrivial partition on its domain.
A representation specification is uniform if and only if all functions ae with R:ae
define the same partition.
For a uniform representation specification R with abstraction A the product
(R; A) reflects the underspecification in the choices of the representations provided
by R. If for a function fl with (R; A):fl we have
then x and x have the same representations.
Definition 5 (Adequate representation) A uniform representation specification
R with abstraction A is called adequate for a specification Q, if:
Adequacy means that the underspecification in (R; A) does not introduce more
underspecification into Q; R; A than already present in Q. Note, definite representations
are adequate for all specifications Q.
Definition 6 (Interaction refinement) Given representation specifications
R 2 SPEC n
m and R, and specifications Q and b
Q, we say that b
Q is an
interaction refinement of Q for the representation specifications R and R, if

Figure 5: Commuting diagram of interaction refinement
This definition indicates that we can replace via an interaction refinement a
system of the form Q; R by a refined system of the form R; b
Q. We may think
about the relationship between Q and b
Q as follows: the specification Q specifies
a component on a more abstract level while Q 0 gives a specification for the
component at a more concrete level. Instead of computing at the abstract level
with Q and then translating the output via R onto the output representation
level, we may translate the input by R onto the input representation level and
compute with b
Q. We obtain one of these famous commuting diagrams as shown in Figure 5.
Definition 7 (Adequate interaction refinement) The interaction refinement
of Q for the representation specifications R and R is called adequate for
a specification Q, if R is adequate for Q.
For adequate interaction refinements using uniform representation specifications
R with abstraction A 2 SPEC m
since from the interaction refinement property we get
and by the adequacy of R for Q
which shows that R; b
Q; A is a (property) refinement of Q. A graphical illustration
of adequate interaction refinement is shown in Figure 6.
Figure 6: Commuting diagram of adequate interaction refinement
The following table summarizes the most important definitions introduced
so far.
Table of definitions
e
property refinement of Q e
R consistent, definite with abstr. A R;
R uniform with abstraction A R:ae ) R;
R adequate for Q with abs. A Q; R; A ) Q
Inter. refinement b
Q of Q for R; R R; b
Adequate inter. refinement R uniform and adequate for Q
The notion of interaction refinement allows one to change both the syntactic
and the semantic interface. The syntactic interface is determined by the number
and sorts of channels; the semantic interface is determined by the behavior of
the component represented by the causality between input and output and by
the granularity of the interaction.
Example 5 Interaction Refinement
We refine the component C as given in Example 2 into a component b
C that has
instead of one input and one output channel two input and two output channels.
The refinement b
C uses one of its channels carrying the signal ? as a read channel
and one of its channels carrying data as a write channel. Let R and A be given
as specified in the examples above
We specify the interaction refinement b
C of C explicitly. b
C specifies functions
of functionality:
We specify:
where the auxiliary function h is specified by:
It is a straightforward proof to show:
Assume ae with R:ae and h such that there exist f and d with b
C:f and
we prove by induction on the length of the stream x that there exist e ae with R:eae
and c:d as specified in example 1 such that:
ae:(c:d):x
For x = hi we obtain: there exists t 2 Ticks such that:
e
e
ae:(c:d):x
Now assume the hypothesis holds for x; there exists t 2 Ticks:
There exists t 2 Ticks:
This concludes the proof for finite streams x. By the continuity of h and ae the
proof is extended to infinite x.
End of example
Continuing with the system development after an adequate interaction refinement
of a component we may decide to leave R and A unchanged and carry on
by just further refining b
Q.
5 Compositionality of Interaction Refinement
Large nets of interacting components can be constructed by the introduced
forms of composition. When refining such large nets it is decisive for keeping
the work manageable that interaction refinements of the components lead to
interaction refinements of the composed system.
In the following we prove that interaction refinement is indeed compositional
for the introduced composing forms that is sequential and parallel composition,
and communication feedback.
5.1 Sequential and Parallel Composition
For systems composed by sequential compositions, refinements can be constructed
by refining their components.
Theorem 3 (Compositionality of refinement, seq. composition) Assume
b
Q i is an interaction refinement of Q i for the representations R i-1 and R i
(i = 1; 2); then b
Q 1 ; b
Q 2 is an interaction refinement of Q 1 ; Q 2 for the representations
R 0 and R 2 .
Proof: A straightforward derivation shows the theorem:
{interaction refinement of Q 1 }
{interaction refinement of Q 2 }
Example 6 Compositionality of Refinement for Sequential Composition
Let C and b
C be specified as in the example above. Of course, we may compose
C as well as b
C sequentially. We define the components CC and d
CC by:
Note, CC is a cell that repeats its last input twice on a signal ?. It is a straightforward
application of our theorem of the compositionality of refinement that
d
CC is a refinement of CC :
Of course, since R; A = I we also have that R; d
CC;A is a property refinement
of CC.
End of example
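The behavior of CC = C; C can be traced for finite streams with the storage-cell reading from Example 2 (our interpretation of the elided equations): after the initial copy of a data element, a request ? makes the first cell emit the stored element and ?, and the second cell then copies that element and reacts to ? itself, so the last input is repeated twice before the signal.

```python
# Sketch: the composed cell CC = C ; C from Example 6, for finite streams,
# using the storage-cell reading from Example 2 (our interpretation of the
# elided equations).  Sequential composition feeds the first cell's output
# stream into the second cell as its input stream.

def cell(d, xs):
    out = []
    for m in xs:
        if m == '?':
            out += [d, '?']        # repeat stored element, then the signal
        else:
            out.append(m); d = m   # copy and store the data element
    return tuple(out)

def CC(d1, d2, xs):
    """Two cells in sequence: the output of the first is the input of the
    second (initial contents d1, d2 are the two cells' stored elements)."""
    return cell(d2, cell(d1, xs))
```

For instance, on input ('a', '?') the composed cell produces ('a', 'a', 'a', '?'): the copy of 'a', then 'a' repeated twice by the request, then the signal, matching the prose description of CC.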
Refinement is compositional for parallel composition, too.
Theorem 4 (Compositionality of refinement for parallel composition)
Assume b
Q i is an interaction refinement of Q i for the representations R i and R i
(i = 1; 2); then b
Q 1 k b
Q 2 is an interaction refinement of Q 1 kQ 2 for the representations
R 1 kR 2 and R 1 kR 2 .
Proof: A straightforward derivation shows the theorem:
(R
{sequential and parallel composition}
(R
{interaction refinement for Q 1 and Q 2 ,
sequential and parallel composition}
(R 1 kR 2 )

For sequential and parallel composition, compositionality of refinement is quite
straightforward. This can be seen from the simplicity of the proofs.
5.2 Feedback
For the feedback operator, refinement is not immediately compositional. We
do not obtain, in general, that - b
Q is an interaction refinement of -Q for the
representations R and R provided b
Q is an interaction refinement of Q for the
representations RkR and R. This is true, however, if I ) (A; R) (see below).
The reason is as follows. In the feedback loops of - b
Q we cannot be sure that
only representations of streams (i.e. streams in the images of some of the functions
characterized by R) occur. Therefore, we have to give a slightly more
complicated scheme of refinement for feedback.
Theorem 5 (Compositionality of refinement, feedback) Assume b
Q is an
interaction refinement of Q for the representation specifications RkR and R
where R is uniform; then -((IkA; R); b
Q) is an interaction refinement of -Q for
the representations R and R.
Proof: We prove:
(R; -((IkA; R); b
From
(R; -((IkA; R); b
we conclude that there exist functions ae, b
q, ae, and ff such that R:ae, b
Q:bq, R:ae,
and A:ff and furthermore
Q is an interaction refinement of Q for the representations RkR and R
for functions ae with R:ae and ae with R:ae and -
q with b
Q:q there exist functions q
and e ae such that Q:q and R:eae hold and furthermore
ae
Given x, because of the continuity of ae, b
q, ae, and ff, we may define -((Ikff; ae); b q):ae:x
by tby i where
Moreover, because of the continuity of q, we may define ~
ae:(-q):x by ~
ae: ty i where
We prove:
e
ae: t y
by computational induction. We prove by induction on i the following proposition:
ae:y
{y 0 is the least element}
e
{y 0 is the least element}
e
{y 0 is the least element}
{definition of b
Assume now the proposition holds for i; then we obtain:
{definition of b y
{induction hypothesis}
e
{definition of y
e
ae:y
Furthermore we get:
e
{definition of y
e
{induction hypothesis}
{definition of b y i+2 }
b y i+2
From this we conclude by the continuity of e
ae that:
and thus
and finally
q))

Assuming an adequate refinement allows us to obtain immediately the following
corollary.
Theorem 6 (Compositionality of adequate refinement, feedback) Assume
Q is an adequate interaction refinement of Q for the representations RkR
and R with abstraction A then -( b
Q; A; R) is an interaction refinement of -Q
for the representations R and R.
Proof: Let all the definitions be as in the proof of the previous theorem. Since
the interaction refinement is assumed to be adequate, there exists a function e
q with Q:e
q such that
Carrying out the proof of the previous theorem with e q instead of q and ae instead
of e ae we get:
By straightforward computational induction we may prove
This concludes the proof. 2
Assuming that A; R contains the identity as a refinement we can simplify the
refinement of feedback loops.
Theorem 7 Assume b
Q is an interaction refinement of Q for the representations
RkR and R with abstraction A and assume furthermore
I ) A; R
then - b
Q is an interaction refinement of -Q for the representations R and R.
Proof: Straightforward deduction shows:
-Q; R

Note, even if I is not a refinement of A; R, in other words even if I ) A; R
does not hold, other refinements of A; R may be used to simplify and refine the
term A; R in -((IkA; R); b
Q). By the fusion rule for feedback as introduced in
section 3 we obtain:
This may allow further refinements for b
Q.
Example 7 Compositionality of Refinement for Feedback
Let us introduce the component F with two input channels and one output
channel. It specifies functions of the following functionality:
F is specified as follows:
where the auxiliary function g is specified by
It is a straightforward proof that for the specification C as defined in Example
2:
We carry out this proof by induction on the length of the input streams x. We
show that -f fulfills the defining equations for functions c:d in the definition
of C in Example 2. Let f be a function with F:f and g be a function as
specified above in the definition of F . We have to consider just two cases: by
the definition of f there exists g as defined above such that: there exists d:
Induction on the length of x and the continuity of the function g conclude the
proof.
The refinement b
F of F according to the representation specification R from
example 3 specifies functions of the functionality:
It reads as follows:
where the auxiliary function g is specified by
We have (again, this can be proved by a straightforward rewrite proof):
Moreover, we have according to Theorem 5:
and therefore
Note, the refinement is definite and therefore adequate for F . Therefore we may
replace -((IkA; R); b F ) by -( b F ).
The component -( b F ) can be further refined by refining A; R. Let us,
therefore, look for a simplification for A; R. We do not have
I ) A; R
since by the monotonicity of all α with A:α we have:
(otherwise we obtain a contradiction, since by monotonicity the first elements
of α(x; d - y) have to coincide for all x and y). Therefore for all ρ with R:ρ:
This indicates that there are no functions ρ and α with R:ρ and A:α such that
is valid for all x. We therefore cannot simply refine A; R into I.
We continue the refinement by refining p. We take into account properties
of b
F . A simple rewriting proof shows:
Summarizing our refinements we obtain:
This concludes our example of refinement for feedback.
End of example
Recall that every finite network can be represented by an expression that is
built by the introduced forms of composition. The theorems show that a network
can be refined by defining representation specifications for the channels and by
refining all its components. This provides a modular method of refinement for
networks.
6 Recursively defined Specifications
Often the behavior of interactive components is specified by recursion. Given a
function
a recursive declaration of a component specification Q is given by a declaration
based on - :
Recursive specifications are restricted in the following to functions - that exhibit
certain properties.
6.1 Semantics of Recursively Defined Specifications
A function - where
is monotonic with respect to implication, if:
A set of specifications is called a chain, if for all i 2 IN and for all
functions f 2 SPF n
A function - is continuous with respect to implication, if for every chain
Note, the set of all specifications forms a complete lattice.
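The displayed formulas for monotonicity and continuity did not survive extraction. A standard reconstruction, which should be checked against the original (in particular the direction of the chain ordering is an assumption here), is:

```latex
% Monotonicity of \tau with respect to implication (refinement):
(X \Rightarrow Y) \;\Longrightarrow\; (\tau(X) \Rightarrow \tau(Y))
% Continuity: \tau distributes over the meet of a chain Q_0, Q_1, \ldots
\tau\Bigl(\bigwedge_{i \in \mathbb{N}} Q_i\Bigr)
  \;=\; \bigwedge_{i \in \mathbb{N}} \tau(Q_i)
```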
Definition 8 (Predicate transformer) A predicate transformer is a func-
tion
that is monotonic and continuous with respect to implication (refinement).
Note, if - is defined by -(X) = Net(X), where Net(X) is a finite network composed
of basic component specifications by the introduced forms of composition,
then - is a predicate transformer.
A recursive declaration of a component specification Q is given by a defining
equation (often called the fixed point equation) based on a predicate transformer
A predicate Q is called a fixed point of - if:
In general, for a function - there exist several predicates Q that are fixed points
of - . In fixed point theory a partial order on the domain of - is established
such that every monotonic function - has a least fixed point. This fixed point is
associated with the identifier f by a recursive declaration of the form
For defining the semantics of programming languages the choice of the ordering,
which determines the notion of the least fixed point, has to take into account
operational considerations. There the ordering used in the fixed point construction
has to reflect the stepwise approximation of a result by the execution. For
specifications such operational constraints are less significant.
Therefore we choose a very liberal interpretation for recursive declarations
of specifications in the following. For doing so we define the concept of an upper
closure of a specification. The upper closure is again a predicate transformer:
It is defined by the following equation:
Notice that \Xi is a classical closure operator, since it has the following characteristic
properties:
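The characteristic properties themselves are missing from the extracted text. For a classical closure operator they are extensivity, monotonicity, and idempotence, here taken with respect to implication (a reconstruction, not the original display):

```latex
Q \;\Rightarrow\; \Xi(Q)
\qquad
(Q_1 \Rightarrow Q_2) \;\Longrightarrow\; \bigl(\Xi(Q_1) \Rightarrow \Xi(Q_2)\bigr)
\qquad
\Xi\bigl(\Xi(Q)\bigr) \;=\; \Xi(Q)
```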
A predicate Q is called upward closed, if by \Xi the least
element\Omega is mapped onto the specification / L that is fulfilled by every function,
that is
From a methodological point of view it is sufficient to restrict
our attention to specifications that are upward closed 4 . This methodological
consideration and the considerable simplification of the formal interpretation
of recursive declarations are the reasons for considering only upward closed
solutions of recursive equations.
A predicate transformer - is called upward closed, if for all predicates Q we
By the recursive declaration
4 Taking the upper closure for a specification may change its safety properties. However,
only safety properties for those behaviors may be changed where the further output, independent
of further input, is empty. A system with such a behavior does not produce a specific
message on an output channel, even, if we increase the streams of the messages on the input
channels. Then what output is produced on that channel obviously is not relevant at all.
we associate with Q the predicate that fulfills the following equation:
where the predicates Q i are specified by:
According to this definition we associate with a recursive declaration the logically
weakest 5 predicate Q such that
The predicate Q is then denoted by fix:- .
6.2 Refinement of Recursively Specified Components
A uniform representation specification R with abstraction A is called adequate
for the predicate transformer - , if for all predicates X:
Adequacy implies that specifications for which R is adequate are mapped by -
onto specifications by for which R is adequate again.
Uniform interaction refinement is compositional for recursive definitions based
on predicate transformers for which the refinement is adequate. Again, definite
representations are always adequate.
Theorem 8 (Compositionality of refinement for recursion) Let representation
specifications R and R be given, where R is uniform with abstraction
A and adequate for the predicate transformer
For a predicate transformer
where
and for all predicates X; b
(R; b
we have
5 True is considered weaker than false.
Proof: Without loss of generality assume that the predicate transformers - and
are upward closed. Define
We prove:
This proposition is obtained by a straightforward induction proof on i. For
we have to show:
which is trivially true, since /
L holds for all functions. The induction step reads
as follows: from
we conclude by the adequacy of - :
{definition of Q}
{induction hypothesis}
{definition of Q}
We prove by induction on i:
For we have to prove:
This is part of our premises. Now assume the induction hypothesis holds for
trivially
Therefore, with
by our premise we have:
By the induction hypothesis and by the fact
as can be seen by the derivation
We obtain:
- with
representations R the premise
is always valid as the following straightforward derivation shows:
{definition of / L}
We immediately obtain the following theorem as corollary. It can be useful for
simplifying the refinement of recursion.
Theorem 9 Given the premises of the theorem above and in addition
I ) A; R
we have
Proof: The theorem is proved by a straightforward deduction:
Note, even if I is not a refinement of A; R, that is even if I ) A; R does not
hold, other refinements of A; R may be used to simplify the term A; R in the
specification.
Example 8 Compositionality of Refinement for Recursion
Of course, instead of giving a feedback loop as in example 7 above we may also
define an infinite network recursively by 6 :
where
Again we obtain (as a straightforward proof along the lines of the proof above
for
It is also a straightforward proof to show that
(R; b
where
F
Therefore we have
where
by our compositionality results. Again, A; R can be replaced by its refinement
as shown above.
End of example
Using recursion we may define even infinite nets. The theorem above shows
that a refinement of an infinite net that is described by a recursive equation is
obtained by refinement of the components of the net.
7 Predicate Transformers as Refinements
So far we have considered the refinement of components by refining on one hand
their tuples of input and on the other hand their tuples of output streams. A
more general notion of refinement is obtained by considering predicate transformers
themselves as refinements.
6 The predicate transformer - is obtained by the unfold rule for feedback.
Definition 9 (Refining context) A predicate transformer
is called a refining context, if there exists a mapping
called abstracting context such that for all predicates X we have:
Refining contexts can be used to define a quite general notion of refinement.
(Refinement by refining contexts) Let R be a refining context
with abstracting context A. A specification b
Q is then called a refinement for
the abstracting context A of the specification Q, if:
Note, R:Q is a refinement of the specification Q for the abstracting context A.
Refining contexts may be defined by the compositional forms introduced in the
previous sections.
Example 9 Refining Contexts
For component specifications Y with one input channel and two output channels
we define a predicate transformer
1by the equation:
where the component P specifies functions
A graphical representation of A:Y is given in Figure 7. Let P be specified by:
For a component specification X with one input channel and one output
channel we define a predicate transformer:
where the component Q specifies functions
Y
A:Y
Figure
7: Graphical representation of A:Y
Let Q be specified by:
stand for the finite stream of length k containing just copies of the
message m. To show that A and R define a refining context we show that:
which is equivalent to showing that for all specifications X:
This is equivalent to:
which is equivalent to the formula:
which can be shown by a proof based on the specifications of P and Q. Let
. stand for (I 2 ky) and & stand for the function (ykI 1 ). For functions p and q
with P:p and Q:q there exists k 2 IN such that 8i 2 IN with i - k:
This can be shown by a straightforward proof of induction on i. By this we
obtain for
Furthermore:
We obtain
By induction on the length on x and the continuity of the involved functions
the proposition above is proved.
End of example
Context refinement is indeed a generalization of interaction refinement. Given
two pairs of definite representation and abstraction specifications R; A and R; A
by
a refining context and an abstracting context is defined, since
Refining contexts lead to a more general notion of refinement than interaction
refinement. There are specifications Q and b
Q such that there do not exist
consistent specifications R and A where
but there may exist refining contexts R and A such that
Refining contexts may support the usage of sophisticated feedback loops between
the refined system and the refining context. This way a dependency between
the representation of the input history and the output history can be achieved.
QbHe
Figure
8: Graphical representation of the master/slave system
A very general form of a refining context is obtained by a special operator
for forming networks called master/slave systems. For notational convenience
we introduce a special notation for master/slave systems. A graphical representation
of master/slave systems is given in Figure 8. A master/slave system is
denoted by QbHe. It consists of two components Q and H called the master
k+n and the slave H 2 SPEC n
m . Then QbHe 2 SPEC i
. All the
input of the slave comes via the master and all the output of the slave goes
to the master. The master/slave system is defined as follows:
or in a more readable notation:
where 8x;
We can define a refining context and an abstracting context based on the mas-
ter/slave system concept: we look for predicate transformers
with abstracting context
and for specifications V 2 SPEC i+m
k+n and W 2 SPEC n+k
m+i where the refining
context and the abstracting context are specified as follows:
y
z
Figure
9: Graphical representation of the cooperator
and the following requirement is fulfilled:
We give an analysis of this requirement based on a further form of composition
called a cooperator. The cooperator is denoted by
. For
specifications
m+k the cooperator is defined as
follows:
m+m
where
A graphical presentation of the cooperator is given in Figure 9.
A straightforward rewriting shows that the cooperator is indeed a generalization
of the master/slave. For H 2 SPEC k
In particular we obtain:
and therefore the condition:
reads as follows:
The following theorem gives an analysis for the component W
Theorem 10 The implication
implies
Recall,
just swaps its input streams.
Proof: By the definition of cooperation we may conclude that for every function
i and every function - such that W:i and V:- and for every f where X:f there
exists a function e
f where X: e
f such that:
f:x
Since this formula is true for all specifications X and therefore also for definite
specifications, the formula holds for all functions f where in addition
f .
We obtain for the constant function f with z = f:x for all x and for all z:
The equation above therefore simplifies to
Now we prove that from this formula we can conclude:
We do the proof by contradiction. Assume there exists x such
and x 6= z. Then we can choose a function f such that f:x 6= f:z. This concludes
the proof of the theorem. 2
By the concept of refining contexts we then may consider the refined system
QbW bV bHeee
The refinement of this refined network can then be continued by refining V bHe
and leaving its environment QbW b:::ee as it is.
There is a remarkable relationship between master/slave systems and the
system structures studied in rely/guarantee specification techniques as advocated
among others in [Abadi, Lamport 90]. The master can be seen as the
environment and the slave as the system. This indicates that the master/slave
situation models a very general form of composition. Every net with a subnet
H can be understood as a master/slave system QbHe where Q denotes the surrounding
net, the environment, of H. This form of networks is generalized by
the cooperator as a composing form, where in contrast to master/slave systems
the situation is fully symmetric.
The cooperating components Q and Q in Q
can be seen as their
mutual environments. The concept of cooperation is the most general notion of
a composing form for components. All composing forms considered so far are
just special cases of cooperation; for
k we obtain:
Let a net N be given with the set \Gamma of components. Every partition of \Gamma into
two disjoint sets of components leads to a partition of the net into two disjoint
subnets say Q and Q such that the net is equal to Q
the number of channels in N leading from Q to Q and k denotes the number
of channels leading from Q to Q. Then both subnets can be further refined
independently.
8 Conclusion
The notion of compositional refinement depends on the operators, the composing
forms, considered for composing a system. Compositionality is not a goal per
se. It is helpful for performing global refinements by local refinements. Refining
contexts, master slave systems and the cooperator are of additional help for
structuring and restructuring a system for allowing local refinements.
The previous sections have demonstrated that using functional techniques a
compositional notion of interaction refinement is achieved. The refinement of
the components of a large net can be mechanically transformed into a refinement
of the entire net.
Throughout this paper only notions of refinement have been treated that can
be expressed by continuous representation and abstraction functions. This is
very much along the lines of [CIP 84] and [Broy et al. 86] where it is considered
as an important methodological simplification, if the abstraction and representation
functions can be used at the level of specified functions. There are interesting
examples of refinement, however, where the representation functions are
not monotonic (see the representation functions obtained by the introduction
of time in [Broy 90]). A compositional treatment of the refinement of feedback
loops in these cases remains as an open problem.
Acknowledgement
This work has been carried out during my stay at DIGITAL
Equipment Corporation Systems Research Center. The excellent working
environment and stimulating discussions with the colleagues at SRC, in particular
Jim Horning, Leslie Lamport, and Martín Abadi are gratefully acknowledged.
I thank Claus Dendorfer, Leslie Lamport, and Cynthia Hibbard for their careful
reading of a version of the manuscript and their most useful comments.
Appendix A: Full Abstraction
Looking at functional specifications one may realize that sometimes they specify
more properties than one might be interested in and that one may observe
under the considered compositional forms. Basically we are interested in two
observations for a given specification Q for a function f with Q:f and input
streams x. The first one is straightforward: we are interested in the output
streams y where
But, in addition, for controlling the behavior of components especially within
feedback loops we are interested in causality. Given just a finite prefix x̃ of the
considered input streams x, causality of input with respect to output determines
how much output (which by monotonicity of f is a prefix of y) is guaranteed by
f .
More technically, we may represent the behavior of a system component by
all observations about the system represented by pairs of chains of input and
corresponding output streams.
A set fx is called a chain, if for all i 2 IN we have
Given a specification Q 2 SPEC n
m , a pair of chains
is called an observation about Q, if there exists a function f with Q:f such that
for all
and
The behavior of a system component specified by Q then can be represented
by all observations about Q. Unfortunately, there exist functional specifications
which show the same set of observations, but, nevertheless, characterize different
sets of functions. For an example we refer to [Broy 90].
Fortunately such functional specifications can be mapped easily onto functional
specifications where the set of specified functions is exactly the one characterized
by its set of observations. For this reason we introduce a predicate
transformer
that maps a specification on its abstract counterpart. This predicate transformer
basically constructs for a given predicate Q a predicate \Delta:Q that is fulfilled
exactly for those continuous functions that can be obtained by a combination
of the graphs of functions from the set of functions specified by Q. We define
where
By this definition we obtain immediately the monotonicity and the closure property
of the predicate transformer \Delta.
Theorem 11 (Closure property of the predicate transformer \Delta)
Proof: Straightforward, since Q:f occurs positively in the definition of \Delta:Q. 2
A specification Q is called fully abstract, if
We may redefine our compositional forms such that the operators deliver always
fully abstract specifications:
All the results obtained so far carry over to the abstract view by the monotonicity
of \Delta, and by the fact that we have
Furthermore, given an upward closed predicate transformer - we have: if Q is
the least solution of
then \Delta:Q is the least solution of
The proof is straightforward. Note, by this concept of abstraction we may obtain
I
in cases where I ) A; R does not hold. This allows additional simplifications
of network refinements.
Note, full abstraction is a relative notion. It is determined by the basic
concept of observability and the composing forms. In the presence of refinement
it is unclear whether full abstraction as defined above is appropriate. We have:
However, if a component Q is used twice in a network - [Q], then we do not have,
in general, that for (determined) refinements e
Q of \Delta:Q there exist (determined)
refinements b
Q of Q such that:
Therefore, when using more sophisticated forms of refinement the introduced
notion of full abstraction might not always be adequate.
References
Adding Action Refinement to a Finite Process Algebra.
Composing Specifications.
Refinement Calculus
Refinement Calculus
Stepwise Refinement of Distributed Systems.
Scenarios: A Model of Nondeterminate Computation.
Algebraic implementations preserve program correctness.
Functional Specification of Time Sensitive Communicating Systems.
Algebraic methods for program construc- tion: The project CIP
Parallel Program Design: A Foundation.
Assertional Data Reification Proofs: Survey and Perspective.
Action Systems and Action Refinement in the Development of Parallel Systems - An Algebraic Approach
Specifying concurrent program modules.
Proofs of Correctness of Data Repre- sentations
Systematic Program Development Using VDM.
A Survey of Formal Software Development Methods.
Bisimulation and Action Refinement.
Keywords: specification; interactive systems; refinement
Making Graphs Reducible with Controlled Node Splitting

Abstract: Several compiler optimizations, such as data flow analysis, the exploitation of instruction-level parallelism (ILP), loop transformations, and memory disambiguation, require programs with reducible control flow graphs. However, not all programs satisfy this property. A new method for transforming irreducible control flow graphs to reducible control flow graphs, called Controlled Node Splitting (CNS), is presented. CNS duplicates nodes of the control flow graph to obtain reducible control flow graphs. CNS results in a minimum number of splits and a minimum number of duplicates. Since the computation time to find the optimal split sequence is large, a heuristic has been developed. The results of this heuristic are close to the optimum. Straightforward application of node splitting resulted in an average code size increase of 235% per procedure of our benchmark programs. CNS with the heuristic limits this increase to only 3%. The impact on the total code size of the complete programs is 13.6% for a straightforward application of node splitting. However, when CNS is used with the heuristic, the average growth in code size of a complete program dramatically reduces to 0.2%.

1 Introduction
In current computer architectures, improvements can be obtained by the exploitation of instruction-level parallelism
(ILP). ILP is made possible by higher transistor densities, which allow the duplication of function units
and data paths. Exploitation of ILP consists of mapping the ILP of the application onto the ILP of the target architecture
as efficiently as possible. This mapping is used for Very Long Instruction Word (VLIW) and superscalar
architectures; the latter are used in most workstations. These architectures may execute multiple operations per
cycle. Efficient usage requires that the compiler fill the instructions with operations as efficiently as possible. This
process is called scheduling.
Problem statement: In order to find sufficient ILP to justify the cost of multiple function units and data paths, a
scheduler should have a larger scope than a single basic block at a time. A basic block is a sequence of consecutive
statements in which the flow of control enters at the beginning and always leaves at the end. Several scheduling
scopes can be found which go beyond the basic block level [1]. The most general scope currently used is called a
region [2]. This is a set of basic blocks that corresponds to the body of a natural loop. Since loops can be nested,
regions can also be nested in each other. Like natural loops, regions have a single entry point (the loop header)
and may have multiple exits [2]. In [1] a speedup of over 40% is reported when extending the scheduling scope to a
region. The problem with region scheduling is that it requires loops in the control flow graph with a single entry point.
These flow graphs are called reducible flow graphs. Fortunately, most control flow graphs are reducible; nevertheless,
the problem of irreducible flow graphs cannot be ignored. To exploit the benefits of region scheduling,
irreducible control flow graphs should be converted to reducible control flow graphs.
Exploiting ILP also requires efficient memory disambiguation. To accomplish this the nesting of loops must be
determined. Since in an irreducible flow graph the nesting of loops is not clear, memory disambiguation techniques
cannot directly be applied to these loops. To exploit the benefits of memory disambiguation, irreducible
control flow graphs should be converted to reducible control flow graphs as well. Another pleasant property of
reducible control flow graphs is the fact that data flow analysis, which is an essential part of any compiler, can be
done more efficiently [3].
Related work: The problem of converting irreducible flow graphs to reducible flow graphs can be tackled at
the front-end or at the back-end of the compiler. In [4] and [5] methods for normalizing the control flow graph
of a program at the front-end are given. These methods rewrite an intermediate program in a normalized form.
During normalization irreducible flow graphs are converted to reducible ones. To make a graph reducible, code
has to be duplicated, which results in a larger code size. Since the front-end is unaware of the precise number of
machine instructions needed to translate a piece of code, it is difficult to minimize the growth of the code size.
Another approach is to convert irreducible flow graphs at the back-end. The advantage is that when selecting what
(machine) code to duplicate, one can take the resulting code size into account. Solutions for solving the problem
at the back-end are given in [6, 7, 8, 9]. The solution given by Cocke and Miller [6, 9] is very time-complex,
and it does not try to minimize the resulting code size. The method described by Hecht et al. [7, 8] is even more
inefficient in the sense of minimizing the code size, but it requires less analysis. In this paper a new method for
converting irreducible flow graphs at the back-end is given which is very efficient in terms of the resulting code
size.
Paper overview: In section 2 reducible and irreducible flow graphs are defined and a method for the detection of
irreducible flow graphs is discussed. The principle of node splitting and the conversion method described by Hecht
et al., which is a straightforward application of node splitting, are given in section 3. Our approach, Controlled
Node Splitting (CNS), is described in section 4. All known conversion methods convert irreducible flow graphs
without minimizing the number of copies. With controlled node splitting it is possible to minimize the number of
copies. Unfortunately this method requires much CPU time; therefore we developed a heuristic that reduces the
CPU time but still performs close to the optimum. This heuristic and the algorithms for controlled node splitting
are presented. The results of applying CNS to several benchmarks are given in section 5. Finally the conclusions
are given in section 6.
2 Irreducible Flow Graphs
The control flow of a program can be described with a control flow graph. A control flow graph consists of nodes
and edges. The nodes represent a sequence of operations or a basic block, and the edges represent the flow of
control.
Definition 2.1 The control flow graph of a program is a triple G = (N; E; s), where (N; E) is a finite directed
graph, with N the collection of nodes and E the collection of edges. From the initial node s 2 N there is a path
to every node of the graph.
Figure 1 shows an example of a control flow graph with initial node s.
Figure 1: a) A reducible control flow graph. b) The graph (N; FE; s).
As stated in the introduction, finding sufficient ILP requires as input a reducible flow graph. Many definitions for
reducible flow graphs are proposed. The one we adopt is given in [8] and is based on the partitioning of the edges
of a control flow graph G into two disjoint sets:
1. The set of back edges BE consists of all edges whose heads dominate their tails.
2. The set of forward edges FE consists of all edges which are not back edges; thus FE = E - BE.
A node u of a flow graph dominates node v, if every path from the initial node s of the flow graph to v goes through
u. The dominance relations of figure 1 are: node s dominates all nodes, node a dominates all nodes except node
s, node c dominates nodes c, d, e, f and node d dominates nodes d, e, f . From these relations the set of back
edges BE of figure 1 follows. The definition of a reducible flow graph is:
Definition 2.2 A flow graph G = (N; E; s) is reducible if and only if the flow graph (N; FE; s) is acyclic and every
node n 2 N can be reached from the initial node s.
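Definition 2.2 can be checked mechanically. The sketch below is not part of the paper: it computes dominators with the usual iterative data-flow scheme, partitions E into BE and FE, and tests whether (N; FE; s) is acyclic. The irreducible example uses the graph of figure 2 (read here as edges s→a, s→b, a→b, b→a); the reducible example is a hypothetical natural loop, not figure 1.

```python
def dominators(nodes, edges, s):
    """Iterative dominator computation: dom[n] is the set of nodes that
    lie on every path from s to n (so u dominates v iff u in dom[v])."""
    preds = {n: [u for (u, v) in edges if v == n] for n in nodes}
    dom = {n: set(nodes) for n in nodes}
    dom[s] = {s}
    changed = True
    while changed:
        changed = False
        for n in nodes:
            if n == s or not preds[n]:
                continue
            new = {n} | set.intersection(*(dom[p] for p in preds[n]))
            if new != dom[n]:
                dom[n], changed = new, True
    return dom

def is_reducible(nodes, edges, s):
    """Definition 2.2: reducible iff (N, FE, s) is acyclic; reachability
    of every node from s is assumed here."""
    dom = dominators(nodes, edges, s)
    be = {(u, v) for (u, v) in edges if v in dom[u]}  # head v dominates tail u
    fe = set(edges) - be
    succ = {n: [v for (u, v) in fe if u == n] for n in nodes}
    state = dict.fromkeys(nodes, 0)  # 0 = unvisited, 1 = on stack, 2 = done
    def acyclic_from(n):
        state[n] = 1
        for m in succ[n]:
            if state[m] == 1 or (state[m] == 0 and not acyclic_from(m)):
                return False
        state[n] = 2
        return True
    return all(acyclic_from(n) for n in nodes if state[n] == 0)

# Figure 2 (s->a, s->b, a->b, b->a): BE is empty, so FE keeps the cycle a<->b.
fig2 = is_reducible({'s', 'a', 'b'}, {('s', 'a'), ('s', 'b'), ('a', 'b'), ('b', 'a')}, 's')
# A hypothetical natural loop s->a->b->a: (b, a) is a back edge, FE is acyclic.
loop = is_reducible({'s', 'a', 'b'}, {('s', 'a'), ('a', 'b'), ('b', 'a')}, 's')
print(fig2, loop)  # False True
```

The acyclicity test is a plain depth-first search that reports a cycle when it revisits a node still on the recursion stack.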
The control flow graph of figure 1 is reducible since (N; FE; s) is acyclic. The flow graph of figure 2,
however, is irreducible. The set of back edges is empty, because neither node a nor node b dominates the other.
FE is equal to f(s; a); (s; b); (a; b); (b; a)g, so (N; FE; s) is not acyclic.
From definition 2.2 we can derive that if a control flow graph G is irreducible, then the graph (N; FE; s)
contains at least one loop. These loops are called irreducible loops. To remove irreducible loops, they must be
Figure 2: The basic irreducible control flow graph.
detected first. There are several methods for doing this. One of them is to use interval analysis [10, 11]. The
method used here is the Hecht-Ullman T1-T2 analysis [12, 3]. This method is based on two transformations,
T1 and T2. These transformations are illustrated in figure 3 and are defined as:
Definition 2.3 Let G = (N; E; s) be a control flow graph and let u 2 N . The T1 transformation removes
the edge (u; u) 2 E, which is a self-loop, if this edge exists. The derived graph becomes G′ = (N; E - f(u; u)g; s).
In short G →T1(u) G′.
Figure 3: The T1 and T2 transformations.
Definition 2.4 Let G = (N; E; s) be a control flow graph and let node v 6= s have a single predecessor u. The T2
transformation is the consumption of node v by node u. The successor edges of node v become successor edges
of node u. The original successor edges of node u are preserved except for the edge to node v. If I is the set of
successor nodes of v, then the derived graph becomes G′ = (N - fvg; E′; s) with
E′ = (E - (f(u; v)g [ f(v; w) j w 2 Ig)) [ f(u; w) j w 2 Ig. In short G →T2(v) G′.
Definition 2.5 The graph that results when repeatedly applying the transformations T1 and T2 in any possible
order to a flow graph, until a flow graph results for which no application of T1 or T2 is possible, is called the limit
flow graph. This transformation is denoted as G →* G′.
In [7] it is proven that the limit flow graph is unique and independent of the order in which the transformations
are applied.
Theorem 2.6 A flow graph is reducible if and only if, after repeatedly applying the transformations T1 and T2 in any particular order, the flow graph can be reduced into a single node.
The proof of this theorem can be found in [12]. An example of the application of the T1 and T2 transformations is given in figure 4. The flow graph from figure 1 is reduced to a single node, so we can conclude that this flow graph is reducible.
Figure 4: An example of application of the T1 and T2 transformations.
If after applying the transformations the resulting flow graph consists of multiple nodes, the graph is irreducible. The transformations T1 and T2 not only detect irreducibility, but they also detect the nodes that cause the irreducibility. Examples of irreducible graphs are given in figure 5. From theorem 2.6 it follows that we can
alternatively define irreducibility by:
Corollary 2.7 A flow graph is irreducible if and only if the limit flow graph is not a single node 1 .
Another definition, which is more intuitive, is that a flow graph is irreducible if it has at least one loop with multiple loop entries [12].
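As a concrete illustration of definitions 2.3-2.5 and theorem 2.6, the T1-T2 reduction can be sketched in a few lines. This is an illustrative sketch, not the paper's implementation; the encoding (a dict mapping each node to its set of successors, with initial node 's') is an assumption.

```python
def limit_flow_graph(succ, start='s'):
    """Apply T1 (remove self-loops) and T2 (let a single-predecessor node be
    consumed by its predecessor) until neither applies; return the limit graph."""
    succ = {n: set(ss) for n, ss in succ.items()}   # work on a copy
    changed = True
    while changed:
        changed = False
        for u in succ:                              # T1: drop a self-loop (u, u)
            if u in succ[u]:
                succ[u].discard(u)
                changed = True
        preds = {n: {m for m in succ if n in succ[m]} for n in succ}
        for v in list(succ):                        # T2: consume v into its only predecessor
            if v != start and len(preds[v]) == 1:
                (u,) = preds[v]
                succ[u].discard(v)
                succ[u] |= succ[v]                  # v's successors become u's
                del succ[v]
                changed = True
                break                               # predecessor sets are now stale
    return succ

def is_reducible(succ, start='s'):
    # Theorem 2.6: reducible iff the limit flow graph is a single node.
    return len(limit_flow_graph(succ, start)) == 1

# Figure 2's graph: s -> a, s -> b, a <-> b.
g2 = {'s': {'a', 'b'}, 'a': {'b'}, 'b': {'a'}}
```

Here `is_reducible(g2)` yields False, matching the discussion above, while a single-entry loop such as `{'s': {'a'}, 'a': {'b'}, 'b': {'a'}}` reduces to one node.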
Figure 5: Examples of extensions of the basic irreducible control flow graph of figure 2.
3 Flow Graph Transformation
If a control flow graph turns out to be irreducible, a graph transformation technique can be used to obtain a reducible control flow graph. In the past, several methods have been proposed to solve this problem [6, 7, 8]. Most methods for converting an irreducible control flow graph are based on a technique called node splitting. In section 3.1 this technique to reduce an irreducible flow graph is described. Section 3.2 shows how node splitting can be applied straightforwardly to reduce an irreducible graph.
3.1 Node Splitting
Node splitting is a technique that converts a graph G1 into an equivalent graph G2. We assign a label to each node of a graph; the label of node x_i is denoted label(x_i). Duplication of a node creates a new node with the same label. An equivalence relation between two flow graphs is derived from Hecht [7] and given below.
Definition 3.1 If P = (x_1, x_2, ..., x_n) is a path in a flow graph, then define Labels(P) to be the sequence of labels corresponding to this path; that is, Labels(P) = (label(x_1), label(x_2), ..., label(x_n)). Two flow graphs G1 and G2 are equivalent if and only if, for each path P in G1, there is a path Q in G2 such that Labels(P) = Labels(Q), and conversely.
According to this definition, the two flow graphs of figure 6 are equivalent. Note that all nodes a have the same label, label(a). Node splitting is defined as:
Definition 3.2 Node splitting is a transformation of a graph G1 = (N, E, s) into a graph G2 = (N', E', s) such that a node n ∈ N having multiple predecessors p_i is split; for every incoming edge (p_i, n) a duplicate n_i of n is made, having one incoming edge (p_i, n_i) and the same outgoing edges as n. N' is defined as N' = (N − {n}) ∪ {n_i}, and E' = (E − ({(p_i, n)} ∪ {(n, w)})) ∪ {(p_i, n_i)} ∪ {(n_i, w)}, where w is a successor node of n. This transformation is denoted as G1 S(n)→ G2, where S(n) is the splitting of node n ∈ N.
The principle of node splitting is illustrated in figure 6, where node a of graph G1 is split. Note that if a node n is split in the limit graph, then it is the corresponding node n in the original graph that must be split to remove irreducibility.
Theorem 3.3 The equivalence relation between two graphs is preserved under the transformation G1 S(n)→ G2.
Figure 6: A simple example of applying node splitting to node a.
Proof: We show that node splitting transforms any graph G1 into an equivalent split graph G2. Assume graph G1 has a node v with n > 1 predecessors u_i and with m ≥ 0 successors w_k, as shown in figure 7a. The set of Labels(P) for all paths P of a graph G is denoted with LABELS(G). With the label notation, all paths of graph G1 of figure 7a are described by:
LABELS(G1) = ∪_i ∪_k {(label(u_i), label(v), label(w_k))}
Figure 7: Two equivalent graphs, (a) before node splitting, (b) after node splitting.
If node v is split into n copies named v_i, the split graph G2 results. The set of all paths of graph G2 is:
LABELS(G2) = ∪_i ∪_k {(label(u_i), label(v_i), label(w_k))}
This graph is given in figure 7b. Since label(v_i) = label(v), every path in G2 also exists in G1 and conversely.
This leads to the conclusion that the graphs G1 and G2 are equivalent. Since in figure 7 we split a node with an arbitrary number of incoming and outgoing edges, we may in general conclude that splitting a node of any graph results in an equivalent graph. Using the same reasoning it will be clear that the equivalence relation is transitive: splitting a finite number of nodes in either the original graph or any of its equivalent graphs results in a graph which is equivalent to the original graph. 2
The name node splitting is deceptive, because it suggests that the node is split into different parts; in fact the node is duplicated.
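The splitting transformation of definition 3.2 can be sketched directly on a successor map. This is an illustrative sketch (the encoding and the generated duplicate names are assumptions, not the paper's implementation); it presumes the split node has no self-loop, i.e. that T1 has already been applied.

```python
def split_node(succ, label, n):
    """S(n): replace n by one duplicate per incoming edge; each duplicate keeps
    n's label (so equivalence in the sense of definition 3.1 is preserved)
    and all of n's outgoing edges. Assumes n has no self-loop."""
    preds = [p for p in succ if n in succ[p]]
    out = succ.pop(n)                     # n's successor set; n itself is removed
    for i, p in enumerate(preds):
        dup = f"{n}#{i}"                  # fresh node name, same label as n
        succ[p].discard(n)
        succ[p].add(dup)
        succ[dup] = set(out)
        label[dup] = label[n]
    return succ

# Splitting node b of the basic irreducible graph (figure 2): each of its two
# predecessors gets a private duplicate of b.
succ = {'s': {'a', 'b'}, 'a': {'b'}, 'b': {'a'}}
label = {'s': 's', 'a': 'a', 'b': 'b'}
split_node(succ, label, 'b')
```

After the split, each duplicate has exactly one predecessor, which is what makes the subsequent T2 reductions possible.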
3.2 Uncontrolled Node Splitting
The node splitting transformation technique can be used to convert an irreducible control flow graph into a reducible
control flow graph. From Hecht [7] we adopt theorem 3.4.
Theorem 3.4 Let S denote the splitting of a node, and let T denote some graph reduction transformation (e.g. the repeated application of T1 and T2); any control flow graph can be transformed into a single node by the transformation represented by the regular expression T(ST)*.
The proof of the theorem is given in [7].
Hecht et al. describe a straightforward application of node splitting to reduce irreducible control flow graphs. This method selects a node for splitting from the limit graph if the node has multiple predecessors. The selected node is split into several identical copies, one for each entering edge. This approach has the advantage that it is rather simple, but it has the disadvantage that it can select nodes that did not have to be split to make the graph reducible. In figure 8a we see that the nodes a, b, c and d are candidate nodes for splitting. In figure 8b node d is split; the number of nodes reduces after the application of two T2 transformations, but the graph is still irreducible. Splitting node a does not make the graph reducible either, see figure 8c. Only splitting node b or c converts the graph into a reducible control flow graph, see figure 8d.
Although this method does inefficient node splitting, it does eventually transform an irreducible control flow graph into a reducible one. The consequence of this inefficient node splitting is that the number of duplications becomes unnecessarily large.
4 Presentation of Controlled Node Splitting
The problem of existing methods is that the resulting code size after converting an irreducible graph can grow uncontrolled. Controlled Node Splitting (CNS) controls the number of copies, which results in a smaller growth of the code size. CNS restricts the set of candidate nodes for splitting. First we introduce the necessary terminology:
Definition 4.1 A loop in a flow graph is a path (n_1, n_2, ..., n_k) where n_1 is an immediate successor of n_k. The set of nodes contained in the loop is called a loop-set.
In figure 8a, {a, b}, {b, c} and {a, b, c} are loop-sets.
Definition 4.2 An immediate dominator of a node u, ID(u), is the last dominator on any path from the initial node
s of a graph to u, excluding node u itself.
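The dominator relation used throughout, and the immediate dominator of definition 4.2, can be computed with the classic iterative fixed-point sketch below. This is illustrative only (not the paper's algorithm, and not the efficient Lengauer-Tarjan method); the successor-map encoding with initial node 's' is an assumption.

```python
def dominators(succ, start='s'):
    """Fixed point of dom(n) = {n} ∪ (intersection of dom(p) over predecessors p)."""
    nodes = set(succ)
    preds = {n: {m for m in succ if n in succ[m]} for n in nodes}
    dom = {n: set(nodes) for n in nodes}
    dom[start] = {start}
    changed = True
    while changed:
        changed = False
        for n in nodes - {start}:
            new = ({n} | set.intersection(*(dom[p] for p in preds[n]))) if preds[n] else {n}
            if new != dom[n]:
                dom[n] = new
                changed = True
    return dom

def immediate_dominator(dom, n):
    """Definition 4.2: the strict dominator of n that every other strict
    dominator of n dominates (i.e. the last dominator on any path to n)."""
    strict = dom[n] - {n}
    for d in strict:
        if all(e in dom[d] for e in strict):
            return d
    return None

# Diamond graph: s -> a, s -> b, a -> c, b -> c; c is immediately dominated by s.
g = {'s': {'a', 'b'}, 'a': {'c'}, 'b': {'c'}, 'c': set()}
```

For this diamond, `dominators(g)['c']` is `{'s', 'c'}`, so `immediate_dominator(dominators(g), 'c')` is `'s'`.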
Figure 8: Examples of node splitting: (a) the original irreducible graph, (b) splitting node d, (c) splitting node a, (d) splitting node b.
In figure 1 node a dominates the nodes a, b, c, d, e and f, but it immediately dominates only the nodes b and c.
Definition 4.3 A Shared External Dominator set (SED-set) is a subset of a loop-set L with the properties that it has only elements that share the same immediate dominator and that this immediate dominator is not part of the loop-set L. The SED-set of a loop-set L is defined as: SED-set(L) = {n ∈ L | ID(n) = d, d ∉ L}.
Definition 4.4 A Maximal Shared External Dominator set (MSED-set) K is defined as:
SED-set K is maximal ⇔ ∄ SED-set M such that K ⊂ M.
The definition says that an MSED-set cannot be a proper subset of another SED-set. In figure 5a multiple SED-sets can be identified, like {a, b}, {b, c} and {a, b, c}. But there is only one MSED-set: {a, b, c}.
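Definitions 4.3 and 4.4 translate almost literally into code. The sketch below works under assumed encodings (loop-sets as Python sets, `idom` as a precomputed node → immediate-dominator map) and is illustrative rather than the paper's implementation:

```python
def sed_set(loop_set, idom):
    """SED-set(L): the nodes of L whose shared immediate dominator lies outside L."""
    external = {idom[n] for n in loop_set} - loop_set
    if len(external) != 1:
        return set()          # no single shared external dominator
    (d,) = external
    return {n for n in loop_set if idom[n] == d}

def msed_sets(loop_sets, idom):
    """Definition 4.4: keep the SED-sets that are not proper subsets of another SED-set."""
    seds = [S for S in (sed_set(L, idom) for L in loop_sets) if S]
    return [S for S in seds if not any(S < T for T in seds)]

# The situation of figure 5a: loop-sets {a,b}, {b,c} and {a,b,c}, all of whose
# members are immediately dominated by s; the only MSED-set is {a,b,c}.
idom = {'a': 's', 'b': 's', 'c': 's'}
loops = [{'a', 'b'}, {'b', 'c'}, {'a', 'b', 'c'}]
```

With these inputs, `msed_sets(loops, idom)` keeps only `{'a', 'b', 'c'}`, as stated above.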
Definition 4.5 Nodes in an SED-set of a flow graph can be classified into three sets:
• Common Nodes (CN): Nodes that dominate other SED-set(s) and are not reachable from the SED-set(s) they dominate.
• Reachable Common nodes (RC): Nodes that dominate other SED-set(s) and are reachable from the SED-set(s) they dominate.
• Normal Nodes (NN): Nodes of an SED-set that are not classified in one of the above classes. These nodes dominate no other SED-sets.
In the initial graph of figure 9a we can identify the MSED-sets {a, b} and {c, d}. The nodes a, c and d are elements of the set NN and node b is an element of the set RC. If the edge (c, b) were not present, then node b would be an element of the set CN. Note that loop (b, c) is not an SED-set.
Theorem 4.6 An SED-set(L) has one node ⇔ the corresponding loop L has a single header and is reducible.
The proof of this theorem can be derived from [7]. An example of an SED-set which has one node is the graph in figure 4 just before the last transformation.
In section 4.1 a description of CNS is given. It treats a method for minimizing the number of nodes to split. Section 4.2 gives a method for minimizing the number of copies. The number of copies is not equal to the number of splits, because a split creates a copy for every entering edge: if a node has n entering edges, then one split creates n duplicates and thus n − 1 additional copies. To speed up the process of minimizing the number of copies, a heuristic is given. The algorithms implementing this heuristic are given in section 4.3.
4.1 Controlled Node Splitting
All nodes of an irreducible limit graph, except the initial node s of the graph, are possible candidates for node splitting, since they have at least two predecessors. However, splitting some nodes is not efficient; see section 3.2. CNS minimizes the number of splits. To accomplish this, two restrictions are made on the set of candidate nodes. These restrictions are:
1. Only nodes that are elements of an SED-set are candidates for splitting.
2. Nodes that are elements of RC are not candidates for splitting.
The first restriction prevents the splitting of nodes that are not in an SED-set. Splitting such a node is inefficient and unnecessary. An example of such a split was shown in figure 8b (the only SED-set in figure 8b is {b, c}).
The second restriction is more complicated. The impact of this restriction is illustrated in figure 9. This figure shows two different sequences of node splitting. The initial graph of the figure is a graph on which T has been applied. In figure 9a three splits are needed and in figure 9b only two. In figure 9a node b is split; this node however is an element of the set RC. The second restriction prevents a splitting sequence such as the one in figure 9a.
Node splitting with the above restrictions, alternated with the T1 and T2 transformations, will eventually result in a single node. This can be seen easily. Every time a node that is an element of an SED-set is split, it is reduced by the T2 transformation and the number of nodes involved in SED-sets decreases by one. Since we are considering flow graphs with a finite number of nodes, a single node eventually remains.
Figure 9: Graph with two different split graphs: (a) a node splitting sequence of three nodes, (b) a node splitting sequence of two nodes.
Theorem 4.7 The minimum number of splits needed to reduce an MSED-set with k nodes is given by: Splits = k − 1.
Proof: Every time a node is split and T is applied, the number of nodes in the MSED-set decreases by one. For every predecessor of the node to split a duplicate is made; this means that every duplicate has only one predecessor, and all the duplicates can be reduced by the T2 transformation. This results in an MSED-set with one node less than the original MSED-set. To reduce the complete MSED-set, all nodes but one of the MSED-set must be split, until there is only one node left. This results in k − 1 splits. 2
Theorem 4.8 The minimum number of splits needed to convert an irreducible graph, with n MSED-sets, into a reducible graph is given by:
T_splits = Σ_{i=1}^{n} (k_i − 1)     (1)
where T_splits is the total number of splits, and k_i is the number of nodes of MSED-set i.
Proof: The proof consists of multiple parts; first some related lemmas are proven.
Lemma 4.9 All MSED-sets are disjoint; that is, there are no two MSED-sets that share a node.
Proof: If a node is shared by two MSED-sets, then this node must have two different immediate dominators. This conflicts however with the definition of an immediate dominator as given in 4.2. 2
Since the MSED-sets are disjoint, the number of splits of the individual MSED-sets can be added. If, however, splitting nodes results in merging MSED-sets, this result does not hold anymore. Therefore we have to prove that CNS does not merge MSED-sets, and that merging MSED-sets does not lead to fewer splits.
Lemma 4.10 Splitting a node that is part of an MSED-set and is not in RC does not result in merging
MSED-sets.
Proof: First we shall prove that splitting a node that is an element of RC merges MSED-sets. Afterwards we prove that splitting nodes that are elements of CN or NN does not merge MSED-sets.
- Splitting an RC node merges two MSED-sets. Consider the graph of figure 10. Suppose that subgraphs G1 and G2 are both MSED-sets. The nodes of both subgraphs form a joined loop because it is possible
Figure 10: Merging of two MSED-sets.
to go from G1 to G2 and vice versa. The reason that both subgraphs do not form a single MSED-set is the fact that they have different immediate dominators. By splitting a node that is in RC, in this case node x, and applying T to the complete graph, the immediate dominator of subgraph G1 also becomes the immediate dominator of subgraph G2. Since the subgraphs add up to a single loop and share the same immediate dominator, the MSED-sets are merged. This also holds in the general case where x dominates and is reachable by n MSED-sets.
- Splitting nodes that are not in RC does not merge MSED-sets. There are now two types of nodes left that are candidates for splitting: the nodes of the sets NN and CN.
Splitting nodes that are elements of the set NN does not merge MSED-sets. These nodes do not have edges that go to other MSED-sets; therefore splitting these nodes does not affect the edges from one MSED-set to another, and the splitting will never result in merging MSED-sets.
Splitting nodes that are elements of the set CN does not merge MSED-sets. These nodes do not form a loop with the MSED-set they dominate. By splitting such a node, the nodes of both MSED-sets get the same immediate dominator, but there is no loop between the MSED-sets and they are therefore not merged.
Lemma 4.11 Reducing two merged MSED-sets results in more splits to reduce a graph than reducing the
MSED-sets separately.
Proof: Suppose SED-set 1 consists of x nodes and SED-set 2 has y nodes. Merging them costs one split, since the RC node must be split. Reducing the resulting SED-set, which now has x + y − 1 nodes, costs x + y − 2 splits. The total number of splits is thus x + y − 1. Reducing the two SED-sets separately results in (x − 1) + (y − 1) = x + y − 2 splits. This is one split less than the number of splits needed when merging the SED-sets. 2
The combination of lemmas 4.10 and 4.11 justifies the restriction to prevent the splitting of nodes that are elements
of RC.
Lemma 4.12 There always exists a node in an irreducible graph that is part of an MSED-set but that is not an element of RC.
Proof: If all nodes of all MSED-sets are elements of RC, then these nodes must dominate at least two other nodes, because a node cannot dominate its own dominators. These nodes are also elements of an MSED-set and of RC. The graph therefore must have an infinite number of nodes. Since we are considering graphs with a finite number of nodes, there must be a node that is part of an MSED-set but that is not an element of RC. 2
Since MSED-sets are disjoint and our algorithm can always find a node that can be split without merging MSED-sets, the result of equation 1 holds. 2
Example 4.13 In figure 9 the MSED-sets {a, b} and {c, d} can be identified. They both have two nodes. This results in a minimal number of (2 − 1) + (2 − 1) = 2 splits needed to reduce the graph.
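The counts of theorems 4.7 and 4.8 are plain arithmetic; a one-line check of example 4.13 (purely illustrative, using the two size-two MSED-sets of figure 9):

```python
def min_splits(msed_sizes):
    # Theorem 4.8: the minimum number of splits is the sum of (k_i - 1)
    # over all MSED-sets, since each MSED-set of size k needs k - 1 splits.
    return sum(k - 1 for k in msed_sizes)

# Example 4.13: MSED-sets {a, b} and {c, d}, both of size two.
print(min_splits([2, 2]))  # → 2
```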
4.2 Minimizing the amount of copies
In the previous section we saw that the algorithm minimizes the number of splits, but this does not by itself result in a minimum number of copied instructions or basic blocks. In the following, the quantity to minimize is denoted with Q; Q(n) means the quantity of node n, and Q(G), the quantity of a graph G = (N, E, s), is defined as: Q(G) = Σ_{n ∈ N} Q(n).
The purpose of CNS is to minimize Q(G*), where G* is the transformation of G into a single node using some sequence of splits; more formally, to find the split sequence for which Q(G*) is minimal.
Two conditions must be satisfied to achieve this minimum:
1. The freedom of selecting nodes to split must be as large as possible. Notice that the number of splits is also minimized if we prevent the splitting of all nodes that dominate another MSED-set, that is, if we prevent the splitting of nodes that are elements of RC and CN. But this has the disadvantage that we lose some freedom in selecting nodes. This loss of freedom is illustrated in figure 11.

Figure 11: A graph that has a common node that is not in the set RC.

Suppose that the nodes contain a number of instructions and that we want to minimize the total resulting code size, which means that we would like to copy as few instructions as possible. If we prevent splitting nodes that are elements of RC and CN, the number of copied instructions is Q(a) + min(Q(c), Q(d)). If we only prevent the splitting of nodes that are elements of RC, the number of copied instructions is min(Q(a), Q(b)) + min(Q(c), Q(d)). If the number of instructions in node b is less than in node a, then the number of copied instructions is less in the latter case. Thus keeping the set of candidate nodes as large as possible pays off if one would like to minimize the number of copies.
2. The sequence of splitting nodes must be chosen optimally. There exist multiple split sequences to solve an irreducible graph. A tree can be built to discover them all. Figure 12 shows a flow graph and the tree with all possible split sequences. The nodes of this tree indicate how many copies are introduced by the split.

Figure 12: An irreducible graph with its copy tree.

The edges give the split sequence. The number of copies can be found by following a path from the root to a leaf and adding the quantities of the nodes. Suppose that the nodes contain a number of instructions and that we want to minimize the total resulting code size, which means that we would like to copy as few instructions as possible; then we can choose from 6 different split sequences with 5 different numbers of copies. The minimum number of copied instructions is the minimum of these sums over all split sequences. The problem is to pick a split sequence that minimizes the number of copied instructions.
Theorem 4.14 Minimizing the resulting Q(G*) of an irreducible graph that is converted to a reducible graph requires a minimum number of splits, where G* is a single node, that is, the totally reduced graph. In short: Q(G*) is minimal ⇒ the number of splits to produce G* is minimal.
Proof: Suppose all nodes of a limit flow graph, except the initial node s, are candidates for splitting; then nodes that are not in an MSED-set and nodes that are elements of RC are also candidates. Splitting a node of one of these categories results in a number of splits that is greater than the minimal number of splits. If we can prove that splitting these nodes always results in a Q(G*) that is greater than the one we obtain if we exclude these nodes, then we have proven that a minimum number of splits is required in order to minimize Q(G*).
• Splitting a node that is not in an MSED-set cannot result in the minimum Q(G*).
As seen in the previous section, splitting nodes that are not in an MSED-set does not make a graph more reducible, since splitting these nodes does not decrease the number of nodes in an MSED-set. This means that the MSED-set still needs the same number of splits.
• Splitting nodes that are elements of RC cannot result in the minimum Q(G*).
Figure 13: The influence on the number of copies of splitting an RC node: (a) splitting nodes that are not in the set RC, (b) splitting a node that is in the set RC.
Consider the graph of figure 13. In this figure the subgraph G has at least one MSED-set, otherwise the graph would not be irreducible. Figure 13a shows the reduction of the graph in the case that splitting of an RC node is not allowed, and in figure 13b splitting of such a node is allowed. The node g is the reduced subgraph G, and the notation sa in a node means that node s has consumed a copy of node a. The resulting quantity of the node is the sum of the quantities of nodes s and a. As we can see, the resulting total quantity of the split sequence of figure 13a is Q(s) + Q(a) + Q(g) and the resulting total quantity of the reduced graph of figure 13b is Q(s) + 2Q(a) + Q(g). Without loss of generality we can conclude that splitting a node that is in RC can never lead to the minimum total quantity. 2
As one can easily see, the more nodes there are in MSED-sets, the larger the tree and the number of possible split sequences become. It takes much computation time to compute all possibilities; therefore a heuristic is constructed which picks the node n_i to split with the smallest heuristic value H(n_i), where H weighs the quantity of a node by the number of copies its split introduces.
The results of this heuristic, compared to the best possible split sequence, are given in section 5.
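The greedy choice can be sketched as below. The exact form of H is not fully recoverable from the text; here it is assumed to be the number of copies a split of n introduces, weighted by the node's quantity, i.e. H(n) = Q(n) · (|pred(n)| − 1). Treat the function, its signature, and the data as illustrative assumptions.

```python
def select_candidate(candidates, quantity, preds):
    """Pick the candidate (already filtered to SED-set members outside RC)
    with the smallest assumed heuristic value H(n) = Q(n) * (|pred(n)| - 1),
    i.e. the node whose split would duplicate the fewest instructions."""
    return min(candidates, key=lambda n: quantity[n] * (len(preds[n]) - 1))

# Hypothetical data: node 'b' is cheaper to duplicate than node 'a'.
quantity = {'a': 10, 'b': 3}
preds = {'a': {'s', 'b'}, 'b': {'s', 'a'}}
print(select_candidate(['a', 'b'], quantity, preds))  # → b
```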
4.3 Algorithms
The method described in the previous sections detects an irreducible control flow graph and converts it to a reducible control flow graph. In this section the algorithm for this method is given. The algorithm consists of three parts: (1) the T1 and T2 transformations, (2) the selection of a candidate node, and (3) the splitting of a node.
Algorithm 4.1 Controlled Node Splitting
Input: The control flow graph of a procedure.
Output: The reducible control flow graph of the procedure.
(1) Copy the flow graph of basic blocks to a flow graph G of nodes
(2) Apply repeatedly T1-T2 transformations to G
(3) while G has more than one node do
(4)   Candidate node selection
(5)   Split candidate node
(6)   Apply repeatedly T1-T2 transformations to G
Algorithm 4.1 expects as input a control flow graph of basic blocks. The structure of this flow graph is copied to a flow graph of nodes (1). Now we have two different flow graphs: a flow graph of basic blocks and a flow graph of nodes. This means that initially every node represents a basic block. Every duplication introduced by splitting in the flow graph of nodes is also performed in the flow graph of basic blocks. After the graph is copied, the T1 and T2 transformations are applied until the graph of nodes does not change any more (2). If the graph of nodes is reduced to a single node, the graph is reducible and no splitting is needed. However, if multiple nodes remain, node splitting must be applied. First a node for splitting is selected (4). This is done with algorithm 4.2, which is discussed later. The selected node is then split (5) as defined in definition 3.2. In the graph of basic blocks, the corresponding basic blocks are copied as well. After splitting, the T1 and T2 transformations are applied again on the graph of nodes (6). When there is still more than one node left, the process starts over again. The algorithm terminates when the graph of nodes is reduced to a single node, and thus the graph of basic blocks has been converted to a reducible flow graph.
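The control skeleton of algorithm 4.1 (reduce, split, reduce) can be sketched end-to-end on the node graph. The candidate selection below is a deliberate stand-in (the non-initial node with the fewest predecessors) rather than algorithm 4.2's SED/RC filter and heuristic; the successor-map encoding, the generated names, and the assumption that every node is reachable from 's' are all illustrative.

```python
def reduce_T1_T2(succ, start='s'):
    """Apply T1 (remove self-loops) and T2 (consume single-predecessor nodes)
    until neither applies; mutates and returns the successor map."""
    changed = True
    while changed:
        changed = False
        for u in succ:                       # T1
            if u in succ[u]:
                succ[u].discard(u)
                changed = True
        preds = {n: {m for m in succ if n in succ[m]} for n in succ}
        for v in list(succ):                 # T2
            if v != start and len(preds[v]) == 1:
                (u,) = preds[v]
                succ[u].discard(v)
                succ[u] |= succ[v]
                del succ[v]
                changed = True
                break                        # predecessor sets are now stale
    return succ

def split(succ, n):
    """S(n): one duplicate of n per incoming edge (assumes no self-loop on n)."""
    preds = [p for p in succ if n in succ[p]]
    out = succ.pop(n)
    for i, p in enumerate(preds):
        dup = f"{n}#{i}"
        succ[p].discard(n)
        succ[p].add(dup)
        succ[dup] = set(out)
    return succ

def make_reducible(succ, start='s'):
    """Algorithm 4.1's driver loop: T(ST)* until a single node remains."""
    succ = {n: set(ss) for n, ss in succ.items()}
    reduce_T1_T2(succ, start)
    while len(succ) > 1:
        cand = min((n for n in succ if n != start),
                   key=lambda n: sum(n in succ[p] for p in succ))
        split(succ, cand)
        reduce_T1_T2(succ, start)
    return succ

# Figure 2's irreducible graph collapses to the single initial node after one split.
g = {'s': {'a', 'b'}, 'a': {'b'}, 'b': {'a'}}
```

Running `make_reducible(g)` leaves only `{'s': set()}`; an already reducible graph needs no split at all.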
The algorithms for the T1 and T2 transformations and for node splitting are quite straightforward and not given here. Algorithm 4.2 selects a node for splitting. Initially every node is a candidate. A node is rejected as a candidate if it does not fulfill the restrictions (3) discussed in subsection 4.1. For all nodes that fulfill these restrictions, the heuristic value H(n) is calculated (4). The node with the smallest heuristic value is selected for splitting.
Algorithm 4.2 Node selection
Input: The control flow graph of nodes.
Output: A node for splitting.
(2) for all nodes n do
(3)   if n is in an SED-set and n is not in RC then
(4)     Calculate value H(n)
(5) return the candidate node with the smallest H(n)

5 Experiments

The goal of our experiments is to measure the quality of controlled node splitting in the sense of minimizing the amount of copies. In the experiments four methods for node splitting are used:
• Optimal Node Splitting, ONS. This method computes the best possible node split sequence with respect to the quantity to minimize: the number of basic blocks or the number of instructions. This algorithm however requires a lot of computation time (up to several days on an HP735 workstation).
• Uncontrolled Node Splitting, UCNS. A straightforward application of node splitting; no restrictions are made on the set of nodes that are candidates for splitting.
• Controlled Node Splitting, CNS. Node splitting with the restrictions discussed in section 4.1.
• Controlled Node Splitting with Heuristic, CNSH. The same method as CNS, but now a heuristic is used to select a node from the set of candidate nodes.
The algorithms are applied to a selective group of benchmarks. These benchmarks are procedures with an irreducible control flow graph and are obtained from the real-world programs a68, bison, expand, gawk, gs, gzip, sed and tr. The programs are compiled with the GCC compiler, which is ported to a RISC architecture 2. The number of copies of two different quantities is considered: in table 1 the number of copied basic blocks is listed, and in table 2 the number of copied instructions. The reported results of the methods UCNS, CNS and CNSH are the averages of all possible split sequences.
The first column in tables 1 and 2 lists the procedure name, with the program name in parentheses. The second column gives the number of basic blocks or instructions of the procedure before an algorithm is applied. The other columns give the number of copies that result from the algorithms: the absolute number of copies, and a percentage that indicates the growth of the quantity with respect to the original quantity.
From the results of the ONS method we can conclude that node splitting does not have to lead to an excessive number of copies. Furthermore we can conclude that CNS outperforms UCNS. UCNS can lead to an enormous number of copies: the average percentage of growth in basic blocks is 241.7% and in code size it is 235.5%. CNS performs better, with a growth of 26.2% for basic blocks and 30.1% for the number of instructions, but there is still a big gap with the optimal case. When using the heuristic, controlled node splitting performs very close to the optimum. The average growth in basic blocks for the methods CNSH and ONS is respectively 3.1% and 3.0%. The growth
2. We used a RISC-like MOVE architecture. The MOVE project [13, 1] researches the generation of application-specific processors (ASPs) by means of Transport Triggered Architectures (TTAs).
Table 1: The number of copied basic blocks.
procedure # basic blocks ONS UCNS CNS CNSH
atof
output program(bison) 14 2 (14%) 9.7 (69%) 9.7 (69%) 2.0 (14%)
copy definition(bison) 119 2 (2%) 417.0 (350%) 27.7 (23%) 2.0 (2%)
copy guard(bison) 190 4 (2%) 2273.5 (1197%) 133.4 (70%) 4.0 (2%)
copy action(bison)
next file(expand) 17 1 (6%) 5.0 (29%) 5.0 (29%) 1.0 (6%)
re compile pattern(gawk) 787 1 (0%) 1202.7 (153%) 47.5 (6%) 1.0 (0%)
gs
copy block(gzip) 17 2 (12%) 2.5 (15%) 2.5 (15%) 2.0 (12%)
compile program(sed) 145 1 (1%) 80.1 (55%) 60.0 (41%) 1.0 (1%)
re search 2(sed) 486 20 (4%) 1328.7 (273%) 50.0 (10%) 21.0 (4%)
squeeze filter(tr) 33 8 (24%) 16.3 (49%) 15.5 (47%) 8.0 (24%)
total 2692 82 (3.0%) 6505.3 (241.7%) 704.1 (26.2%) 83 (3.1%)
in code size is for both methods 2.9%. Comparing the results of ONS and CNSH leads to the conclusion that CNSH performs very close to the optimum; in our experiments there was only one procedure with a very small difference.
The results in tables 1 and 2 show a substantial improvement when using CNSH. But the question is: what is the impact of the code expansion on the code size of the complete program when using a simpler method like UCNS? If the impact is small, then why bother, except for the theoretical aspects. In tables 3 and 4 the effects
for the complete code expansion are shown. All procedures of the benchmarks that have an irreducible control flow graph are converted to procedures with a reducible control flow graph. In table 3 we show the impact in basic blocks, and in table 4 the impact on the code size is listed. The first column of both tables lists the program name, the second column lists the total number of basic blocks or instructions. The remaining columns list the increase in basic blocks or in instructions for each method.
As can be seen from the tables, the impact of node splitting can be substantial in terms of the number of basic blocks or instructions. For UCNS the average increase in basic blocks is 15.4% and in instructions it is 13.6%. Using UCNS can even result in a code size increase of 80% for the program bison. When using controlled node splitting the increases are smaller and quite acceptable. CNSH results, as expected, in the smallest increases for both quantities. These results show the importance of a clever transformation of irreducible control flow graphs.
6 Conclusions
A method has been given which transforms an irreducible control flow graph into a reducible control flow graph. This gives us the opportunity to exploit ILP over a larger scope than a single basic block for any program. The method is based on node splitting. To achieve the minimum number of splits, the set of possible candidate nodes is limited to nodes with specific properties. Since splitting these nodes can result in a minimum resulting code size, the algorithm can be used to prevent uncontrolled growth of the code size. Because the computation time to determine the optimum split sequence is (very) large, a heuristic has been developed.
Table 2: The number of copied instructions.
procedure # instructions ONS UCNS CNS CNSH
atof
output program(bison) 59 9 (15%) 41.5 (70%) 41.5 (70%) 9.0 (15%)
copy
copy guard(bison) 880
copy action(bison) 858 9 (1%) 2961.4 (345%) 122.5 (14%) 9.0 (1%)
next file(expand)
re compile pattern(gawk) 2746 1 (0%) 4106.9 (150%) 218.5 (8%) 1.0 (0%)
gs
s LZWD read buf(gs) 228 62 (27%) 95.0 (42%) 95.0 (42%) 62.0 (27%)
copy block(gzip) 88 4 (5%) 7.5 (9%) 7.5 (9%) 4.0 (5%)
compile program(sed) 693 2 (0%) 391.4 (56%) 267.5 (39%) 2.0 (0%)
re search 2(sed) 1857 91 (5%) 4803.7 (259%) 227.5 (12%) 93.0 (5%)
squeeze filter(tr) 119 22 (18%) 57.0 (48%) 55.5 (47%) 22.0 (18%)
total 11588 335 (2.9%) 27291.7 (235.5%) 3484.1 (30.1%) 337 (2.9%)
The method with the heuristic is called controlled node splitting with heuristic. This method is applied to a set of
procedures which contain irreducible control flow graphs. The results are compared with the results of the other
methods; these methods are uncontrolled node splitting and controlled node splitting. From our experiments it
follows that uncontrolled node splitting can lead to an enormous number of copies: the average growth in code size per procedure is 235.5%. Controlled node splitting performs better (30.1%), but there is still a big gap with the optimal case. We observed that the average number of copies when using controlled node splitting with heuristic is very close to the optimum; the average growth in code size per procedure for both methods is 2.9%.
We also looked at the impact on the total code size of the benchmarks containing procedures with irreducible control flow graphs, using the same methods as for the analysis per procedure. For CNSH the impact on the total code size is very small, only 0.2% on average. The impact of UCNS is however surprisingly large: an average code size growth of 13.6%, with a maximum of 80% for bison.
Table 3: The increase of basic blocks per program.
Program # basic blocks ONS UCNS CNS CNSH
bison 4441 14 (0%) 3501.7 (79%) 222.8 (5%) 14.0 (0%)
expand 1226 1 (0%) 5.0 ( 0%) 5.0 (0%) 1.0 (0%)
gs 16514
sed 3823 21 (1%) 1408.8 (37%) 110.0 (3%) 22.0 (1%)
tr 1554 8 (1%) 16.3 ( 1%) 15.5 (1%) 8.0 (1%)
total 43116 108 (0.3%) 6631.2 (15.4%) 754.1 (1.7%) 109.0 (0.3%)
Table 4: The increase of instructions per program.

Program  # instructions  ONS          UCNS             CNS             CNSH
bison    19689           63 (0%)      15858.4 (80%)    983.8 (5%)      63.0 (0%)
expand
gs       85824           210 (0%)     2169.7 (3%)      1804.1 (2%)     210.0 (0%)
sed      17489           93 (1%)      5195.1 (30%)     495.0 (3%)      95.0 (1%)
tr
total    205073          452 (0.2%)   27884.0 (13.6%)  3695.4 (1.8%)   454.0 (0.2%)
Optimal Parallel Routing in Star Networks

Abstract. Star networks have recently been proposed as attractive alternatives to the popular hypercube for interconnecting processors on a parallel computer. In this paper, we present an efficient algorithm that constructs an optimal parallel routing in star networks. Our result improves previous results for the problem.

1 Introduction
The star network [2] has received considerable attention recently by researchers as a graph model
for interconnection network. It has been shown that it is an attractive alternative to the widely used
hypercube model. Like the hypercube, the star network is vertex- and edge-symmetric, strongly
hierarchical, and maximally fault tolerant. Moreover, it has a smaller diameter and degree than a hypercube with a comparable number of vertices.
The rich structural properties of the star networks have been studied by many researchers. The n-star network S_n is an (n - 1)-regular, (n - 1)-connected, and vertex-symmetric Cayley graph [1, 2]. Jwo, Lakshmivarahan, and Dhall [15] showed that the star networks are hamiltonian. Qiu, Akl, and Meijer [22] showed that the n-star network can be decomposed into (n - 1)! node-disjoint paths of length n - 1, and can be decomposed into (n - 2)! node-disjoint cycles of length (n - 1)n. Results
in embedding hypercubes into star networks have been obtained by M. Nigam, S. Sahni, and B.
Krishnamurthy [20] and by Miller, Pritikin, and Sudborough [18]. Broadcasting on star networks
has also been studied recently [3, 4, 14, 23].
Routing on star networks was first studied by Akers and Krishnamurthy [2], who derived a formula for the length of a shortest path between any two nodes in a star network and developed an efficient algorithm for constructing such a path. Recently, parallel routing, i.e., constructing node-disjoint paths, on star networks has received much attention. Sur and Srimani [24] demonstrated that n - 1 node-disjoint paths can be constructed between any two nodes in S_n in polynomial time. Dietzfelbinger, Madhavapeddy, and Sudborough [11] derived an improved algorithm that constructs n - 1 node-disjoint paths of length bounded by 4 plus the diameter of S_n. The algorithm was further improved by Day and Tripathi [10], who developed an efficient algorithm that constructs n - 1 node-disjoint paths of length bounded by 4 plus the distance from u to v in S_n. The problem was also
investigated by Jwo, Lakshmivarahan, and Dhall [16]. Misic and Jovanovic [19] derived a general
algebraic expression for all (not necessarily node-disjoint) shortest paths between any two nodes
in S n . Palis and Rajasekaran [21], Dietzfelbinger, Madhavapeddy, and Sudborough [11], Qiu, Akl,
and Meijer [22], and Chen and Chen [9] have considered the problem of node-disjoint paths between
two sets of nodes in a star network.
In this paper, we will improve the previous results on node-to-node parallel routing in star
networks by developing an efficient algorithm that constructs optimal parallel routing between any
two nodes in a star network. More specifically, let u and v be any two nodes in the n-star network
Department of Computer Science and Engineering, Tatung Institute of Technology, Taipei 10451, Taiwan, R.O.C.
Supported in part by Engineering Excellence Award from Texas A&M University. Email: ccchen@cse.ttit.edu.tw.
y Corresponding author. Department of Computer Science, Texas A&M University, College Station,
3112. Supported in part by the National Science Foundation under Grant CCR-9110824. Email: chen@cs.tamu.edu.
S_n, and let dist(u, v) be the distance from u to v in S_n. The bulk length of a group of n - 1 node-disjoint paths connecting u and v in S_n is defined to be the length of the longest path in the group. Define the bulk distance Bdist(u, v) between u and v to be the minimum bulk length over all groups of n - 1 node-disjoint paths connecting u and v in S_n. We develop an O(n^2 log n) time algorithm that, given two nodes u and v in S_n, constructs a group of n - 1 node-disjoint paths of bulk length Bdist(u, v) that connect the nodes u and v in S_n.
Our algorithm involves a careful analysis of the lower bound on the bulk distance Bdist(u, v) between two nodes u and v in S_n, a non-trivial reduction from the parallel routing problem on star networks to a combinatorial problem called Partition Matching, a subtle solution to the Partition Matching problem, and a number of routing algorithms for different kinds of pairs of nodes in a star network. The basic idea of the algorithm can be roughly described as follows. Let u and v
star network. The basic idea of the algorithm can be roughly described as follows. Let u and v
be two nodes in the n-star network S n . According to Day and Tripathi [10], the bulk distance
Bdist(u; v) is equal to dist(u; v) or 4. We first derive a necessary and sufficient
condition for a pair of nodes u and v to have bulk distance dist(u; 4. For a pair u and v
whose bulk distance is less than dist(u; v) + 4, we develop an efficient algorithm that constructs
a group of node-disjoint paths of bulk length dist(u; v) between u and v. Finally,
an efficient algorithm, which is obtained from a reduction of an efficient algorithm solving the
Partition Matching problem, is developed that constructs the maximum number of node-disjoint
shortest paths (of length dist(u; v)) between u and v. Combining all these analysis and algorithms
gives us an efficient algorithm that constructs an optimal parallel routing on star networks. We
should also point out that the running time of our algorithm is almost optimal (differs at most by
a log n factor) since a lower
running time of parallel routing algorithms on star
networks can be easily derived.
2 Preliminary
A permutation of the symbols 1, 2, ..., n can be given as a product of disjoint cycles [5], which is called the cycle structure of the permutation. A cycle is nontrivial if it contains more than one symbol; otherwise the cycle is trivial. The cycle containing the symbol 1 will be called the primary cycle in the cycle structure. A transposition π[1, i] on the permutation u = a_1 a_2 ... a_n exchanges the positions of the first symbol and the ith symbol in u: π[1, i](u) = a_i a_2 ... a_{i-1} a_1 a_{i+1} ... a_n. It is sometimes more convenient to write the transposition π[1, i](u) on u as π[a_i](u), to indicate that the transposition exchanges the positions of the first symbol and the symbol a_i in u. Let us consider how a transposition changes the cycle structure of a permutation. Write u in its cycle structure.

If a_i is not in the primary cycle, then π[1, i] "merges" the cycle containing a_i into the primary cycle. More precisely, suppose that the cycle containing a_i is (a_i b_2 ... b_s) and the primary cycle is (d_1 ... d_t 1) (note that each cycle can be cyclically permuted and the order of the cycles is irrelevant); then in the permutation π[1, i](u) these two cycles are replaced by the single cycle

    (a_i b_2 ... b_s d_1 ... d_t 1).

If a_i is in the primary cycle, then π[1, i] "splits" the primary cycle into two cycles. More precisely, suppose that the primary cycle is (d_1 ... d_{j-1} a_i ... d_t 1), with a_i ≠ d_1 (note that we assume i > 1); then in π[1, i](u) the primary cycle is replaced by the two cycles

    (a_i ... d_t 1)(d_1 ... d_{j-1}).

In particular, if a_i = d_2, so that the split-off cycle (d_1) is trivial, then we say that the transposition π[1, i] "deletes" the symbol d_1 from the primary cycle.

Note that if a symbol a_i is in a trivial cycle in the cycle structure of the permutation u = a_1 a_2 ... a_n, then the symbol is in its "correct" position, i.e., a_i = i, and that if a symbol is in a nontrivial cycle, then the symbol is not in its correct position. Denote by ε the identity permutation 12 ... n, in which every symbol is in a trivial cycle.
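To make the merge and split operations concrete, here is a small sketch (our own illustration, not code from the paper) that represents a node of S_n as a Python tuple and applies a transposition π[1, i]:

```python
def transpose(p, i):
    """pi[1, i]: exchange the symbols in positions 1 and i (1-indexed)."""
    q = list(p)
    q[0], q[i - 1] = q[i - 1], q[0]
    return tuple(q)

def cycle_structure(p):
    """Cycles of p viewed as the map position -> symbol; each cycle is
    rotated so its smallest symbol comes last, which puts the symbol 1
    at the end of the primary cycle."""
    seen, out = set(), []
    for s in range(1, len(p) + 1):
        if s in seen:
            continue
        c, x = [], s
        while x not in seen:
            seen.add(x)
            c.append(x)
            x = p[x - 1]
        j = c.index(min(c))
        out.append(tuple(c[j + 1:] + c[:j + 1]))
    return out

u = (2, 1, 4, 3)        # cycle structure (2 1)(4 3)
v = transpose(u, 4)     # pi[3]: the symbol 3 sits in position 4
```

Applying π[3] merges the cycle (4 3) into the primary cycle (2 1), and `cycle_structure(v)` reports the single cycle (3 4 2 1), as the merge rule predicts.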
The n-dimensional star network (or simply the n-star network) S_n is an undirected graph consisting of n! nodes labeled with the n! permutations on the symbols 1, 2, ..., n. There is an edge between two nodes u and v in S_n if and only if there is a transposition π[1, i], 2 ≤ i ≤ n, such that π[1, i](u) = v. A path from a node u to a node v in the n-star network S_n corresponds to a sequence of applications of the transpositions π[1, i], 2 ≤ i ≤ n, starting from the permutation u and ending at the permutation v. A (node-to-node) parallel routing in S_n is to construct the maximum number of node-disjoint paths between two given nodes in S_n. Since the n-star network S_n is (n - 1)-connected [2] and every node in S_n is of degree n - 1, by Menger's theorem [17] a parallel routing in S_n should produce exactly n - 1 node-disjoint paths between two given nodes. Moreover, since the n-star network is vertex symmetric [2], a parallel routing in the n-star network between any two given nodes can be easily mapped to a parallel routing between a node and the identity node ε in S_n. We will write dist(u) and Bdist(u) for the distance dist(u, ε) and the bulk distance Bdist(u, ε), respectively. By the definitions, we always have Bdist(u) ≥ dist(u).
Let u be a node in the n-star network with cycle structure u = c_1 c_2 ... c_k e_1 ... e_m, where the c_i are nontrivial cycles and the e_j are trivial cycles. If we further let n_i denote the number of symbols in the cycle c_i, and let l = n_1 + n_2 + ... + n_k, then the distance from the node u to the identity node ε is given by the following formula [2]:

    dist(u) = l + k        if the primary cycle is a trivial cycle,
    dist(u) = l + k - 2    if the primary cycle is a nontrivial cycle.
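The distance formula can be checked by brute force on a small star network. The following sketch (our own, with hypothetical helper names) compares the formula against breadth-first-search distances over all nodes of S_4:

```python
from itertools import permutations
from collections import deque

def dist_formula(p):
    """l + k, minus 2 when the symbol 1 lies in a nontrivial cycle, where
    l is the number of symbols in nontrivial cycles and k their count."""
    seen, l, k, one_nontrivial = set(), 0, 0, False
    for s in range(1, len(p) + 1):
        if s in seen:
            continue
        c, x = [], s
        while x not in seen:
            seen.add(x)
            c.append(x)
            x = p[x - 1]
        if len(c) > 1:
            k += 1
            l += len(c)
            one_nontrivial = one_nontrivial or (1 in c)
    return l + k - (2 if one_nontrivial else 0)

# breadth-first search over S_4 from the identity; the edges are the
# generators pi[1, i], 2 <= i <= n
n = 4
ident = tuple(range(1, n + 1))
bfs, queue = {ident: 0}, deque([ident])
while queue:
    p = queue.popleft()
    for i in range(1, n):
        q = list(p)
        q[0], q[i] = q[i], q[0]
        q = tuple(q)
        if q not in bfs:
            bfs[q] = bfs[p] + 1
            queue.append(q)

assert all(bfs[p] == dist_formula(p) for p in permutations(ident))
```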
Combining this formula with the above discussion on the effect of applying a transposition to a permutation, we derive the following necessary rules for tracing a shortest path from the node u to the identity node ε in the n-star network S_n.

Shortest Path Rules

Rule 1. If the primary cycle is a trivial cycle in u, then in the next node on any shortest path from u to ε, a nontrivial cycle c_i is merged into the primary cycle. This corresponds to applying a transposition π[a] to u with a ∈ c_i.

Rule 2. If the primary cycle c_1 = (a_11 a_12 ... a_1n_1 1) is a nontrivial cycle in u, then in the next node on any shortest path from u to ε, either a nontrivial cycle c_i ≠ c_1 is merged into the primary cycle (this corresponds to applying a transposition π[a] to u, where a ∈ c_i), or the symbol a_11 is deleted from the primary cycle c_1 (this corresponds to applying the transposition π[a_12] to u, where we read a_12 as the symbol 1 when n_1 = 1).

Fact 2.1. A shortest path from u to ε in S_n is obtained by a sequence of applications of the Shortest Path Rules, starting from the permutation u.

Fact 2.2. If a symbol a ≠ 1 is in a trivial cycle in u, then a will stay in a trivial cycle in any node on a shortest path from u to ε.

Fact 2.3. If an edge [u, v] in S_n does not lead to a shortest path from u to ε, then dist(v) = dist(u) + 1. Consequently, let P be a path from u to ε in which exactly k edges do not follow the Shortest Path Rules; then the length of the path P is equal to dist(u) + 2k.

Fact 2.4. [10] There are no cycles of odd length in an n-star network. Consequently, the length of any path from a node u to the node ε is equal to dist(u) plus an even number.
Given a node u = c_1 c_2 ... c_k e_1 ... e_m in the n-star network S_n, where the c_i are nontrivial cycles and the e_j are trivial cycles, a shortest path from u to ε can be constructed through the following two-stage process:

1. Merge, in an arbitrary order, each of the nontrivial cycles into the primary cycle. This will result in a node whose cycle structure has a single nontrivial cycle, which is the primary cycle. For example, suppose that c_1 = (a_11 ... a_1n_1 1) is the primary cycle in u and that c_i = (a_i1 ... a_in_i) for 2 ≤ i ≤ k. Then, by applying the transpositions π[a_21], π[a_31], ..., π[a_k1], in this order, we obtain a node with cycle structure

    (a_k1 ... a_kn_k  ...  a_21 ... a_2n_2  a_11 ... a_1n_1  1)        (1)

2. Delete each of the symbols in the primary cycle until the node ε is reached. For example, suppose that we start with the node with cycle structure as described in Equation (1); then we apply the transpositions π[a_k2], π[a_k3], ..., π[1], in this order, to reach the node ε.

The above process will be called the Merge-Delete process. It is easy to verify, using the Shortest Path Rules, that the Merge-Delete process produces a shortest path from the node u to the node ε in the n-star network S_n. The most important property of the Merge-Delete process is that in each node (except the node ε) of the constructed shortest path, the primary cycle is of the form (... a_1n_1 1), where the symbol a_1n_1 is fixed for all nodes on the path.
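The Merge-Delete process can be transcribed directly. The sketch below (our own code, using the tuple representation of permutations) builds the path and checks that its length matches the distance formula:

```python
def merge_delete_path(u):
    """Shortest path from u to the identity: first merge each nontrivial
    non-primary cycle into the primary cycle, then repeatedly delete the
    first symbol of the primary cycle."""
    p, path = list(u), [tuple(u)]

    def pi(a):                       # pi[a]: swap position 1 with symbol a
        i = p.index(a)
        p[0], p[i] = p[i], p[0]
        path.append(tuple(p))

    # nontrivial cycles of u not containing the symbol 1
    seen, others = set(), []
    for s in range(1, len(p) + 1):
        if s in seen:
            continue
        c, x = [], s
        while x not in seen:
            seen.add(x)
            c.append(x)
            x = p[x - 1]
        if len(c) > 1 and 1 not in c:
            others.append(c)

    for c in others:                 # stage 1: merge via a symbol of c
        pi(c[0])
    while p[0] != 1:                 # stage 2: delete the front symbol
        pi(p[p[0] - 1])              # pi[a_12]: the symbol at position a_11
    return path

# example: u = (2 1)(4 3), so dist(u) = 4 + 2 - 2 = 4
path = merge_delete_path((2, 1, 4, 3))
assert len(path) - 1 == 4 and path[-1] == (1, 2, 3, 4)
```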
Parallel routing from a node u to ε in S_n is particularly simple when the primary cycle in u is a trivial cycle [10, 16]. For completeness, we describe the routing process for this case here. Let u be such a node in S_n. For each symbol i, 2 ≤ i ≤ n, construct a path P_i as follows. The path P_i starts with the edge [u, π[i](u)], followed by a shortest path from the node π[i](u) to ε, which is obtained by applying the Merge-Delete process starting from the node π[i](u). It is easy to verify that if i is in a nontrivial cycle in u, then the path P_i has length dist(u), and that if i is in a trivial cycle in u then the path P_i has length dist(u) + 2. On the other hand, if a symbol i, 2 ≤ i ≤ n, is in a trivial cycle in u, then any path from u to ε via π[i](u) has length at least dist(u) + 2. Since the node u has degree n - 1, any group of n - 1 node-disjoint paths from u to ε contains a path whose first edge is [u, π[i](u)]. This implies that the bulk distance Bdist(u) is at least dist(u) + 2 if there is a symbol i in a trivial cycle in u, for 2 ≤ i ≤ n. Therefore, to show that the constructed paths have bulk length Bdist(u), we only need to prove that all these paths are node-disjoint.

In fact, since the first edge on P_i is via π[i](u), and the subpath from π[i](u) to ε on P_i follows the shortest path obtained by the Merge-Delete process, it is easy to verify that for each path P_i there is a symbol s_i ∈ {2, ..., n} uniquely associated with the path P_i such that for each interior node on the path P_i, the primary cycle is of the form (... s_i 1). Therefore, no two of the paths P_i can share a common interior node, and the paths P_i, 2 ≤ i ≤ n, are node-disjoint. This gives a group of n - 1 node-disjoint paths of bulk length Bdist(u) from u to ε.
Therefore, throughout the rest of this paper, we discuss the parallel routing problem in star networks based on the following assumption:

Assumption 2.1. The node u in the n-star network S_n has cycle structure u = c_1 c_2 ... c_k e_1 ... e_m, where c_1, ..., c_k are nontrivial cycles, e_1, ..., e_m are trivial cycles, and the primary cycle c_1 = (a_1 a_2 ... a_d 1) is a nontrivial cycle.
3 Nodes with bulk distance dist(u) + 4
Day and Tripathi [10] have developed a routing algorithm that constructs n - 1 node-disjoint paths of bulk length dist(u) + 4 from a node u to ε in the n-star S_n. The basic idea of Day-Tripathi's algorithm can be described as follows. For each symbol a ≠ 1, construct a path P_a in which each node has a cycle of the form (... a 1). This cycle distinguishes the path P_a from the other constructed paths. Let u = c_1 c_2 ... c_k e_1 ... e_m, with c_1 = (a_1 a_2 ... a_d 1), be the node as described in Assumption 2.1. We describe the path P_a in three different cases.
Case 1. The symbol a is in a nontrivial cycle c_i, for i > 1. Without loss of generality, let c_i = (... a). In this case, the first four nodes u_1, u_2, u_3, u_4 on the path P_a are obtained by transpositions chosen so that the fourth node u_4 contains a cycle of the form (... a 1). The rest of the path P_a is constructed by applying the Merge-Delete process, starting from the node u_4.

Case 2. The symbol a is in a trivial cycle e_j = (a). Then the first four nodes u_1, u_2, u_3, u_4 on the path P_a are obtained similarly, so that u_4 again contains a cycle of the form (... a 1), and the rest of the path P_a is obtained by applying the Merge-Delete process starting from the node u_4.

Case 3. The symbol a is in the cycle c_1. Let a = a_j, where 1 ≤ j ≤ d. Then the first four nodes on the path P_a are constructed accordingly (when j = d we simply let u_4 = u and discard the nodes u_1, u_2, and u_3), and the rest of the path P_a is obtained by applying the Merge-Delete process starting from the node u_4.
It is easy to verify that in each of the above three cases, the nodes u_2 and u_3 on the path P_a are not contained in any other constructed path P_a' for a' ≠ a. It is also easy to check that the fourth node u_4 on the path P_a contains a cycle of the form (... a 1). Since the Merge-Delete process keeps the cycle pattern along the path P_a, we conclude that all these n - 1 paths P_a, a ≠ 1, are node-disjoint. Finally, by examining each constructed path P_a, we find that at most two edges on P_a do not follow the Shortest Path Rules. By Fact 2.3, the path P_a has length at most dist(u) + 4. This completes the description of Day-Tripathi's algorithm.
Combining Day-Tripathi's algorithm with Fact 2.4, we conclude that for any node u in the n-star network S_n, the bulk distance Bdist(u) is equal to dist(u), dist(u) + 2, or dist(u) + 4. In this section, we derive a necessary and sufficient condition for a node u to have bulk distance dist(u) + 4.

Let P be a path in S_n from u to ε. We say that the path P leaves u with symbol a if the second node on P is π[a](u), and we say that the path P enters ε with symbol a' if the node next to the last node ε on P is π[a'](ε), which has a single nontrivial cycle (a' 1).
Lemma 3.1. Let u = c_1 c_2 ... c_k e_1 ... e_m, with c_1 = (a_1 a_2 ... a_d 1), be the node in the n-star network S_n as described in Assumption 2.1. Suppose that m > min{2^{k-2}, |{a_d} ∪ c_2 ∪ ... ∪ c_k|}. Then any group of n - 1 node-disjoint paths from u to ε has bulk length at least dist(u) + 4.
Proof. Assume that P_1, ..., P_{n-1} are n - 1 node-disjoint paths from u to ε of bulk length bounded by dist(u) + 2. We show that in this case we must have m ≤ min{2^{k-2}, |{a_d} ∪ c_2 ∪ ... ∪ c_k|}.

First note that for each symbol b that is in a trivial cycle in u, one of the paths P_1, ..., P_{n-1} must leave u with b, and one of the paths P_1, ..., P_{n-1} must enter ε with b.

Suppose that a path P_i leaves u with a symbol b in a trivial cycle in u. By the Shortest Path Rules, the node π[b](u) does not lead to a shortest path from u to ε. By Fact 2.3, dist(π[b](u)) = dist(u) + 1. On the other hand, the length of P_i is bounded by dist(u) + 2. Thus, starting from the node π[b](u), the path P_i must strictly follow the Shortest Path Rules. In particular, no node on the path P_i (including the node π[b](u)) can contain a cycle of the form (b' 1), where b' is a symbol in a trivial cycle in u. This implies that the path P_i must enter ε with a symbol in the set {a_d} ∪ c_2 ∪ ... ∪ c_k.

Since there are exactly m of the paths P_1, ..., P_{n-1} leaving u with symbols in trivial cycles in u, and since all these paths are node-disjoint and must enter ε with symbols in {a_d} ∪ c_2 ∪ ... ∪ c_k, we conclude that m ≤ |{a_d} ∪ c_2 ∪ ... ∪ c_k|.
Now again consider the path P_i leaving u with a symbol b in a trivial cycle in u. Let u_i be the first node on P_i in which the primary cycle is a trivial cycle. Note that u_i ≠ u and u_i ≠ π[b](u). Moreover, since the nontrivial cycles in the node π[b](u) are (b a_1 ... a_d 1), c_2, ..., c_k, and the path P_i strictly follows the Shortest Path Rules after the node π[b](u), every nontrivial cycle in the node u_i is a cycle in the set {c_2, ..., c_k}. Therefore, the node u_i, hence the path P_i, corresponds to a subset of the set {c_2, ..., c_k}.
Similarly, suppose that P_h is a path among P_1, ..., P_{n-1} that enters ε with a symbol b in a trivial cycle in u. Since at the node u the symbol b is in a trivial cycle, while at the node π[b](ε) on the path P_h the symbol b is in a nontrivial cycle, we can find two consecutive nodes u_h and v_h on the path P_h such that the symbol b is in a trivial cycle in the node u_h while it is in a nontrivial cycle in the node v_h. By Fact 2.2, the edge [u_h, v_h] on the path P_h does not follow the Shortest Path Rules. By our assumption, the length of the path P_h is bounded by dist(u) + 2. We conclude that the subpaths of P_h from u to u_h and from v_h to ε, respectively, must strictly follow the Shortest Path Rules. Note that by our assumption on the nodes u_h and v_h, the symbols b and 1 are in the same cycle in the node v_h. Since the node π[b](ε), in which all cycles other than (b 1) are trivial, is on the subpath of P_h from v_h to ε, in order for the subpath of P_h from v_h to ε to strictly follow the Shortest Path Rules, the node v_h must have a cycle of the form (b 1). Consequently, the symbol 1 is in a trivial cycle in the node u_h. Now since the subpath of P_h from u to u_h strictly follows the Shortest Path Rules, every nontrivial cycle in the node u_h must be one of the nontrivial cycles in the node u. This proves that the node u_h, and thus the path P_h, corresponds to a subset of the set {c_2, ..., c_k}.
As we have shown before, no path in P_1, ..., P_{n-1} can both leave u with a symbol b and enter ε with a symbol b', where b and b' are symbols in trivial cycles in the node u. Therefore, there are exactly 2m paths among P_1, ..., P_{n-1} that leave u or enter ε with a symbol in a trivial cycle in u. Each of these 2m paths corresponds to a subset of the set {c_2, ..., c_k}. Since all these paths are node-disjoint, we conclude that there are at least 2m different subsets of the set {c_2, ..., c_k}. That is, 2m ≤ 2^{k-1}, or equivalently, m ≤ 2^{k-2}.
Combined with Day-Tripathi's algorithm, Lemma 3.1 provides a sufficient condition for a node u in the n-star network S_n to have bulk distance dist(u) + 4. In the following, we show that this condition is also necessary. For this, we demonstrate that when m ≤ min{2^{k-2}, |{a_d} ∪ c_2 ∪ ... ∪ c_k|}, we can always construct n - 1 node-disjoint paths from u to ε of bulk length dist(u) + 2.

We first make some conventions. We assume that n ≥ 3, since the parallel routing on an n-star network S_n for n ≤ 2 is trivial. Let c be a cycle, given in any fixed cyclic order. We will denote by [c] the sequence of the symbols in the cycle c. Therefore, ([c] a_1 ... a_d) is a cycle starting with the symbols in the cycle c followed by the symbols a_1, ..., a_d. Recall that a cycle can be arbitrarily cyclically rotated, still resulting in the same cycle. In many cases, we will discard irrelevant trivial cycles in a cycle structure.
Let u = c_1 c_2 ... c_k e_1 ... e_m, with c_1 = (a_1 a_2 ... a_d 1), be the node in the n-star network S_n as described in Assumption 2.1, and suppose m ≤ min{2^{k-2}, |{a_d} ∪ c_2 ∪ ... ∪ c_k|}. We describe our construction in three different cases.
Case I. The number k of nontrivial cycles in u is equal to 1.

By the condition m ≤ 2^{k-2}, we have m = 0. Therefore, the cycle structure of the node u consists of a single (nontrivial) cycle (a_1 a_2 ... a_{n-1} 1). Note that the unique cycle in u contains at least 3 symbols since n ≥ 3. The n - 1 node-disjoint paths from u to ε are given as follows.

A path P_1 leaving u with a_2 and entering ε with a_{n-1} is given by

    (a_1 a_2 ... a_{n-1} 1)  -D->  ε,

where -D-> stands for a sequence of transpositions that repeatedly deletes a symbol from the primary cycle.

A path P_2 leaving u with 1 and entering ε with a_1 is given by

    (a_1 a_2 ... a_{n-1} 1)  -π[1]->  (a_1 a_2 ... a_{n-1})(1)  -π[a_2]->  (a_2 ... a_{n-1} a_1 1)  -D->  ε.

For 3 ≤ j ≤ n - 1, there is a path P_j leaving u with a_j and entering ε with a_{j-1}, given by

    (a_1 ... a_{n-1} 1)  -π[a_j]->  (a_j ... a_{n-1} 1)(a_1 ... a_{j-1})  -D->  (1)(a_1 ... a_{j-1})  -π[a_1]->  (a_1 ... a_{j-1} 1)  -D->  ε.

It is easy to verify that each path P_j, 1 ≤ j ≤ n - 1, has at most one edge that does not follow the Shortest Path Rules. Thus, all these constructed paths have length at most dist(u) + 2. The path P_1 keeps a cycle of the form (... a_{n-1} 1). The path P_2 keeps a cycle of the form (... a_1 1). For 3 ≤ j ≤ n - 1, the first part of the path P_j keeps a distinguished cycle (a_1 ... a_{j-1}), and the second part of the path P_j keeps a cycle of the form (... a_{j-1} 1). Therefore, all these constructed paths are node-disjoint. This gives in this case a group of n - 1 node-disjoint paths of bulk length bounded by dist(u) + 2 from u to ε.
Case II. The number k of nontrivial cycles in u is at least 2, and the number m of trivial cycles in u is 0.

In this case, the node u can be written as u = (a_1 ... a_d 1) c_2 ... c_k.

A path P_1 leaving u with 1 and entering ε with a_d is given by

    (a_1 ... a_d 1) c_2 ... c_k  -π[1]->  (a_1 ... a_d)(1) c_2 ... c_k  -M->  ([c_k] ... [c_2] 1)(a_1 ... a_d)  -D->  (1)(a_1 ... a_d)  -π[a_1]->  (a_1 ... a_d 1)  -D->  ε,

where -M-> stands for a sequence of transpositions that repeatedly merges a nontrivial cycle into the primary cycle.

For 2 ≤ j ≤ d, there is a path P_j leaving u with a_j and entering ε with a_{j-1}. Note that this group of paths is constructed only when d ≥ 2.

For each symbol a ∈ c_2 ∪ ... ∪ c_k, there is a path P_a leaving u with a and entering ε with a. Since each nontrivial cycle can be cyclically rotated and the order of the cycles c_2, ..., c_k is irrelevant, we can assume, without loss of generality, that a is the first symbol of the cycle c_2.

It is again easy to verify that all these n - 1 paths are of length at most dist(u) + 2 and are node-disjoint. Thus, this gives in this case a group of n - 1 node-disjoint paths of bulk length bounded by dist(u) + 2 from u to ε.
Case III. The number k of nontrivial cycles in u is at least 2, and the number m of trivial cycles in u is at least 1.

In this case, the node u can be written as u = (a_1 ... a_d 1) c_2 ... c_k (b_1) ... (b_m), where c_2, ..., c_k are nontrivial cycles and (b_1), ..., (b_m) are trivial cycles.

A path P_1 leaving u with b_1 and entering ε with a_d is constructed first. A second path P'_1 leaving u with a_2 and entering ε with b_1 is then constructed; if d = 1, so that the symbol a_2 does not exist, the path P'_1 instead leaves u with the symbol 1 and enters ε with b_1.
For each i, 2 ≤ i ≤ m, we construct two paths P_i and P'_i as follows. First mark all symbols in c_2 ∪ ... ∪ c_k as "unused", and mark all subsets of the set S = {c_2, ..., c_k} as "unused". Then for each i, 2 ≤ i ≤ m, pick an unused symbol a'_i and an unused subset S_i of the set S such that S_i ≠ S and a'_i is contained in a cycle in the subset S_i. Without loss of generality, assume that the cycle of S_i containing a'_i is listed first in S_i. The path P_i leaves u with a'_i and enters ε with b_i, and the path P'_i leaves u with b_i and enters ε with a'_i. Now mark the symbol a'_i and the subsets S_i and S - S_i of the set S as "used".
We must justify that the construction of the paths P_i and P'_i is always possible. Since m ≤ |{a_d} ∪ c_2 ∪ ... ∪ c_k|, there are at least m - 1 symbols in c_2 ∪ ... ∪ c_k. Therefore, for each i, 2 ≤ i ≤ m, we should always be able to find an unused symbol a'_i. Now fix the symbol a'_i, and let c' be the cycle containing it. There are 2^{k-2} - 1 different subsets S' of the set S such that S' ≠ S and a cycle in S' contains the symbol a'_i. Since m ≤ 2^{k-2}, at least one of such subsets has not been used for constructing the previous paths P_j and P'_j, 2 ≤ j < i (note that of the two subsets S_j and S - S_j used for the paths P_j and P'_j, only one of them has a cycle containing the symbol a'_i). Also note that if S' is an unused subset of S, then S - S' is also an unused subset of S. Therefore, we are always able to find an unused subset S_i of S such that S_i ≠ S and a cycle in S_i contains the symbol a'_i. This ensures the possibility of the construction of the paths P_i and P'_i.
If the cycle c_1 = (a_1 ... a_d 1) contains more than two symbols, i.e., d ≥ 2, then we also construct the following d - 1 paths: the path Q_2 leaves u with the symbol 1 and enters ε with a_1, and for 3 ≤ j ≤ d, the path Q_j leaves u with a_j and enters ε with a_{j-1}.

Finally, for each symbol a in c_2 ∪ ... ∪ c_k that is not used in constructing the paths P_i and P'_i, 2 ≤ i ≤ m, we construct a path P_a leaving u with a and entering ε with a. Without loss of generality, let the symbol a be in the cycle c_2.
The above process constructs n - 1 paths from the node u to ε. It is easy to verify that each of the constructed paths has at most one edge that does not follow the Shortest Path Rules. By Fact 2.3, all these paths are of length at most dist(u) + 2. What remains is to show that all these paths are node-disjoint.
On each of the constructed paths, the nodes have a special cycle pattern to ensure that the path is node-disjoint from the other paths.

1. Path P_1 consists of two parts of different format: each node in the first part has a cycle of the form (... b_1 ... 1), and each node in the second part has a single nontrivial cycle of the form (... a_d 1), where the symbol a_d is uniquely associated with the path P_1.

2. Path P'_1 consists of two parts of different format: each node in the first part has a cycle structure in which the symbol a_1 is in a trivial cycle, and each node in the second part has a cycle of the form (... b_1 1), where the symbol b_1 is uniquely associated with the path P'_1.

3. Path P_i, 2 ≤ i ≤ m, consists of three parts of different format: each node in the first part has a cycle of the form (... a'_i ... 1), each node in the second part has a cycle structure whose nontrivial cycles form the subset S_i, where the subset S_i is uniquely associated with the path P_i, and each node in the third part has a cycle of the form (... b_i 1), where the symbol b_i is uniquely associated with the path P_i. Note that the assumption S_i ≠ S is necessary here; otherwise, the path P_i would enter ε with the symbol a_d and would share a common interior node with the path P_1.

Similarly, path P'_i, 2 ≤ i ≤ m, consists of three parts of different format: each node in the first part has a cycle of the form (... b_i ... 1), each node in the second part has a cycle structure whose nontrivial cycles form the subset S - S_i, where the subset S - S_i is uniquely associated with the path P'_i, and each node in the third part has a cycle of the form (... a'_i 1), where the symbol a'_i is uniquely associated with the path P'_i. Again, here the condition S_i ≠ S is necessary; otherwise, the third node of the path P'_i would be the node u again, and the path P'_i would then leave u with the symbol a_2 and share an interior node with the path P'_1.

4. The second node of the path Q_2 has a distinguished cycle (a_1 ... a_d), and the rest of the nodes in the path Q_2 have a cycle of the form (... a_1 1), where the symbol a_1 is uniquely associated with the path Q_2.

5. The path Q_j, 3 ≤ j ≤ d, consists of two parts of different format: each node in the first part has a distinguished cycle (a_1 ... a_{j-1}), and each node in the second part has a cycle of the form (... a_{j-1} 1), where the symbol a_{j-1} is uniquely associated with the path Q_j.

6. For each symbol a that is not used in the paths P_i and P'_i, 2 ≤ i ≤ m, the second and third nodes in the path P_a each contain a cycle involving a in a position that distinguishes them from the nodes of the other paths, and the rest of the nodes in P_a have a cycle of the form (... a 1), where the symbol a is uniquely associated with the path P_a.
Therefore, all these n - 1 constructed paths are node-disjoint. Summarizing all the above discussions, we obtain the following lemma.
Lemma 3.2. Let u = c_1 c_2 ... c_k e_1 ... e_m, with c_1 = (a_1 a_2 ... a_d 1), be the node in the n-star network S_n as described in Assumption 2.1. If m ≤ min{2^{k-2}, |{a_d} ∪ c_2 ∪ ... ∪ c_k|}, then a group of n - 1 node-disjoint paths of bulk length dist(u) + 2 from u to ε can be constructed in time O(n^2 log n).
Proof. The construction of a group of node-disjoint paths of bulk length dist(u) + 2 from
u to " has been given in the above discussion.
For each path P among the paths P_i, P'_i, and P_a, where a is a symbol in ∪_{i=1}^k c_i that
is not used in the construction of the paths P_i and P'_i, 1 ≤ i ≤ m, the construction of P can be
obviously implemented in time O(n). (Note that according to [2], dist(u) = O(n).)
To construct the paths P_i and P'_i, we need to find an unused symbol a'_i. The symbol a'_i can be
simply found in time O(n) if we keep a record for each symbol to indicate whether it has been used.
Once the symbol a'_i is decided, we need to find an unused subset S_i of the set S. For
this, suppose that the symbol a'_i is in cycle c₀. We pick arbitrarily another
r cycles from the set S (note that by our assumption, log m + 2 ≤ k). Now instead of using the set S, we
use the set S' consisting of c₀ and the r picked cycles. Since the set S' has 2^r subsets containing c₀,
and the cycle c₀ is in S', at least one of these subsets has not been used. These subsets of S'
can be enumerated in a systematic manner in time O(r 2^r log n) since each
such subset contains at most r = O(log m) cycles. Now an unused subset of S' is also an unused
subset of S, with which we can construct the paths P_i and P'_i in time O(n). In conclusion, the
paths P_i and P'_i can be constructed in time O(n log n) for each i, and the
total time needed to construct the paths P_i and P'_i, 1 ≤ i ≤ m,
is bounded by O(n² log n).
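The unused-subset search in the proof can be sketched as a systematic scan of the power set, smallest subsets first. This is an illustration of the idea only: the names are invented here, and the side constraint that the subset contain the cycle c₀ is ignored.

```python
from itertools import combinations

def find_unused_subset(S_prime, used):
    """Scan subsets of S_prime (a tuple of cycles) systematically, smallest
    first, and return one not in `used`. If fewer than 2**len(S_prime)
    subsets are used, one must exist."""
    for size in range(len(S_prime) + 1):
        for subset in combinations(S_prime, size):
            if frozenset(subset) not in used:
                return frozenset(subset)
    return None  # only reached when every subset is already used

# hypothetical example: three cycles, three subsets already taken
cycles = ("c0", "c1", "c2")
used = {frozenset(), frozenset({"c0"}), frozenset({"c0", "c1"})}
free = find_unused_subset(cycles, used)
assert free is not None and free not in used
```

Because at most r cycles are scanned per subset, comparing each candidate against a record of used subsets gives the O(r 2^r log n) enumeration cost quoted in the proof.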
Combining Lemma 3.1 and Lemma 3.2, we obtain immediately the following theorem.
Theorem 3.3 Let u be the node in the n-star network S_n
as described in Assumption 2.1. The bulk distance Bdist(u) from the node u to the identity node "
is dist(u) + 2.
4 The maximum number of node-disjoint shortest paths
In this section we diverge to a slightly different problem. Let u be the node in the n-star network
S n as described in Assumption 2.1. How many node-disjoint shortest paths (of length dist(u)) can
we find from u to "?
This problem is closely related to a combinatorial problem, called Maximum Partition Matching,
formulated as follows:
Let S = {c₁, c₂, …, c_k} be a collection of subsets of the universal set U = {1, 2, …, n}
such that ∪_{i=1}^k c_i ⊆ U. A partition matching (of order m)
of S consists of two ordered subsets L = (a₁, …, a_m) and R = (b₁, …, b_m) of m
elements of U (the subsets L and R may not be disjoint), together with a sequence of m
distinct partitions (A₁, B₁), …, (A_m, B_m) of S, such that for all i, 1 ≤ i ≤ m,
a_i is contained in a subset in the collection A_i and b_i is contained in a subset in the
collection B_i. The Maximum Partition Matching problem is to construct a partition
matching of order m for a given collection S with m maximized.
Theorem 4.1 The Maximum Partition Matching problem is solvable in time O(n 2 log n).
Proof. An O(n² log n) time algorithm has been developed in [7] that, given a collection S of
subsets of {1, 2, …, n}, constructs a maximum partition matching of S. We refer our readers
to [7] (and [6]) for details.
Now we show how Theorem 4.1 can be used to find the maximum number of node-disjoint
shortest paths from a node u to the identity node " in star networks.
Lemma 4.2 Let u be the node in the n-star network S_n as
described in Assumption 2.1. Then the number of node-disjoint shortest paths from u to " cannot
be larger than …
Proof. According to Rule 2 of the Shortest Path Rules, only a path leaving u with a symbol
in the set {a₂} ∪ (∪_{i=2}^k c_i)
can be a shortest path from u to ".
Another upper bound on the number of node-disjoint shortest paths from u to " can be derived
in terms of the maximum partition matching of the collection of nontrivial cycles {c₂, …, c_k}
in u, where each nontrivial cycle is regarded as a set of symbols.
Lemma 4.3 Let u be the node in the n-star network S_n as
described in Assumption 2.1. Then the number of node-disjoint shortest paths from u to " cannot
be larger than 2 plus the number of partitions in a maximum partition matching in the collection
S = {c₂, …, c_k}, where the cycles c_i are regarded as sets of symbols.
Proof. Let P₁, …, P_s be node-disjoint shortest paths from u to ". For each path P_i, let u_i
be the first node on P_i such that in u_i the primary cycle is a trivial cycle. The node u_i is obtained
by repeatedly applying Rule 2 of the Shortest Path Rules, starting from the node u. It is easy
to prove, by induction, that in any node v on the subpath from u to u_i on the path P_i, the only
possible nontrivial cycle that is not in {c₂, …, c_k} is the primary cycle. In particular, the node u_i
must have a cycle structure of the form
where the leading cycles are nontrivial cycles, the e' are trivial cycles, and B_i
is a subcollection of the collection {c₂, …, c_k}.
Assume that the path P_i leaves u with the symbol b_i. By Rule 2 in the Shortest Path Rules,
b_i is either a₂ or one of the symbols in ∪_{i=2}^k c_i; also by this rule, once b_i is contained in
the primary cycle in a node in the path P_i, it will stay in the primary cycle in the nodes along the
path P_i until it is deleted from the primary cycle, i.e., until b_i is contained in a trivial cycle. In
particular, the symbol b_i is not in the set ∪B_i.
Now suppose that the path P_i enters the node " with a symbol d_i. Thus, the node w_i preceding "
on the path P_i must have a cycle structure (d_i 1) if we discard trivial cycles in w_i. Since the symbol
d_i is in a nontrivial cycle in w_i, by Fact 2.2, d_i is also in a nontrivial cycle in the node u_i, that is,
d_i ∈ ∪B_i. The only exception is d_i = a_d (in this case u_i = w_i).
Now we let A_i = S − B_i. Then we can conclude that except for at most two paths P₁ and P₂,
each of the other paths P₃, …, P_s must leave the node u with a symbol b_i in A_i and
enter " with a symbol d_i in B_i. (The two exceptional paths P₁ and P₂ may leave u with the symbol
a₂ or enter " with the symbol a_d.)
Now since the s paths are node-disjoint, the symbols b₃, …, b_s are all pairwise
distinct, and the symbols d₃, …, d_s are also pairwise distinct. Moreover, since all the nodes u₃, …, u_s
are pairwise distinct, the collections B₃, …, B_s of cycles are also pairwise distinct. Consequently,
the partitions (A₃, B₃), …, (A_s, B_s) of the collection S, together with the symbol
pairs (b₃, d₃), …, (b_s, d_s), form a partition matching of the collection S.
This concludes that s cannot be larger than 2 plus the number of partitions in a maximum
partition matching of the collection S, which thus proves the lemma.
Now we show how we construct a maximum number of node-disjoint shortest paths from the
node u, as described in Assumption 2.1, to the identity node
" in the n-star network S_n. We first show how to route a single shortest path from u to ", given a
partition (A, B) of the collection S = {c₂, …, c_k}, and a pair of symbols b and d, where b is in a
cycle in A and d is in a cycle in B. We also allow b to be a₂ (in this case A = ∅ and d must be
in ∪S); similarly, we allow d to be a_d (in this case B = ∅ and b must be in
∪S). Consider the algorithm Single Routing given in Figure 1. Since the algorithm Single
Routing starts with the node u and applies only transpositions described in the Shortest Path
Rules, we conclude that the algorithm Single Routing constructs a shortest path from the node
u to the node ".
Now we are ready for describing the final algorithm. Consider the algorithm Maximum Shortest
Routing given in Figure 2.
Algorithm. Single Routing
input: A partition (A, B) of S = {c₂, …, c_k} and two symbols b and d, where b is in a cycle in A
and d is in a cycle in B. b can be a₂ with A = ∅, and d can be a_d with B = ∅.
output: A shortest path from u to " leaving u with b and entering " with d.
1. if b ≠ a₂,
apply -[b] to u to merge the cycle in A that contains b into the primary cycle c₁; then merge in
an arbitrary order the rest of the cycles in A into the primary cycle;
2. repeatedly delete symbols in the primary cycle until the primary cycle becomes a trivial cycle;
3. if d ≠ a_d,
suppose that the cycle c containing d in B is (d' d · · ·); apply -[d'] to merge c into the primary
cycle (1); then merge in an arbitrary order the rest of the cycles in B into the primary cycle;
4. repeatedly delete symbols in the primary cycle until we reach the node ";
Figure 1: The algorithm Single Routing
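For background on the moves used above: in the star graph an edge swaps the first symbol with one other position, and greedy routing attains the classical distance formula of Akers and Krishnamurthy. The sketch below is our illustration of that shortest-path machinery, not the Single Routing algorithm itself.

```python
import random

def star_route(p):
    """Greedily route permutation p (1-based values) to the identity using
    star-graph moves (swap p[0] with p[i]); returns the number of moves."""
    p, n, steps = list(p), len(p), 0
    while p != list(range(1, n + 1)):
        if p[0] != 1:
            home = p[0] - 1              # send the first symbol home
            p[0], p[home] = p[home], p[0]
        else:                            # first symbol is 1: enter some cycle
            i = next(j for j in range(1, n) if p[j] != j + 1)
            p[0], p[i] = p[i], p[0]
        steps += 1
    return steps

def star_distance(p):
    """Akers-Krishnamurthy: c + m, minus 2 if position 1 lies on a nontrivial
    cycle, where c = #symbols on nontrivial cycles, m = #nontrivial cycles."""
    n, seen, c, m, first_on_cycle = len(p), [False] * len(p), 0, 0, False
    for s in range(n):
        if seen[s] or p[s] == s + 1:
            continue
        m += 1
        j = s
        while not seen[j]:
            seen[j] = True
            c += 1
            first_on_cycle = first_on_cycle or j == 0
            j = p[j] - 1
    return c + m - 2 if first_on_cycle else c + m

random.seed(1)
for _ in range(200):
    p = random.sample(range(1, 8), 7)    # random node of the 7-star
    assert star_route(p) == star_distance(p)
```

The greedy rule mirrors Rule 2 of the Shortest Path Rules: a cycle containing position 1 costs its length minus one move, and every other nontrivial cycle costs its length plus one, which is exactly the distance formula checked above.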
Algorithm. Maximum Shortest Routing
input: The node u in the n-star network S_n, as described in Assumption 2.1.
output: A maximum number of node-disjoint shortest paths from u to ".
1. Construct a maximum partition matching M = [(b₁, d₁), …, (b_s, d_s)] in S = {c₂, …, c_k} with the
partitions (A₁, B₁), …, (A_s, B_s);
2. if … , construct s + 2 shortest paths as follows.
2.1. Call the algorithm Single Routing with the partition (∅, S) of S and the symbol pair (a₂, …);
2.2. Call the algorithm Single Routing with the partition (S, ∅) of S and the symbol pair (…, a_d);
2.3. For i = 1 to s, call the algorithm Single Routing with the partition (A_i, B_i) and
the symbol pair (b_i, d_i);
3. if s < … , construct the shortest paths as follows.
3.1. Let b₀ = …;
3.2. Call the algorithm Single Routing with the partition (∅, S) of S and the symbol pair (a₂, …);
3.3. Call the algorithm Single Routing with the partition (S, ∅) of S and the symbol pair (…, a_d);
3.4. For i = 1 to s, call the algorithm Single Routing with the partition (A_i, B_i) and
the symbol pair (b_i, d_i);
Figure 2: The algorithm Maximum Shortest Routing
Theorem 4.4 The algorithm Maximum Shortest Routing constructs a maximum number of
node-disjoint shortest paths from the node u to the identity node " in time O(n 2 log n).
Proof. From Lemma 4.2 and Lemma 4.3, we know that the number of shortest paths constructed
by the algorithm Maximum Shortest Routing matches the maximum number of node-disjoint
shortest paths from u to ". What remains is to show that all these paths are node-disjoint.
Suppose that the algorithm Maximum Shortest Routing constructs h shortest paths
P₁, …, P_h from node u to node ", where h depends on whether step 2 or step 3 applies, and
suppose that the path P_i is constructed by calling the algorithm Single Routing on the partition
(A_i, B_i) and the symbol pair (b_i, d_i); if A_i = ∅ then we have b_i = a₂, and if B_i = ∅
then we have d_i = a_d. Now fix an i and consider the path P_i, which is
constructed from the partition (A_i, B_i) and the symbol pair (b_i, d_i). Write
A_i = {c^{(i)}_2, …, c^{(i)}_l} and
B_i = {c^{(i)}_{l+1}, …, c^{(i)}_k}, where if A_i ≠ ∅ then the cycle c^{(i)}_2
is of form c^{(i)}_2 = (b_i · · ·), and if B_i ≠ ∅
then the cycle c^{(i)}_{l+1} is of form c^{(i)}_{l+1} = (· · · d_i · · ·).
Finally, recall that the primary cycle c₁ has the form (a₂ · · · a_d 1).
The interior nodes of the path P_i can be split into three segments I^{(i)}_1, I^{(i)}_2, and I^{(i)}_3. The first
segment I^{(i)}_1 corresponds to nodes constructed in step 1 of the algorithm Single Routing, which
first merges cycle c^{(i)}_2 into the primary cycle c₁, obtaining a cycle of form (b_i · · · 1), and then
merges cycles c^{(i)}_3, …, c^{(i)}_l into the primary cycle. Therefore, for all nodes in this segment, the
primary cycle is of the form (b_i · · · 1).
The second segment I^{(i)}_2 corresponds to the nodes constructed by step 2 of the algorithm Single
Routing, which deletes symbols in the primary cycle. All nodes in this segment are of the form
described above with an increasing number of trivial cycles.
The third segment I^{(i)}_3 corresponds to the nodes constructed by steps 3 and 4 of the algorithm
Single Routing, which first merges the cycle c^{(i)}_{l+1} into the primary cycle (1), obtaining a cycle of
form (· · · d_i 1), then merges the cycles c^{(i)}_{l+2}, …, c^{(i)}_k into the primary cycle, and then deletes symbols
in the primary cycle. Therefore, in all nodes in this segment, the primary cycle should have the
form (· · · d_i 1). In case A_i = ∅, b_i = a₂ and the segment I^{(i)}_1 is empty, and in case
B_i = ∅, d_i = a_d and the segment I^{(i)}_3 is empty.
We now show that any two shortest paths P_i and P_j, i ≠ j, constructed by the algorithm
Maximum Shortest Routing are node-disjoint. Let v be a node on the path P_i.
Suppose first that v is a node on the first segment I^{(i)}_1 of the path P_i.
The node v cannot be on the first segment I^{(j)}_1 of the path P_j, since all nodes on I^{(j)}_1 are of form
(b_j · · · 1) with b_j ≠ b_i. Moreover, the node v cannot be on the
second or the third segment of P_j, since the cycle structure of a node on the second or the third
segment of P_j has more trivial cycles (note that each execution of step 2 of the algorithm Single
Routing creates a new trivial cycle in the cycle structure).
If v is on the second segment I^{(i)}_2 of the path P_i, then v cannot be on the
second segment I^{(j)}_2 of P_j, since each node on I^{(j)}_2 is of the corresponding form for P_j.
The node v can also not be on the third segment I^{(j)}_3 of P_j, since each node on the segment I^{(j)}_3 has
a primary cycle of form (· · · d_j 1), while the primary cycle in the node v is either a trivial cycle
or of form (· · · a_d 1), where a_d is in c₁.
Finally, if v is on the third segment of the path P_i, then v cannot be on the third
segment of P_j because d_i ≠ d_j.
By symmetry, the above analysis shows that the two shortest paths P i and P j constructed by
the algorithm Maximum Shortest Routing must be node-disjoint.
The running time of the algorithm Maximum Shortest Routing is dominated by step 1
of the algorithm, which takes time O(n 2 log n) according to Theorem 4.1. Thus, the algorithm
Maximum Shortest Routing runs in time O(n 2 log n).
Construction of the maximum number of node-disjoint shortest paths between two nodes in star
networks was previously studied in [16], which presents an algorithm that runs in exponential time
in the worst case. More seriously, that algorithm seems to be based on an incorrect observation, which
claims that when there is more than one nontrivial cycle in a node u, the maximum number of
node-disjoint shortest paths from u to " is always an even number. Therefore, the algorithm in [16]
always produces an even number of node-disjoint shortest paths from u to ". A counterexample to
this observation has been constructed in [6].
5 Conclusion: an optimal parallel routing
Combining all the previous discussion in the present paper, we obtain an O(n 2 log n) time algorithm,
Optimal Parallel Routing as shown in Figure 3, that constructs node-disjoint paths of
bulk length Bdist(u) from any node u to the identity node " in the n-star network S n .
The correctness of the algorithm Optimal Parallel Routing has been proved by Lemma 3.1,
Lemma 3.2, Theorem 4.4, and the results in [10]. The running time of the algorithm is O(n 2 log n).
We would like to make a few remarks on the complexity of our algorithm. The bulk distance
problem on general graphs is NP-hard [13]. Thus, it is very unlikely that the bulk distance problem
can be solved in time polynomial in the size of the input graph. On the other hand, our algorithm
solves the bulk distance problem in time O(n 2 log n) on the n-star network. Note that the n-star
network has n! nodes. Therefore, the running time of our algorithm is actually a polynomial of the
logarithm of the size of the input star network. Moreover, our algorithm is almost optimal (differs
at most by a log n factor) since the following lower bound can be easily observed: the distance
dist(u) from u to " can be as large as \Theta(n). Thus, constructing n − 1 node-disjoint paths from u
to " takes time at least \Theta(n²) in the worst case.
--R
The star graph: an attractive alternative to the n-cube
A group-theoretic model for symmetric interconnection networks
A routing and broadcasting scheme on faulty star graphs
A Survey of Modern Algebra
Combinatorial and algebraic methods in star and de Bruijn networks
The maximum partition matching problem with applications
Optimal parallel routing in star networks
An improved one-to-many routing in star networks
A comparative study of topological properties of hypercubes and star graphs
Three disjoint path paradigms in star networks
Short length versions of Menger's theorem
Computers and Intractability: A Guide to the Theory of NP-Completeness
The cost of broadcasting on star graphs and k-ary hypercubes
Embedding of cycles and grids in star graphs
Characterization of node disjoint (parallel) path in star graphs
Near embeddings of hypercubes into Cayley graphs on the symmetric group
Routing function and deadlock avoidance in a star graph interconnection network
Embedding hamiltonians and hypercubes in star interconnection graphs
Packet routing and PRAM emulation on star graphs and leveled networks
On some properties and algorithms for the star and pancake interconnection networks
An optimal broadcasting algorithm without message redundancy in star graphs
Topological properties of star graphs
--TR
--CTR
Cheng-Nan Lai , Gen-Huey Chen , Dyi-Rong Duh, Constructing One-to-Many Disjoint Paths in Folded Hypercubes, IEEE Transactions on Computers, v.51 n.1, p.33-45, January 2002
Eunseuk Oh , Jianer Chen, On strong Menger-connectivity of star graphs, Discrete Applied Mathematics, v.129 n.2-3, p.499-511, 01 August
Chi-Chang Chen , Jianer Chen, Nearly Optimal One-to-Many Parallel Routing in Star Networks, IEEE Transactions on Parallel and Distributed Systems, v.8 n.12, p.1196-1202, December 1997
Adele A. Rescigno, Optimally Balanced Spanning Tree of the Star Network, IEEE Transactions on Computers, v.50 n.1, p.88-91, January 2001 | shortest path;parallel routing;star network;partition matching;network routing;graph container |
270650 | Analysis for Chorin's Original Fully Discrete Projection Method and Regularizations in Space and Time. | Over twenty-five years ago, Chorin proposed a computationally efficient method for computing viscous incompressible flow which has influenced the development of efficient modern methods and inspired much analytical work. Using asymptotic error analysis techniques, it is now possible to describe precisely the kind of errors that are generated in the discrete solutions from this method and the order at which they occur. While the expected convergence rate is seen for velocity, the pressure accuracy is degraded by two effects: a numerical boundary layer due to the projection step and a global error due to the alternating or parasitic modes present in the discretization of the incompressibility condition. The error analysis of the projection step follows the work of E and Liu and the analysis of the alternating modes is due to the author. The two are combined to show the asymptotic character of the errors in the scheme. Regularization methods in space and time for recovering full accuracy for the computed pressure are discussed. |
1 Introduction
In 1968, Chorin [3] proposed a computationally efficient method for computing viscous,
incompressible flow. The method was based on the primitive variables, velocity and
pressure, with all unknowns at the same grid points. The discretization was centered
in space (second order in space step h) and implicit in time (first order in time step k),
with the projection part of the Stokes operator split off for computational efficiency.
The discretization of the incompressibility condition allowed for alternating (parasitic or
checkerboard) modes. The idea of a projection step has been used in modern efficient
methods (e.g. [6]) (since the literature in this field is vast, the references here and in
what follows are only intended to be illustrative, not exhaustive). Many authors have
considered the analysis of the projection step [4, 15, 16, 9] and have proposed higher
order corrections [13, 18, 14]. The precise description of the errors from the projection
step in [9] will be used in this work. The effect of parasitic modes on the accuracy of the
computed pressure has also been an area of much interest, especially in the finite element
method [8] and spectral method [2] communities. It is well known that the presence of
parasitic modes can lead to a degradation in the convergence rate of the pressure. A
precise description of this effect for Chorin's discretization is given in [20].
What the work in [9, 20] does is to characterize the errors from the projection step and
the parasitic modes precisely for smooth problems with a smooth discretization in space
and time. These analyses will be combined in this paper to give a precise description of
the errors from Chorin's original fully discrete scheme. Although there are no significant
new difficulties that arise from the interaction of the discretizations in space and time,
the author believes it is worthwhile to give the full error analysis for this historically
important algorithm. A simplified presentation of the boundary layer effects from the
projection step is given.
We describe the numerical order-reducing effects on the pressure briefly below. If it
is assumed that k = O(h²), convergence of second order in h might be
expected. While this is true for the velocity, it is not true for the pressure. In fact, the
discrete pressure from Chorin's method has an O(k^{1/2}) numerical boundary layer due to
the projection step and an O(h) global error due to the alternating modes. In order to
recover full accuracy for the pressure, we also consider regularization methods in space
and time. Computational evidence for all predictions is given. The reader may wish to
take a short tour through pressure errors: from Chorin's original scheme, where the
O(h) alternating terms dominate, to a space regularized scheme, where only the
boundary layer is left, to a fully regularized scheme (Fig. 5), where the errors are
spatially smooth.
Some discussion should be made here about the real and artificial limitations of this
work. First of all, the analysis is presented for the two-dimensional (2D) Stokes equations
with homogeneous Dirichlet boundary conditions but can be extended in a straightforward
way to 3D with nonhomogeneous compatible boundary conditions and to smooth
solutions of the nonlinear Navier-Stokes equations. Secondly, the error expansions presented
assume a great number of compatibility conditions at t = 0. When these are not
satisfied, convergence is not uniform in some quantities up to t = 0 (see [12] for computational
results and formal analysis of the behaviour near t = 0).
Thirdly, the computational issue of how to efficiently implement the projection step is
not addressed in this work. We use a simplified geometry for our numerical tests in which
it is easy to implement an exact projection efficiently which allows us to obtain refined
solutions to verify the predicted error structure. Fourthly, the temporal regularization
discussed in the final section uses unsplit time integration. In the context of split-step
projection methods, it would have been more appropriate to present a pressure increment
scheme [18, 6] but the error analysis of these schemes is not well understood in the fully
discrete case [10]. Finally, because the discrete divergence and gradient operators are
not adjoint, a simple stability result based on energy estimates as used in [4, 9] is not
possible for Chorin's original method when boundaries are present. Thus, the stability
analysis of the method is still an open problem.
In the next section, Chorin's original method is described. Then, computational
results showing the boundary layer and alternating errors in the pressure are presented.
In Section 4 we present the error analysis for the method, describing the alternating and
boundary layer errors and at what order they occur. Finally, in Section 5 we present
analysis and computation of regularized methods.
2 Description of the Scheme
We consider fluid flow in a simplified domain: a two-dimensional (2D) [0, 1] × [0, 1]
channel with fixed walls on the top and bottom boundaries and periodic in the horizontal
direction. The incompressible Stokes equations are given below:

    u_t + ∇p = ν Δu,    (1)
    ∇ · u = 0,          (2)

where u = (u, v) are the velocities, p is the pressure and ν is the kinematic viscosity.
Boundary conditions u = 0 are used on the fixed walls. Initial data u₀ is given and it
is assumed that ∇ · u₀ = 0. We note that p can only be determined up to an arbitrary
constant. A unique p is recovered if we require

    ∫ p dx = 0.    (3)
It is well known that any square integrable vector function can be orthogonally decomposed
into a divergence-free part with homogeneous normal boundary conditions and a
part that can be represented as the gradient of a scalar (see e.g. [5]). In this framework,
we can interpret the pressure gradient ∇p as a term that projects the right hand side of
(1) onto the space of divergence free fields and summarize its action with the projection
operator P:

    u_t = P(ν Δu).    (4)
To describe the discrete scheme we approximate in space on a regular grid with spacing
h and in time with spacing k. It is assumed that 1/h is even for convenience. We use
U^n_{ij} and P^n_{ij} to denote approximations of u(ih, jh, nk) and p(ih, jh, nk) respectively. To
proceed, we need to define the approximate "projection", P_h, derived by Chorin [3]. We
use discrete divergence D_h and gradient operators G_h based on long, centered differences,
i.e.

    (D_h · W)_{ij} = (W_{1,i+1,j} − W_{1,i−1,j})/(2h) + (W_{2,i,j+1} − W_{2,i,j−1})/(2h),
    (G_h P)_{ij} = ((P_{i+1,j} − P_{i−1,j})/(2h), (P_{i,j+1} − P_{i,j−1})/(2h))

away from boundaries. Near the lower boundary (j = 1) we can use the fact that U_{i,0} = 0 to
derive a reduced divergence stencil there, (5).
On the boundary itself (j = 0), using second order one sided differencing gives the stencil (6).
Similar expressions apply on the upper wall. To divide an arbitrary vector W defined in
the interior of the domain into a gradient part G_h P and a discrete divergence free part
U (D_h · U = 0), we must solve

    D_h · G_h P = D_h · W    (7)

and then set

    U = W − G_h P.    (8)
We summarize this process as

    U = P_h W,    (9)

where P(W) denotes the scalar corresponding to the gradient part of the vector.
This projection approach is convenient because it does not require the specification of
any additional "pressure boundary conditions". Such conditions can be considered to be
implicitly given by (7). We note that P h is not a projection matrix since D h and G h are
not negative adjoint. Also, the matrix D_h · G_h has four null modes corresponding to the
four null modes of G h , constant vectors on the four subgrids shown in Figure 1. However,
(7) is solvable [1] up to the four null vectors of G h . The four arbitrary constants are
normalized using appropriate trapezoidal or midpoint approximations of (3). From the
structure of G h P it is easy to see that the errors on the four subgrids can be different,
leading to so-called alternating error expansions. The order that these effects enter the
velocity and pressure is described in detail below.
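The decoupling behind the alternating errors can be seen in one dimension already: the long centered difference annihilates the ±1 checkerboard mode while remaining second order accurate on smooth data. A small numpy illustration of ours (periodic grid for simplicity):

```python
import numpy as np

N = 16                         # periodic 1-D sample grid, N even
h = 1.0 / N
x = np.arange(N) * h

def grad_h(p):
    # long centered difference, the 1-D analogue of G_h (periodic wrap)
    return (np.roll(p, -1) - np.roll(p, 1)) / (2 * h)

checker = (-1.0) ** np.arange(N)    # alternating ("parasitic") mode
smooth = np.sin(2 * np.pi * x)

# G_h cannot see the checkerboard mode at all:
assert np.allclose(grad_h(checker), 0.0)
# while smooth modes are differentiated with O(h^2) accuracy:
err = np.max(np.abs(grad_h(smooth) - 2 * np.pi * np.cos(2 * np.pi * x)))
assert err < (2 * np.pi) ** 3 * h ** 2 / 6
```

In 2D the same cancellation happens independently in each direction, which is why constants on each of the four subgrids are invisible to G_h and hence to D_h · G_h.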
We now turn to a discretization in time. Chorin [3] proposed splitting the diffusion
step and the projection step in the following scheme:

    (I − kν Δ_h) W^{n+1} = U^n,           (10)
    U^{n+1} = P_h W^{n+1},                (11)
    k G_h P^{n+1} = (I − P_h) W^{n+1},    (12)

where Δ_h denotes the usual five point approximation of the Laplacian with Dirichlet
data. This scheme gives an uncoupled system for P^{n+1} and W^{n+1}, an auxiliary quantity
computed during the diffusion sub-step. The fact that the system is decoupled is the
advantage of using the split-step technique.
Table 1: Normalized pointwise pressure errors e_p and velocity errors e_u (and estimated convergence rates in h) from Chorin's scheme.
We note that in Chorin's original work an ADI technique was used to approximate
(10) and an iterative technique was used to approximate the projection step (11). Here
we analyze the underlying exact discretization for simplicity and because more modern
solution techniques are available that can efficiently solve these subproblems.
We consider computational results for this method below, showing the numerical
boundary layers from the projection step and the alternating errors from the parasitic
modes. The detailed analysis of these phenomena is then done in Section 4.
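To make the split-step structure concrete, the following sketch (our illustration, not the paper's code) performs one step of scheme (10)-(12) on a doubly periodic model problem, where the exact discrete projection diagonalizes under the FFT; the channel's one-sided wall stencils (5)-(6) are deliberately omitted.

```python
import numpy as np

def chorin_step(U, k, nu, h):
    """One split step: implicit diffusion (10), then projection (11)-(12),
    on a doubly periodic (2, N, N) velocity field. Returns (U_new, P_new)."""
    N = U.shape[1]
    xi = 2 * np.pi * np.fft.fftfreq(N, d=h)
    KX, KY = np.meshgrid(xi, xi, indexing="ij")
    lap = -4 / h**2 * (np.sin(KX * h / 2)**2 + np.sin(KY * h / 2)**2)  # 5-point Laplacian
    dx = 1j * np.sin(KX * h) / h   # symbols of the long centered differences D_h, G_h
    dy = 1j * np.sin(KY * h) / h
    # (10): (I - k*nu*Lap_h) W^{n+1} = U^n, solved mode by mode
    What0 = np.fft.fft2(U[0]) / (1 - k * nu * lap)
    What1 = np.fft.fft2(U[1]) / (1 - k * nu * lap)
    # (7): D_h.G_h P = D_h.W; the symbol dx^2 + dy^2 vanishes on the 4 parasitic modes
    sym = dx**2 + dy**2
    div = dx * What0 + dy * What1
    Phat = np.zeros_like(div)
    mask = np.abs(sym) > 1e-12
    Phat[mask] = div[mask] / sym[mask]       # null modes set to zero (normalization (3))
    # (11)-(12): U^{n+1} = W - G_h P, pressure = gradient part / k
    U_new = np.array([np.fft.ifft2(What0 - dx * Phat).real,
                      np.fft.ifft2(What1 - dy * Phat).real])
    P_new = np.fft.ifft2(Phat / k).real
    return U_new, P_new

# usage: the projected velocity is discretely divergence-free (long differences)
N, h, k, nu = 32, 1.0 / 32, 1e-3, 0.1
U1, P1 = chorin_step(np.random.default_rng(0).standard_normal((2, N, N)), k, nu, h)
div = ((np.roll(U1[0], -1, axis=0) - np.roll(U1[0], 1, axis=0))
       + (np.roll(U1[1], -1, axis=1) - np.roll(U1[1], 1, axis=1))) / (2 * h)
assert np.max(np.abs(div)) < 1e-8
```

In the periodic setting no boundary layer or alternating error can arise; both effects enter only through the wall stencils analyzed below.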
3 Computational Results
We demonstrate the types of errors discussed above with computational results for the
Stokes equations in the periodic channel. The initial data from [19] is used (a perturbation
of Poiseuille flow) with ν = 1/64. Errors are calculated by comparing the solutions
from Chorin's method with those from the Marker and Cell (MAC) scheme with high order
accurate explicit time stepping (the discrete pressures from this scheme have no alternating
or boundary layer effects [11]). Comparisons are made at a fixed time.
When h is relatively large and k relatively small, the pressure
errors are dominated by the O(h) alternating errors from the parasitic mode effects as
shown in Figure 2. Note that the error alternates in sign in the vertical direction only
and is not confined to a region near the boundary. If the computation had been done
in a box with vertical walls as well as horizontal walls, there would also be horizontally
alternating components of the error.
When h is relatively small and k relatively large, the pressure
errors are dominated by the boundary layer due to the projection step, with size and
width O(k^{1/2}). This is seen in the top picture of Figure 3. A contour plot of this same
data has jagged contour lines, showing the presence of (smaller) alternating terms. When
k is reduced to 0.01, the boundary layer is reduced in size and extent as shown in the
lower picture of Figure 3.
The error plots above verify the qualitative description of the errors. To show their
order-reducing effects on computed P we perform computations with several values
of h (and correspondingly refined k). In Table 1 we see that P converges with first order (in h) and that U
converges with second order (in h).
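Estimated convergence rates such as those reported in Table 1 come from comparing errors on successive grids: the observed order is log(e(h₁)/e(h₂))/log(h₁/h₂). A small sketch using synthetic error data (illustrative values only, not the paper's measurements):

```python
import numpy as np

def observed_rates(hs, errs):
    """Observed convergence order between successive grid refinements."""
    hs, errs = np.asarray(hs, float), np.asarray(errs, float)
    return np.log(errs[:-1] / errs[1:]) / np.log(hs[:-1] / hs[1:])

hs = [1 / 16, 1 / 32, 1 / 64, 1 / 128]
# synthetic errors mimicking the observed behaviour: pressure ~ C*h, velocity ~ C*h^2
e_p = [0.40 * h for h in hs]
e_u = [2.00 * h**2 for h in hs]
assert np.allclose(observed_rates(hs, e_p), 1.0)
assert np.allclose(observed_rates(hs, e_u), 2.0)
```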
4 Error Analysis
We develop an asymptotic error expansion for the pressure and computed velocity for
Chorin's original scheme (10)-(12) consisting of regular and alternating errors and numerical
boundary layer terms as described in [20]. We will use the asymptotic descriptions
of P h from [20] and present a simplified derivation of the errors from the split-step
time-stepping first given in [9].
It is convenient to first derive an error expansion for W, the intermediate velocity,
and then derive expansions for U and P from (11) and (12). The update equation for
W is

    (I − kν Δ_h) W^{n+1} = P_h W^n,    (13)

with boundary conditions W^{n+1} = 0 on the walls. We take k = h², a
convenient scaling for this analysis. We index the errors by powers of h so O(k) errors
are listed as O(h²) errors.
Part of the errors in W at grid point (ih, jh) and time level n can be described by numerical
boundary layers of the form

    (0, A₂(ih, nk) λ^j),    (14)

where A₂(x, t) is a smooth function that depends only on the exact solution u and λ depends
only on ν. These errors appear at O(h²) in the computed velocities W. Here 0 < λ < 1, so
that (14) has a width of a fixed number of grid points in space and so will shrink
as the computation is refined (the width in physical space is thus O(h) = O(k^{1/2})). A similar boundary layer
will appear at the upper boundary. From now on, we will consider only the bottom
boundary explicitly. It will be shown below (in Lemma 1) that the projection of such a
boundary layer (14) is zero at O(1). This allows us to determine λ. The boundary layer
should satisfy the discrete equations (13) exactly to highest order. Inserting (14) into
(13), using Lemma 1 and collecting terms of O(1) (so the differences in the x direction
can be neglected) we obtain

    λ^j − ν(λ^{j+1} − 2λ^j + λ^{j−1}) = 0,

which reduces to a quadratic equation for λ:

    νλ² − (1 + 2ν)λ + ν = 0.

This equation has two real positive roots for every ν > 0 that occur in reciprocal pairs.
The root with magnitude less than one we denote by λ (the other root describes the
boundary layer at the upper wall). The boundary layer (14) does not satisfy the boundary
conditions for Δ_h.
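Assuming the quadratic takes the form νλ² − (1 + 2ν)λ + ν = 0 (our reconstruction, consistent with the scaling k = h² so that kν/h² = ν; the original display is garbled), its stated root properties are easy to check numerically:

```python
import numpy as np

def layer_roots(nu):
    """Both roots of nu*lam^2 - (1 + 2*nu)*lam + nu = 0 (assumed form)."""
    r = np.sort(np.roots([nu, -(1.0 + 2.0 * nu), nu]).real)
    return r[0], r[1]

for nu in [0.01, 0.1, 1.0, 10.0]:
    lam, lam_big = layer_roots(nu)
    assert 0 < lam < 1 < lam_big           # real, positive, straddling 1
    assert abs(lam * lam_big - 1.0) < 1e-9 # reciprocal pair (product of roots = 1)
```

The discriminant (1 + 2ν)² − 4ν² = 1 + 4ν is positive for every ν > 0, and the product of the roots equals 1, which is exactly the "two real positive roots in reciprocal pairs" behaviour described in the text.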
W from the global error terms. The details of this are seen below. We note that numerical
boundary layers are normally associated with finite difference methods with wide stencils
that require additional, artificial boundary values to be specified. This is not the case in
(13). The boundary layer that arises in the projection method can be shown formally to
arise from a singular perturbation of the underlying pressure equations with a mismatch
in boundary conditions [9]. We now show the action of the discrete projection on the
boundary layer (14).
Lemma 1 The projection P_h of the boundary layer (14) has values at grid points that tend asymptotically to
the expansion (16), where C₂(x), a^{(2)}(x, y) and ā^{(2)}(x, y) are smooth functions determined by A₂.
These functions are also smooth.
We discuss the notation and meaning of this lemma before turning to the proof.
In general, the superscript in brackets denotes the order a term appears and a subscript
denotes the component for a vector quantity. Vectors appear in bold. The term described
by a^{(2)} is a smooth, global regular error term and the term described by ā^{(2)} is an
alternating error term caused by the decoupled stencil for the pressure. Alternating
errors dominate in the pressure errors in Figure 2. What the expansions show is what
will be computed when the discrete projection operator acts on a boundary layer to
high accuracy (a weak but sufficient stability result for the projection step alone can be
derived). Later, we will write a compact shorthand for (16). In the following lemmas, we present expansions for P_h acting on
boundary layers in the horizontal component and regular and alternating terms. We can
then derive an error expansion for W (as noted in the Introduction, a stability argument
for the scheme is still missing so convergence cannot be shown). Here, the terms in the
error expansion will show the order that the various types of errors appear. Expansions
for U and P follow easily. Here and in what follows we retain only the highest order
terms of each type except when necessary to explain some more subtle point. We return
now to the proof of Lemma 1.
Proof of Lemma 1. We denote by Q the scalar computed in (7) for the boundary layer input. To satisfy the interior
equations (7) (for Q, not P), conditions (17) on the boundary layer coefficients
apply, in which centered differencing of a boundary layer acts like
multiplication by λh^{−1} and the primes denote differentiation in x. These equations
determine the C functions in terms of A₂.
The interior equations for the global terms from (7) are

    Δq^{(2)} = 0,    (19)
    Δq̄^{(2)} = 0,   (20)

since there is no global source term. To determine q^{(2)} and q̄^{(2)} we derive Neumann
boundary conditions for them. In [20] it was shown that the effect of the reduced
stencils near the boundary (5), (6) was equivalent to two discrete
boundary conditions for Q, (22) and (23), stated in terms of an operator B̃ built from centered differencing
in the x and y directions. These can be interpreted as pressure Neumann
boundary conditions. Using (17) these relationships are both satisfied at O(1). The
action of B̃ on a boundary layer is like multiplication by a λ-dependent constant; on
smooth terms B̃ approximates the normal derivative to third order and, for alternating terms, B̃ approximates −f̄_y at f̄_{i,0} to
first order. We note that D_y f̄ actually approximates −f̄_y since centered differencing
uses adjacent grid points of opposite sign. Putting this together we find that
at second order, (22) and (23) give Neumann relationships at y = 0 for q^{(2)}_y and for q̄^{(2)}_y.
These give solvable Neumann data for (19) and (20) and so determine q^{(2)} and q̄^{(2)}.
All the listed terms in the expansion for Q in (16) have been determined. Further
terms in the expansion can be determined similarly.
Having determined Q, we can now determine P^h W. At O(1) the boundary layers cancel
(recall that centered differencing in y of a boundary layer is like multiplication by
-h^{-1}, and note (17)). We then obtain the expansion stated in the lemma, with the q^(2)
contribution and a correction term due to the differencing effect noted above. □
We have shown Lemma 1 in some detail so the reader can see the idea of the technical
arguments. However, the important features of Lemma 1 are that a vertical boundary
layer is removed (to highest order) by the action of P^h and that the remaining boundary
layer is smaller by a factor of h. Later, we will see that there is a boundary layer of
size O(h^2) in W. This leads to a boundary layer of size O(h^3) in kP^{n+1},
and so a boundary layer of O(h) in P^{n+1}.
By taking P^h of (15) and bringing all the boundary layers to the left-hand side, we
obtain the following corollary. The fact that a^(2) and ā^(2) in Lemma 1 are pure gradients
is used with Lemmas 3 and 5 to show that the global errors are suppressed to fourth
order, although this is not important.
Corollary 2
What we have created is a "pure gradient" boundary layer that has given normal boundary
data at highest order. This is implicitly done in the spatially continuous analysis in
[9].
Lemmas describing the action of P^h on regular and alternating terms (denoted by a and ā,
respectively) are stated below. Proofs of Lemmas 3 and 5 can be found in [20].
Lemma 3 When a is a smooth function, P^h a has the following error expansion:

    P^h a = Pa + h a^(1) + h^2 a^(2) + h^3 (a^(3) + ā^(3)) + · · ·

when a is compatible. For incompatible a, the error terms of both types will appear at
first order.
A compatible function a is one for which the tangential component of Pa also vanishes
on the boundary. A solution u of the Stokes equations and Δu are compatible, as are
pure gradient fields. We need a small refinement of this lemma for the error analysis
below. We note that on an alternating term, D^h · ā approximates ∇ · ā. The modified
projection P̄ describes the projection onto divergence-* free fields with zero normal
boundary values, which is orthogonal to gradient-* fields. Details are given in [20].
Corollary 4 If a is divergence free, then a^(1) and a^(2) are pure gradients. If a is
divergence free and compatible, then a^(3) is a pure gradient and ā^(3) is a pure
gradient-* field.
Proof: We refer the reader to [20] for the details of the proof of Lemma 3 needed to make
this rigorous, but the idea is simple. We use the discrete analogue of Pa = a - ∇q,
namely P^h a = a - G^h q. The error a^(2) comes from two sources: ∇q^(2) (a pure gradient)
and the second order errors from computing G^h q instead of ∇q. When a is divergence
free, q is constant, so the second source of error is not present and a^(2) is a pure
gradient. Similar reasoning applies to the other statements. □
Lemma 5 The discrete projection acting on an alternating term gives the following error
expansion:

    P^h ā = h ā^(1) + O(h^2).

The discrete projection acting on an alternating gradient-* field has an expansion
beginning at second order.
We are now in a position to state and prove the main error expansion result for the
intermediate computed velocities W:
Theorem 6 The intermediate velocities have an error expansion of the form (27) below,
where u is the exact solution of the Stokes equations. That is, regular errors and
vertical boundary layers begin at second order, and alternating errors and horizontal
boundary layers begin at third order.
Proof: For notational simplicity we assume k = h^2. The divergence-free
and gradient parts of the regular errors are determined at different levels in
the discrete equations (13), so we divide the error terms explicitly into gradient
components and divergence-free components u^(p)_d. We similarly divide the alternating
terms into divergence-* free and gradient-* fields. We insert the expansion (27) into the
discrete equations (13), expanding Δ_h in a Taylor series as well as W^n about the time
level n + 1. Collecting the regular interior terms order by order yields equations
(28)-(33). The terms a[·] represent error terms from the discrete projection operator,
with square brackets used to denote their source. The interior equations constrain the
projection error terms; using Corollary 4, the third order term A^(3) is a pure gradient.
To determine the equations for the alternating terms, we use the fact that the discrete
Laplacian amplifies alternating terms: Δ_h applied to a checkerboard-modulated term
multiplies it, to leading order, by -8/h^2 (see [20]). The third order alternating terms
in (13) give the static relations (34)-(35). Boundary conditions are written in (36) for
the normal component and in (37) for the tangential component.
We will now discuss all of the terms above in detail. Equations (28), (29) and
boundary conditions (36) show that u is indeed the solution of the Stokes equations
we seek so W is a consistent approximation. Equation (30) then determines u (2)
to be
a (2) [u] +rp (38)
is the exact pressure gradient for the Stokes equations. We
know a (2) [u] is a pure gradient from Corollary 4. Once u (2)
is known, A (2)
2 can be
determined from (37) and tangential boundary conditions for u (2)
d
are known and
can be used with the equations (32) to determine u (2)
d
. We note that u
is given in (31). Continuing to ignore the alternating terms for the moment,
the pattern to determine the regular and boundary layer terms is the following:
1. If u^(p)_d is known, u^(p+2) can be determined from the O(h^{p+2}) g expansion (i.e.
u^(2) is determined from (33)).
2. u^(p+2) determines the vertical boundary layer at order p and the tangential
boundary conditions for u^(p+2)_d and (through the effect of the tangential
boundary layers) u^(p+3)_d.
3. u^(p+2)_d can now be determined from the O(h^{p+4}) d expansion (i.e. u^(2)_d is
determined from (32)).
In the discrete setting, an important technical detail is that the a^(2) terms contribute
no divergence-free part: for example, in the second line of (32) there is no term
a^(2)_d[u^(2)_d]. This is guaranteed by Corollary 4. This separated determination of the
gradient and divergence-free components of the error expansion for the space continuous
analysis is implicitly present in [9] but not clearly laid out. This technique easily
allows for the implicit handling of the convection terms, for instance, which is avoided
in [9].
We turn our attention now to the alternating errors. Equation (35) implies that ū^(3)_d
and ū^(3) are determined statically by the relations (39) and (40). In fact, ā^(3)_d
vanishes and ū^(3) is a pure gradient-* field, so we can use Lemma 5 to justify the
missing error terms from P^h ū^(3) in (32) and (33). Higher order alternating error terms
are determined statically, like (39) and (40), from alternating errors from the
projection of lower order terms. An alternating divergence-* error appears at fifth
order. □
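The amplification of alternating terms by the discrete Laplacian, used in the proof above, can be checked numerically: for the standard 5-point Laplacian and a checkerboard-modulated smooth amplitude, the leading factor is -8/h^2. The sketch below is our own illustration (the grid, test function, and tolerance are not from the paper):

```python
import numpy as np

def discrete_laplacian(v, h):
    """5-point Laplacian evaluated at the interior points of a 2D grid."""
    return (v[2:, 1:-1] + v[:-2, 1:-1] + v[1:-1, 2:] + v[1:-1, :-2]
            - 4.0 * v[1:-1, 1:-1]) / h**2

n = 64
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
X, Y = np.meshgrid(x, x, indexing="ij")
i, j = np.meshgrid(np.arange(n + 1), np.arange(n + 1), indexing="ij")

smooth = np.sin(np.pi * X) * np.sin(np.pi * Y)    # smooth amplitude v
alternating = (-1.0) ** (i + j) * smooth          # checkerboard-modulated term

lap = discrete_laplacian(alternating, h)
leading = -8.0 / h**2 * alternating[1:-1, 1:-1]   # predicted leading-order behavior

# The remainder is O(1) (it is the smooth Laplacian of v), versus the
# O(h^-2) leading term, so the relative discrepancy is O(h^2).
rel = np.linalg.norm(lap - leading) / np.linalg.norm(leading)
print(rel)  # about (pi^2 / 4) * h^2
```

The O(h^-2) amplification is exactly why third order alternating terms in W produce first order alternating errors in the pressure.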
We now turn to the expansions for U and P.
Theorem 7 The computed velocity U = P^h W has an error expansion with regular errors at
second order, alternating errors at third order, and no boundary layers. The computed
pressure has alternating errors and boundary layers at first order and regular errors at
second order.
Proof: We take P^h of (27). The boundary layers are removed (they were so constructed),
and the resulting expansion (41) for U = P^h W has regular errors at second order and
alternating errors at third order. This verifies the first part of the theorem. Now,
asymptotically, k G^h P^{n+1} = W - U is given by (27) minus (41), which results in an
expansion containing the k∇p term, the third order boundary layer and alternating
contributions, and a nonzero fourth order regular error a^(4). When (38), (31) and (40)
are used, this becomes an explicit expansion for k G^h P^{n+1}. The corresponding scalar
k P^{n+1} has an expansion whose boundary layers are one order higher (recall how boundary
layers scale from Lemma 1). Since we have chosen the convenient scaling k = h^2 for the
analysis, we see that P^{n+1} has alternating errors and boundary layers at first order
and regular errors at second order. This verifies the second claim of the theorem. □
5 Regularizations
The alternating errors were generated by the uncoupled stencil for D^h and G^h. Following
[17], we can use higher order regularizing terms (with corrections at the boundary) in D^h
and G^h to eliminate these alternating errors. A projection operator based on this idea
is described in [20]. We consider the scheme (10)-(12) with this regularized projection:
Theorem 8 The computed velocities W from the spatially regularized scheme
have an error expansion with regular errors at second order, numerical boundary layers
at third order, and no alternating errors. The computed pressure has boundary layers at
first order, regular errors at second order, and no alternating errors.
Here, numerical boundary layers that occur due to the wide stencil of D^h · G^h do enter
the computed velocities U. This theorem can be proven using the asymptotic error
description of the regularized projection in [20], following the general framework of the
proof of Theorem 6. We omit the technical details. The presence of the dominant
boundary layer error in the computed pressure for this scheme and the suppression of the
alternating errors can be seen in Fig. 4 (compare to Fig. 2 for Chorin's original scheme
with the same h and k).
Using the regularized D^h and G^h as discussed above, we can further eliminate the
dominant boundary layer errors in the pressure by using the non-split-step scheme
(42)-(43). As shown in the theorem below, this scheme suppresses the numerical boundary
layers from the projection step.
Table 2: Normalized pointwise pressure errors e_p (and estimated convergence rates in h)
from the fully regularized scheme.
Theorem 9 The computed velocities from the scheme (42),(43) with spatially regularized
D h and G h have an error expansion with regular errors at second order, numerical
boundary layers at fourth order and no alternating errors. The computed pressure has
regular errors at second order, numerical boundary layers at third order and no alternating
errors.
Again, we omit the technical details. We note that the scheme (42),(43) requires the
solution of a coupled system for U^{n+1} and P^{n+1}. As mentioned in the introduction, it
would be more computationally efficient to use a pressure increment scheme [18, 6] to
suppress the numerical boundary layers, but the analysis of this approach is not fully
understood in the discrete setting.
Second order convergence for the pressure from the fully regularized scheme is shown
in Table 2, using the same parameters as the convergence study from Section 3 (compare
Table 1). The smooth nature of the errors in the computed pressure is shown in Fig. 5.
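Convergence rates of the kind reported in Tables 1 and 2 are commonly estimated from errors on successive grids: if e(h) ≈ C h^p, then p ≈ log(e(h)/e(h/2)) / log 2. A minimal sketch of this estimator (our own illustration; the data below are synthetic, not the paper's):

```python
import math

def convergence_rates(hs, errors):
    """Estimate p in e(h) ~ C h^p from errors on successively refined grids."""
    rates = []
    for k in range(1, len(errors)):
        rates.append(math.log(errors[k - 1] / errors[k])
                     / math.log(hs[k - 1] / hs[k]))
    return rates

# Synthetic second-order data: e(h) = 3 h^2 on a sequence of halved grids.
hs = [1 / 8, 1 / 16, 1 / 32, 1 / 64]
errs = [3 * h**2 for h in hs]
print(convergence_rates(hs, errs))  # [2.0, 2.0, 2.0]
```

Rates computed this way from real data hover near the asymptotic order once h is small enough for the leading error term to dominate.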
6 Summary
We have presented an error analysis for Chorin's original fully discrete method for
computing solutions of the incompressible Navier-Stokes equations. The velocities from
this scheme converge with full order O(k) + O(h^2). The computed pressures have O(h)
global alternating errors due to the uncoupled approximation used for the
incompressibility condition, and O(h) numerical boundary layers due to the split-step
projection step. These errors can be removed by using a regularized stencil to
approximate the incompressibility condition and a non-split-step time integration
procedure.
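The "uncoupled approximation" referred to here can be seen concretely: the centered difference gradient and divergence stencils skip the nearest neighbor, so the checkerboard pressure mode lies in their null space and cannot be suppressed by the projection — this is the source of the alternating errors. A minimal periodic-grid illustration (our own, not code from the paper):

```python
import numpy as np

def centered_grad(p, h):
    """Centered-difference gradient on a periodic grid (uncoupled stencil)."""
    px = (np.roll(p, -1, axis=0) - np.roll(p, 1, axis=0)) / (2 * h)
    py = (np.roll(p, -1, axis=1) - np.roll(p, 1, axis=1)) / (2 * h)
    return px, py

n = 16
h = 1.0 / n
i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
checkerboard = (-1.0) ** (i + j)  # the parasitic pressure mode

px, py = centered_grad(checkerboard, h)
# The checkerboard mode is invisible to the centered gradient:
print(np.max(np.abs(px)), np.max(np.abs(py)))  # 0.0 0.0
```

Since G^h annihilates this mode, D^h G^h has a nontrivial null space beyond constants, and regularizing the stencils (as in Section 5) is what removes it.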
References
[1] "Derivation and solution of the discrete pressure equations for the incompressible Navier-Stokes equations."
[2] Spectral Methods in Fluid Dynamics (Section 11.3).
[3] "Numerical solution of the Navier-Stokes equations."
[4] "On the convergence of discrete approximations to the Navier-Stokes equations."
[5] A Mathematical Introduction to Fluid Mechanics.
[6] "A second-order projection method for the incompressible Navier-Stokes equations."
[7] "An efficient second-order projection method for viscous incompressible flow."
[8] Finite Element Methods for Navier-Stokes Equations.
[9] "Projection Method I: Convergence and Numerical Boundary Layers."
[10] "Projection Method II: Rigorous Godunov-Ryabenkii Analysis."
[11] "Second Order Convergence of a Projection Scheme for the Incompressible Navier-Stokes Equations with Boundaries."
[12] "Discrete Compatibility in Finite Difference Methods for Viscous Incompressible Flow."
[13] "Application of a fractional-step method to incompressible Navier-Stokes equations."
[14] "Boundary conditions for incompressible flows."
[15] "On Chorin's projection method for the incompressible Navier-Stokes equations."
[16] "On error estimates of projection methods for Navier-Stokes equations: first order schemes."
[17] "High-Order Accurate Schemes for Incompressible Viscous Flow."
[18] "A second-order accurate pressure-correction scheme for viscous incompressible flow."
[19] "Finite Difference Vorticity Methods."
[20] "Analysis of the spatial error for a class of finite difference methods for viscous incompressible flow."
Keywords: parasitic modes; numerical boundary layers; projection methods
First-Order System Least Squares for the Stokes Equations, with Application to Linear Elasticity

Abstract. Following our earlier work on general second-order scalar equations, here we develop a least-squares functional for the two- and three-dimensional Stokes equations, generalized slightly by allowing a pressure term in the continuity equation. By introducing a velocity flux variable and associated curl and trace equations, we are able to establish ellipticity in an H^1 product norm appropriately weighted by the Reynolds number. This immediately yields optimal discretization error estimates for finite element spaces in this norm and optimal algebraic convergence estimates for multiplicative and additive multigrid methods applied to the resulting discrete systems. Both estimates are naturally uniform in the Reynolds number. Moreover, our pressure-perturbed form of the generalized Stokes equations allows us to develop an analogous result for the Dirichlet problem for linear elasticity, where we obtain the more substantive result that the estimates are uniform in the Poisson ratio.

1. Introduction. In earlier work [10], [11], we developed least-squares functionals for
a first-order system formulation of general second-order elliptic scalar partial differential
equations. The functional developed in [11] was shown to be elliptic in the sense that its
homogeneous form applied to the dependent variables (pressure and velocities) is
equivalent to the H^1 product norm. This means that the individual variables in the
functional are essentially decoupled (more precisely, their interactions are essentially
subdominant). This important property ensures that standard finite element methods are of
H^1-optimal accuracy in each variable, and that multiplicative and additive multigrid
methods applied to the resulting discrete equations are optimally convergent.
The purpose of this paper is to extend this methodology to the Stokes equations in two
and three dimensions. To this end, we begin by reformulating the Stokes equations as a
first-order system derived in terms of an additional vector variable, the velocity flux, defined
as the vector of gradients of the Stokes velocities. We first apply a least-squares principle
to this system using L 2 and H \Gamma1 norms weighted appropriately by the Reynolds number,
Re. We then show that the resulting functional is elliptic in a product norm involving Re
and the L 2 and H 1 norms. While of theoretical interest in its own right, we use this result
here primarily as a vehicle for establishing that a modified form of this functional is fully
elliptic in an H 1 product norm scaled by Re.
This appears to be the first general theory of this kind for the Stokes equations in
general dimensions with velocity boundary conditions. Bochev and Gunzburger [6] developed
least-squares functionals for Stokes equations in norms that include stronger Sobolev
terms and mesh weighting, but none are product H 1 elliptic. Chang [13] also used velocity
derivative variables to derive a product H 1 elliptic functional for Stokes equations, but it is
inherently limited to two dimensions. For general dimensions, a vorticity-velocity-pressure
* Center for Applied Mathematical Sciences, Department of Mathematics, University of Southern California, 1042 W. 36th Place, DRB 155, Los Angeles, CA 90089-1113. email: zcai@math.usc.edu
† Program in Applied Mathematics, Campus Box 526, University of Colorado at Boulder, Boulder, CO 80309-0526. email: tmanteuf@boulder.colorado.edu and stevem@boulder.colorado.edu. This work was sponsored by the Air Force Office of Scientific Research under grant number AFOSR-91-0156, the National Science Foundation under grant number DMS-8704169, and the Department of Energy under grant number DE-FG03-93ER25165.
form (cf.[4] and [20]) proved to be product H 1 elliptic, but only for certain nonstandard
boundary conditions. For the more practical (cf. [17], [22], and [25]) velocity boundary
conditions treated here, the velocity-vorticity-pressure formulation examined by Chang [14]
can be shown by counterexample [3] not to be equivalent to any H 1 product norm, even
with the added boundary condition on the normal component of vorticity. Moreover, this
formulation admits no apparent additional equation, such as the curl and trace constraints
introduced below for our formulation, that would enable such an equivalence. The velocity-
pressure-stress formulation described in [7] has the same shortcomings. (If the vorticity
and deformation stress variables are important, then they can be easily and accurately
reconstructed from the velocity-flux variables introduced in our formulation.)
While our least-squares form requires several new dependent variables, we believe that
the added cost is more than offset by the strengthened accuracy of the discretization and the
speed that the attendant multigrid solution process attains. Moreover, while our modified
functional requires strong regularity conditions, this is to be expected for obtaining full
product H 1 ellipticity in all variables, including velocity fluxes. (We thus obtain optimal
estimates for the derivatives of velocity.) In any case, strengthened regularity is not
necessary for the first functional we introduce.
Our modified Stokes functional is obtained essentially by augmenting the first-order
system with a curl constraint and a scalar (trace) equation involving certain derivatives of
the velocity flux variable, then appealing to a simple L 2 least-squares principle. As in [11] for
the scalar case, the important H 1 ellipticity property that we establish guarantees optimal
finite element accuracy and multigrid convergence rates applied to this Stokes least-squares
functional that are uniform in Re.
One of the more compelling benefits of least squares is the freedom to incorporate
additional equations and impose additional boundary conditions as long as the system is
consistent. In fact, many problems are perhaps best treated with overdetermined (but
consistent) first-order systems, as we have here for Stokes. We therefore abandon the
so-called ADN theory (cf. [1], [2]), which is restricted to square systems, in favor of
more direct tools of analysis.
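The appeal of overdetermined but consistent systems can be seen already at the linear-algebra level: adding redundant consistent equations leaves the solution unchanged, and the least-squares solution recovers it exactly with zero residual. A small illustration (the matrix and data are purely made up for the sketch):

```python
import numpy as np

# A square consistent system A x = b ...
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
x_true = np.array([1.0, -2.0])
b = A @ x_true

# ... augmented with a redundant consistent equation (the sum of the rows).
A_aug = np.vstack([A, A.sum(axis=0)])
b_aug = np.append(b, b.sum())

# The overdetermined system is still consistent, and the least-squares
# solution coincides with the solution of the original square system.
x_ls, residual, rank, _ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
print(x_ls)  # approximately [1., -2.]
```

For a discretized first-order system, the same principle lets one impose extra equations (curl and trace constraints, extra boundary conditions) without changing the solution, provided consistency holds.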
An important aspect of our general formulation is that it applies equally well to the
Dirichlet problem for linear elasticity. This is done by posing the Stokes equations in a
slightly generalized form that includes a pressure term in the continuity equation. Our
development and results then automatically apply to linear elasticity. Most important, our
optimal discretization and solver estimates are uniform in the Lamé constants.
We emphasize that the discretization and algebraic convergence properties for the generalized
Stokes equations are automatic consequences of the H 1 product norm ellipticity
established here and the finite element and multigrid theories established in Sections 3-5
of [11]. We are therefore content with an abbreviated paper that focuses on establishing
ellipticity, which we do in Section 3. Section 2 introduces the generalized Stokes equations,
the two relevant first-order systems and their functionals, and some preliminary theory.
Concluding remarks are made in Section 4.
2. The Stokes Problem, Its First-Order System Formulation, and Other
Preliminaries.
Let Ω be a bounded, open, connected domain in R^n (n = 2 or 3) with Lipschitz boundary
∂Ω. The pressure-perturbed form of the generalized stationary Stokes equations in
dimensionless variables may be written as

    -ν Δu + ∇p = f    in Ω,
    ∇·u + δ p = g     in Ω,      (2.1)

where the symbols Δ, ∇, and ∇· stand for the Laplacian, gradient, and divergence
operators; ν is the reciprocal of the Reynolds number Re; f is a given vector function; g
is a given scalar function; and δ is some nonnegative constant (δ = 0 for the Stokes
equations; δ > 0 arises in linear elasticity). Without loss of generality, we may assume
that

    ∫_Ω p dx = 0  and  ∫_Ω g dx = 0.      (2.2)

(For δ = 0, equation (2.1) can have a solution only when g satisfies (2.2), and we are
then free to ask that p satisfy (2.2). For δ > 0, in general we have only that
δ ∫_Ω p dx = ∫_Ω g dx, but this can be reduced to (2.2) simply by replacing p by
p - (δ|Ω|)^{-1} ∫_Ω g dx and g by g - |Ω|^{-1} ∫_Ω g dx in (2.1).)
We consider the (generalized) Stokes equations (2.1) together with the Dirichlet velocity
boundary condition

    u = 0    on ∂Ω.      (2.3)

The slightly generalized Stokes equations in (2.1) allow our results to apply to linear
elasticity. In particular, consider the Dirichlet problem

    -μ Δu - (λ + μ) ∇(∇·u) = f    in Ω,
    u = 0    on ∂Ω,

where u now represents displacements and λ and μ are the (positive) Lamé constants. By
Δu here we mean the n-vector of components Δu_i, that is, Δ applies to u componentwise.
This is recast in form (2.1)-(2.2) by introducing the pressure variable
p = -(λ + μ) ∇·u,¹ rescaling f, and by letting ν = μ and δ = 1/(λ + μ). (It is easy to
see that this p must satisfy (2.2).) An important consequence of the results we develop
below is that standard Rayleigh-Ritz discretization and multigrid solution methods can be
applied with optimal estimates that are uniform in h, λ, and μ. For example, we obtain
optimal uniform approximation of the gradients of displacements in the H^1 product norm.
This in turn implies analogous H^1 estimates for the stresses, which are easily obtained
from the "velocity fluxes". For related results with a different methodology and weaker
norm estimates, see [16].
Let curl ≡ ∇× denote the curl operator. (Here and henceforth, we use notation for
the case n = 3; we consider the special case n = 2 in the natural way: if u is two
dimensional, then the curl of u means the scalar function

    ∇×u = ∂u_2/∂x_1 - ∂u_1/∂x_2,

where u_1 and u_2 are the components of u.)

¹ Perhaps a more physical choice for this artificial pressure would have been a multiple
of ∇·u scaled so that it becomes the hydrostatic pressure in the incompressible limit. We
chose our particular scaling because it most easily conforms to (2.1). In any case, our
results apply to virtually any nonnegative scaling of p, with no effect on the equivalence
constants (provided the norms are correspondingly scaled); see Theorems 3.1 and 3.2.

The following identity is immediate:

    ∇×(∇×u) = -Δu + ∇(∇·u),

interpreted for n = 2 as

    ∇^⊥(∇×u) = -Δu + ∇(∇·u),

where ∇^⊥ is the formal adjoint of ∇× defined by ∇^⊥ φ = (∂φ/∂x_2, -∂φ/∂x_1)^t.
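This curl-curl identity holds for any smooth field and can be checked symbolically; a quick sympy verification on an arbitrary polynomial test field (our own illustration):

```python
import sympy as sp

x, y, z = sp.symbols("x y z")

def curl(v):
    """Curl of a 3D symbolic vector field."""
    return sp.Matrix([
        sp.diff(v[2], y) - sp.diff(v[1], z),
        sp.diff(v[0], z) - sp.diff(v[2], x),
        sp.diff(v[1], x) - sp.diff(v[0], y),
    ])

# An arbitrary smooth test field (purely illustrative).
u = sp.Matrix([x**2 * y + z**3, x * y * z, y**2 - x * z])

lap = sp.Matrix([sum(sp.diff(c, s, 2) for s in (x, y, z)) for c in u])
div = sum(sp.diff(u[k], s) for k, s in enumerate((x, y, z)))
grad_div = sp.Matrix([sp.diff(div, s) for s in (x, y, z)])

# curl(curl u) = -Laplacian(u) + grad(div u)
lhs = curl(curl(u))
rhs = -lap + grad_div
print(sp.simplify(lhs - rhs))  # Matrix([[0], [0], [0]])
```

The identity is what lets the curl and trace of the flux variable U stand in for second derivatives of u in the least-squares system below.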
We will be introducing a new independent variable U defined as the n^2-vector function
of gradients of the u_i. It will be convenient to view the original n-vector
functions as column vectors and the new n^2-vector functions as either block column
vectors or matrices. Thus, given u = (u_1, ..., u_n)^t, an operator G defined on scalar
functions (e.g., G = ∇) is extended to n-vectors componentwise:

    Gu = (Gu_1, ..., Gu_n)^t.

If U_i ≡ Gu_i is an n-vector function, then we write the matrix

    U = (U_{ij}),  1 ≤ i, j ≤ n,

whose ith row is U_i^t. We then define the trace operator tr according to

    tr U = Σ_{i=1}^n U_{ii}.

If D is an operator on n-vector functions (e.g., D = ∇·), its extension to matrices is
defined by applying D to each row. When each DU_i is a scalar function (e.g., D = ∇·),
then we will want to view the extension as a mapping to column vectors, so we will use
the convention DU = (DU_1, ..., DU_n)^t. We also extend the tangential operator n×
componentwise: n×U = (n×U_1, ..., n×U_n)^t. Finally, inner products and norms on the
matrix functions are defined in the natural componentwise way, e.g.,

    ||U||^2 = Σ_{i=1}^n ||U_i||^2.
Introducing the velocity flux variable U = ∇u (so that U_i = ∇u_i), the Stokes system
(2.1) and (2.3) may be recast as the following equivalent first-order system:

    U - ∇u = 0          in Ω,
    -ν ∇·U + ∇p = f     in Ω,      (2.6)
    ∇·u + δ p = g       in Ω,
    u = 0               on ∂Ω.

Note that the definition of U, the "continuity" condition ∇·u + δp = g in Ω, and the
Dirichlet condition u = 0 on ∂Ω imply the respective properties

    ∇×U = 0 in Ω,   tr U + δp = g in Ω,   n×U = 0 on ∂Ω.      (2.7)

Then an equivalent extended system for (2.6) is

    U - ∇u = 0                 in Ω,
    -ν ∇·U + ∇p = f            in Ω,
    ∇·u + δ p = g              in Ω,      (2.8)
    ∇×U = 0                    in Ω,
    ∇(tr U + δp) = ∇g          in Ω,
    n×U = 0                    on ∂Ω,
    u = 0                      on ∂Ω.
Let D(Ω) be the linear space of infinitely differentiable functions with compact support
on Ω, and let D'(Ω) denote the dual space of D(Ω). The duality pairing between D'(Ω)
and D(Ω) is denoted by <·,·>. We use the standard notation and definitions for the
Sobolev spaces H^s(Ω) and H^s(∂Ω), s ≥ 0; the standard associated inner products are
denoted by (·,·)_{s,Ω} and (·,·)_{s,∂Ω}, and their respective norms by ||·||_{s,Ω} and
||·||_{s,∂Ω}. (We suppress the superscript n because dependence of the vector norms on
dimension will be clear by context. We also omit Ω from the inner product and norm
designation when there is no risk of confusion.) For s = 0, H^s(Ω) coincides with
L^2(Ω). In this case, the norm and inner product will be denoted by ||·|| and (·,·),
respectively. As usual, H^s_0(Ω) is the closure of D(Ω) with respect to the norm ||·||_s,
and H^{-s}(Ω) is its dual with norm defined by

    ||φ||_{-s} = sup_{0 ≠ ψ ∈ H^s_0(Ω)} <φ, ψ> / ||ψ||_s.

Define the product spaces H^s_0(Ω)^n and H^{-s}(Ω)^n with standard product norms. Let

    H(div; Ω) = {v ∈ L^2(Ω)^n : ∇·v ∈ L^2(Ω)}

and

    H(curl; Ω) = {v ∈ L^2(Ω)^n : ∇×v ∈ L^2},

which are Hilbert spaces under the respective norms

    ||v||_{H(div;Ω)} = (||v||^2 + ||∇·v||^2)^{1/2}

and

    ||v||_{H(curl;Ω)} = (||v||^2 + ||∇×v||^2)^{1/2}.

Define the subspaces

    H_0(div; Ω) = {v ∈ H(div; Ω) : n·v = 0 on ∂Ω},
    H_0(curl; Ω) = {v ∈ H(curl; Ω) : τ·v = 0 on ∂Ω},

where n and τ denote the respective unit vectors normal and tangent to the boundary.
Finally, define

    L^2_0(Ω) = {p ∈ L^2(Ω) : ∫_Ω p dx = 0}.
It is well known that the (weak form of the) boundary value problem (2.1)-(2.2) has a
unique solution (u, p) ∈ H^1_0(Ω)^n × L^2_0(Ω) (e.g., see [21, 22, 17]). Moreover, if the
boundary of the domain Ω is C^{1,1} or a convex polyhedron, then the following
H^2-regularity result holds:

    ν ||u||_2 + ||p||_1 ≤ C ||f||.      (2.9)

(We use C, with or without subscripts, in this paper to denote a generic positive
constant, possibly different at different occurrences, that is independent of the Reynolds
number and other parameters introduced in this paper, but which may depend on the domain
Ω or the constant δ.) Bound (2.9) is established for the case δ = 0; the case of general g
and the case δ > 0 follow from the well-known linear elasticity bound ||u||_2 ≤ C ||f||,
where f is the (unscaled) source term and σ is the stress tensor. We will need (2.9) to
establish full H^1 product ellipticity of one of our reformulations of (2.1)-(2.2); see
Theorem 3.2.
The following lemma is an immediate consequence of a general functional analysis result
due to Nečas [24] (see also [17]).

Lemma 2.1. For any p in L^2_0(Ω), we have

    ||p|| ≤ C ||∇p||_{-1}.

Proof. See [24] for a general proof. □

A curl result analogous to Green's theorem for divergence follows from [17] (Theorem
2.11 in Chapter I):

    (∇×z, φ) - (z, ∇×φ) = ∫_{∂Ω} (n×z)·φ ds

for z ∈ H(curl; Ω) and φ ∈ H^1(Ω)^n.
Finally, we summarize results from [17] that we will need for G_2 in the next section. The
first inequality follows from Theorems 3.7-3.9 in [17], while the second inequality
follows from Lemmas 3.4 and 3.6 in [17].
Theorem 2.1. Assume that the domain Ω is a bounded convex polyhedron or has C^{1,1}
boundary. Then for any vector function v in either H_0(div; Ω) ∩ H(curl; Ω) or
H(div; Ω) ∩ H_0(curl; Ω), we have v ∈ H^1(Ω)^n and

    ||v||_1 ≤ C (||v|| + ||∇·v|| + ||∇×v||).

If, in addition, the domain is simply connected, then

    ||v||_1 ≤ C (||∇·v|| + ||∇×v||).
3. First-Order System Least Squares. In this section, we consider least-squares
functionals based on system (2.6) and its extension (2.8). Our primary objective here is
to establish ellipticity of these least-squares functionals in the appropriate Sobolev
spaces.

Our first least-squares functional, G_1(U, u, p; f, g), is defined in (3.1) in terms of
appropriately weighted norms of the residuals of system (2.6); note that the H^{-1} norm
is used for the first (momentum) residual. Our second functional, G_2(U, u, p; f, g), is
defined in (3.2) as a weighted sum of the L^2 norms of the residuals of the extended
system (2.8); in particular, it contains the terms ||∇×U||^2 and ||∇ tr U||^2 arising
from the added curl and trace equations.

Let V_1 and V_2 denote the respective admissible spaces: in both, u ∈ H^1_0(Ω)^n and
p ∈ L^2_0(Ω), while in V_2 the flux variable U is additionally required to satisfy
n×U = 0 on ∂Ω. Note that V_2 ⊂ V_1. For i = 1, 2, the first-order system least-squares
variational problem for the Stokes equations is to minimize the quadratic functional
G_i(U, u, p; f, g) over V_i: find (U, u, p) ∈ V_i such that

    G_i(U, u, p; f, g) = inf_{(V, v, q) ∈ V_i} G_i(V, v, q; f, g).
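The least-squares principle can be made concrete in the simplest setting. The sketch below is our own one-dimensional illustration (not the functional G_1 or G_2 above, which involve the flux matrix U in two or three dimensions): it solves u'' = f on (0,1) with u(0) = u(1) = 0 by writing it as the first-order system u' = σ, σ' = f, discretizing both residuals at cell midpoints, and solving the assembled linear system in the least-squares sense. The grid, stencils, and tolerance are assumptions of the illustration.

```python
import numpy as np

def fosls_1d(n, f):
    """Least-squares solve of u'' = f on (0,1), u(0) = u(1) = 0,
    written as the first-order system u' = sigma, sigma' = f."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    nu, ns = n - 1, n + 1          # interior u values, all sigma values
    u_col = lambda i: i - 1        # columns for u_1 .. u_{n-1}
    s_col = lambda i: nu + i       # columns for sigma_0 .. sigma_n
    rows, rhs = [], []
    # Residual 1, per cell: (u_{i+1} - u_i)/h - (sigma_i + sigma_{i+1})/2 = 0
    for i in range(n):
        r = np.zeros(nu + ns)
        if i >= 1:
            r[u_col(i)] = -1.0 / h
        if i + 1 <= n - 1:
            r[u_col(i + 1)] = 1.0 / h
        r[s_col(i)] = -0.5
        r[s_col(i + 1)] = -0.5
        rows.append(r)
        rhs.append(0.0)
    # Residual 2, per cell: (sigma_{i+1} - sigma_i)/h - f(midpoint) = 0
    for i in range(n):
        r = np.zeros(nu + ns)
        r[s_col(i)] = -1.0 / h
        r[s_col(i + 1)] = 1.0 / h
        rows.append(r)
        rhs.append(f(x[i] + h / 2))
    sol, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return x, np.concatenate([[0.0], sol[:nu], [0.0]])

f = lambda t: -np.pi**2 * np.sin(np.pi * t)  # exact solution u(t) = sin(pi t)
x, u = fosls_1d(64, f)
err = np.max(np.abs(u - np.sin(np.pi * x)))
print(err)  # second-order accurate: err = O(h^2)
```

Both residuals are centered at cell midpoints, so each is second-order accurate, and the least-squares solution inherits O(h^2) accuracy in u.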
Theorem 3.1. There exists a constant C independent of ν such that for any (U, u, p) ∈
V_1, the functional G_1 satisfies the lower bound (3.4) and the upper bound (3.5) with
respect to the corresponding ν-weighted product norm.

Proof. Upper bound (3.5) is straightforward from the triangle and Cauchy-Schwarz
inequalities. We proceed to show the validity of (3.4) for (U, u, p) with smooth
components; (3.4) then follows for (U, u, p) ∈ V_1 by continuity. Expressing ∇p in terms
of the first residual of (2.6) and applying Lemma 2.1 gives a bound (3.6) on ||p|| in
terms of the residuals and of ||U|| and ||∇u||. From (3.6) and the Poincaré-Friedrichs
inequality on u, we obtain the companion bound (3.7). Repeated use of the ε-inequality,
2ab ≤ ε a^2 + ε^{-1} b^2, on the cross terms, together with (3.6) and (3.7), yields
(3.8), a bound on ν^2 ||U||^2 + ν^2 ||∇u||^2. Using (3.8) in (3.6) and (3.7) then bounds
||p||^2 as well. The theorem now follows from these bounds, (3.8), and the
Poincaré-Friedrichs inequality on u. □
The next two lemmas will be useful in the proof of Theorem 3.2.

Lemma 3.1. (Poincaré-Friedrichs-type inequality) Suppose that the assumptions of
Theorem 2.1 hold. Let p ∈ H^1(Ω) ∩ L^2_0(Ω); then

    ||p|| ≤ C ||∇p||,

where C depends only on Ω. Further, let q ∈ (H^1(Ω) ∩ L^2_0(Ω))^n; then

    ||q|| ≤ C ||∇q||,

where C depends only on Ω.

Proof. The condition ∫_Ω p dx = 0 implies that p vanishes at some point in Ω. The first
result now follows from the standard Poincaré-Friedrichs inequality. The second result
follows from the fact that ∫_Ω q_i dx = 0 for each component q_i. □
Lemma 3.2. Under the assumptions of Theorem 2.1, with Ω simply connected, we have the
following bounds.
1. For n = 2: if φ ∈ H^2(Ω) with Δφ ∈ L^2(Ω) and φ = 0 on ∂Ω, then

    |∇^⊥ φ|_1 ≤ C ||Δφ||.      (3.11)

2. For n = 3: if Φ ∈ H^2(Ω)^3 componentwise with each ΔΦ_i ∈ L^2(Ω) and
n×(∇×Φ_i) = 0 on ∂Ω, then

    |∇×Φ|_1 ≤ C ||ΔΦ||.      (3.12)

Proof. The assumptions of Theorem 2.1 are sufficient to guarantee H^2-regularity of the
Laplace equation on Ω. Note that ∇·(∇^⊥ φ) = 0 and ∇×(∇^⊥ φ) = -Δφ. Then, from
Theorem 2.1 and the triangle inequality, we have

    |∇^⊥ φ|_1 ≤ C (||∇·(∇^⊥ φ)|| + ||∇×(∇^⊥ φ)||) ≤ C ||Δφ||,

which is (3.11). For n = 3, the same bounds applied to each column of ∇×Φ imply (3.12),
since each column of ∇×Φ is divergence free; (3.12) now follows from the triangle
inequality as in the case n = 2. □
Theorem 3.2. Assume that the domain Ω is a bounded convex polyhedron or has C^{1,1}
boundary and that regularity bound (2.9) holds. Then there exists a constant C
independent of ν such that for any (U, u, p) ∈ V_2, the functional G_2 satisfies the
two-sided bounds (3.13) and (3.14) with respect to the ν-weighted H^1 product norm.

Proof. Upper bound (3.14) is straightforward from the triangle and Cauchy-Schwarz
inequalities. To prove (3.13), note that the H^{-1} norm of a function is always bounded
by its L^2 norm. Hence, by Theorem 3.1, we obtain (3.15). From Theorem 2.1 and (3.9), we
have (3.16). It thus suffices to show (3.17), the bound on the full H^1 norm of U.

We will prove (3.17) only for the case n = 3, because the proof for n = 2 is similar.
First we assume that the domain Ω is simply connected. Since n×U = 0 on ∂Ω, the
following decomposition is admitted (see Theorem 3.4 in [17]): each row of U can be
written as

    U_i = ∇q_i + ∇×Φ_i,      (3.18)

where Φ is columnwise divergence free with n×(∇×Φ_i) = 0 on ∂Ω. Here we choose the q_i
to satisfy ∫_Ω q_i dx = 0. By taking the curl of both sides of this decomposition, it is
easy to see that ∇×U_i = ∇×(∇×Φ_i) = -ΔΦ_i, so that ||ΔΦ|| is bounded by ||∇×U|| and
Lemma 3.2 applies. Hence, combining (3.18), Lemma 3.2, and regularity assumption (2.9),
we bound |U|_1 by the terms of G_2. This proves (3.17) and, hence, the theorem for simply
connected Ω. The proof for general Ω, that is, when we assume only that ∂Ω is C^{1,1},
now follows by an argument similar to the proof of Theorem 3.7 in [17].
We now show that the last two terms in the definition of G_2 are necessary for the
bound (3.13) to hold, even with the extra boundary condition n×U = 0. We consider the
Stokes equations, so that δ = 0. Suppose first that we omit the term ||∇×U||^2 but
include the term ||∇ tr U||^2. We offer a two-dimensional counterexample; a
three-dimensional counterexample can be constructed in a similar manner. Choose any
ω ∈ D(Ω) such that Δ∇ω ≠ 0 and define the rows of U by U_i = ∇^⊥(∂_i ω), i = 1, 2.
Clearly, n×U = 0. It is easy to show that

    ∇·U = 0  and  tr U = 0.

However,

    ∇×U = -Δ∇ω ≠ 0

by construction. Thus, taking u = 0 and p = 0, the remaining terms of the homogeneous
functional reduce to a multiple of ||U||^2, which cannot bound ||U||^2_1. That is, since
ω ∈ D(Ω) is arbitrary, we may choose it so oscillatory that ||U||_1 / ||U|| is as large
as we like. This prevents the bound (3.13) from holding.
Next suppose we include the ||∇×U||^2 term but omit the ||∇ tr U||^2 term. For
k = 1, 2, ..., choose increasingly oscillatory scalar functions q_i = q_i^{(k)},
i = 1, 2, and set U_i = ∇q_i, i = 1, 2. Then n×U = 0 on ∂Ω and ∇×U = 0 in Ω, and the
remaining residuals of the homogeneous functional are controlled with C independent of
k, while ||U||_1 / ||U|| grows without bound in k. On the other hand, only the omitted
term ||∇ tr U||^2 grows correspondingly, which again prevents the bound (3.13) from
holding.
4. Concluding Remarks. Full regularity assumption (2.9) is needed in Theorem 3.2
only to obtain full H^1 product ellipticity of the augmented functional G_2 in (3.2).
This somewhat restrictive assumption is not necessary for functional G_1 in (3.1), which
supports an efficient practical algorithm (the H^{-1} norm in (3.1) can be replaced by a
discrete inverse norm or a simpler mesh-weighted norm; see [5] and [8] for analogous
inverse norm algorithms) and which has the weaker norm equivalence assured by Theorem 3.1.

Nevertheless, the principal result of this paper is Theorem 3.2, which establishes full
H^1 product ellipticity of least-squares functional G_2 for the generalized Stokes
system. Since we have assumed full H^2-regularity of the original Stokes (linear
elasticity) equations, we may then use this result to establish optimal finite element
approximation estimates and optimal multiplicative and additive multigrid convergence
rates. This can be done in precisely the same way that these results were established for
general second-order elliptic equations (see [11], Sections 3-5). We therefore omit this
development here. However, it is important to recognize that the ellipticity property is
independent of the Reynolds parameter ν (the Lamé constants λ and μ). This automatically
implies that the optimal finite element discretization error estimates and multigrid
convergence factor bounds are uniform in ν (in λ and μ). At first glance, it might appear
that the scaling of some of the H^1 product norm components might create a scale
dependence of our discretization and algebraic convergence estimates. However, the
results in [11] are based only on assumptions posed in an unscaled H^1 product norm, in
which the individual variables are completely decoupled; and since the constant ν appears
only as a simple factor in individual terms of the scaled H^1 norm, these assumptions are
equally valid in this case. On the other hand, for problems where the necessary H^1
scaling is not (essentially) constant, extension of the theory of Sections 3-5 of [11] is
not straightforward. Such is the case for convection-diffusion equations, which will be
treated in a forthcoming paper.
References
Estimates near the boundary for solutions of elliptic partial differential equations satisfying general boundary conditions II
Analysis of least-squares finite element methods for the Navier-Stokes equations
Accuracy of least-squares methods for the Navier-Stokes equations
Analysis of least-squares finite element methods for the Stokes equations
A least-squares approach based on a discrete minus one inner product for first order system
On the existence
Schwarz alternating procedure for elliptic problems discretized by least-squares mixed finite elements
A mixed finite element method for the Stokes problem: an acceleration-pressure formulation Appl
An error estimate of the least squares finite element methods for the Stokes problem in three dimensions Math.
The Finite Element Method for Elliptic Problems
analysis of some Galerkin least squares methods for the linear elasticity equations
Finite Element Methods for
Elliptic Problems in Nonsmooth Domains
Finite Element Methods for Viscous Incompressible Flows
Theoretical study of the incompressible Navier-Stokes equations by the least-squares method
A regularity result for the Stokes problem in a convex polygon
The Mathematical Theory of Viscous Incompressible Flow
Variational multigrid theory
270671 | On Krylov Subspace Approximations to the Matrix Exponential Operator. | Krylov subspace methods for approximating the action of matrix exponentials are analyzed in this paper. We derive error bounds via a functional calculus of Arnoldi and Lanczos methods that reduces the study of Krylov subspace approximations of functions of matrices to that of linear systems of equations. As a side result, we obtain error bounds for Galerkin-type Krylov methods for linear equations, namely, the biconjugate gradient method and the full orthogonalization method. For Krylov approximations to matrix exponentials, we show superlinear error decay from relatively small iteration numbers onwards, depending on the geometry of the numerical range, the spectrum, or the pseudospectrum. The convergence to exp$(\tau A)v$ is faster than that of corresponding Krylov methods for the solution of linear equations $(I-\tau A)x=v$, which usually arise in the numerical solution of stiff ordinary differential equations (ODEs). We therefore propose a new class of time integration methods for large systems of nonlinear differential equations which use Krylov approximations to the exponential function of the Jacobian instead of solving linear or nonlinear systems of equations in every time step. | Introduction
. In this article we study Krylov subspace methods for the approximation
of exp(-A)v when A is a matrix of large dimension, v is a given vector,
scaling factor which may be associated with the step size in a time integration
method. Such Krylov approximations were apparently first used in Chemical
Physics [20, 22, 17] and were more recently studied by Gallopoulos and Saad [10, 24];
see also their account of related previous work. They present Krylov schemes for exponential
propagation, discuss the implementation, report excellent numerical results,
and give some theoretical error bounds. As they also mention, these bounds are however
too pessimistic to explain the numerically observed error reductions. Moreover,
their error bounds do not make evident that, or when and why, Krylov methods
perform far better than standard explicit time stepping methods in stiff problems. A
further open question concerns the relationship between the convergence properties of
Krylov subspace methods for exponential operators and those for the linear systems
of equations arising in implicit time integration methods. In the present paper we
intend to clear up the error behavior.
When we wrote this paper, we were unaware of the important previous work by
Druskin and Knizhnerman [3, 4, 14, 15] who use a different approach to the analysis.
We will comment on the relationship of some of their results to ours in a note at the
end of this paper.
Our error analysis is based on a functional calculus of Arnoldi and Lanczos methods
which reduces the study of approximations of exp(τA)v to that of the corresponding
iterative methods for linear systems of equations. Somewhat oversimplified, it
may be said that the error of the mth iterate for exp(τA)v behaves like the minimum,
Mathematisches Institut, Universität Tübingen, Auf der Morgenstelle 10, D-72076 Tübingen,
Germany. E-mail: marlis@na.uni-tuebingen.de, lubich@na.uni-tuebingen.de
taken over all α > 0, of e^α multiplied with the error of the mth iterate for the solution
of a shifted linear system by the same Krylov subspace method. This minimum is usually
attained far from α = 0, especially for large iteration numbers. Unless a good preconditioner
for I − τA is available, the iteration for exp(τA)v converges therefore faster
than that for the solution of (I − τA)x = v. We do not know, however, of a way to "precondition"
the iteration for exp(τA)v.
Gallopoulos and Saad showed that the error of the mth iterate in the approximation
of exp(τA)v has a bound proportional to ‖τA‖^m/m!, which gives superlinear
convergence for m ≫ ‖τA‖. In many cases, however, superlinear error decay begins
for much smaller iteration numbers. For example, we will show that for symmetric
negative definite matrices A this occurs already for m ≈ √‖τA‖, whereas for skew-Hermitian
matrices with uniformly distributed eigenvalues substantial error reduction
begins in general only for m near ‖τA‖. We will obtain rapid error decay for m ≪ ‖τA‖
also for a class of sectorial, non-normal matrices. Convergence within the required tolerance
for such m ensures that the methods become superior to standard explicit
time stepping methods for large systems. For m ≤ ‖τA‖, our error bounds improve
upon those of [10] and [24] typically by a factor 2^{−m} e^{−c‖τA‖} with a c > 0. The analysis
explains how the error depends on the geometry of critical sets in the complex
plane, namely the numerical range of A for Arnoldi-based approximations, and the
location of the spectra or pseudospectra of A and the Krylov-Galerkin matrix H_m for
both Lanczos- and Arnoldi-based approximations. In our framework, it is also easily
seen that clustering of eigenvalues has similar beneficial effects in the Krylov subspace
approximation of exp(τA)v as in the iterative solution of linear systems of equations.
As mentioned above, exp(τA)v can often be computed faster than (I − τA)^{−1}v by
Krylov subspace methods. This fact has implications in the time integration of very
large systems of ordinary differential equations arising, e.g., in many-particle simulations
and from spatial discretizations of time-dependent partial differential equations.
It justifies renewed interest in ODE methods that use the exponential or related functions
of the Jacobian instead of solving linear or nonlinear systems of equations in
every time step. Methods of this type in the literature include the exponential Runge-Kutta
methods of Lawson [18] and Friedli [8], the adaptive Runge-Kutta methods of
Strehmel and Weiner [27] in their non-approximated form, exponentially fitted methods
of [5], and the exponential multistep methods of [9].
In the last section of this paper, we propose a promising new class of "exponential"
integration methods. With Krylov approximations, substantial savings can be
expected for large, moderately stiff systems of ordinary differential equations which
are routinely solved by explicit time-stepping methods despite stability restrictions of
the step size, or when implicit methods require prohibitively expensive Jacobians and
linear algebra.
The paper is organized as follows: In Section 2, we describe the general framework
and derive a basic error bound for the Arnoldi method. In Section 3, this leads to
specific error bounds for the approximation of exp(-A)v for various classes of matrices
A. Lanczos methods are studied in Section 4, which contains also error bounds for
BiCG and FOM. In Section 5, we introduce a class of time-stepping methods for large
systems of ODEs which replace the solution of linear systems of equations by multiplication
with φ(τA), where φ(z) = (e^z − 1)/z, whose Krylov subspace approximations
converge as fast as those for exp(τA)v.
Throughout the paper, ‖·‖ is the Euclidean norm or its induced matrix norm.
2. Arnoldi-based approximation of functions of matrices. In the sequel,
let A be a complex square matrix of (large) dimension N, and v ∈ C^N a given
vector of unit length, ‖v‖ = 1. The Arnoldi process generates an orthonormal basis
V_m = [v_1, ..., v_m] of the Krylov space K_m = span(v, Av, ..., A^{m−1}v) and an upper Hessenberg
matrix H_m of dimension m (which is the upper left part of its successor H_{m+1}) such
that
(2.1)   A V_m = V_m H_m + h_{m+1,m} v_{m+1} e_m^T,
where e_i is the ith unit vector in R^m. By induction this clearly implies, as noted in
[3, Theorem 2] and [24, Lemma 3.1],
(2.2)   q(A) v = V_m q(H_m) e_1   for every polynomial q of degree at most m − 1.
A standard use of the Arnoldi process is in the solution of linear equations [23], where
one approximates
(2.3)   (λI − A)^{−1} v ≈ V_m (λI − H_m)^{−1} e_1
when λ is not an eigenvalue of A, and hopefully not of H_m. The latter condition is
always satisfied when λ is outside the numerical range
F(A) = { x*Ax : x ∈ C^N, ‖x‖ = 1 },
since (2.1) implies H_m = V_m* A V_m and therefore
(2.4)   F(H_m) ⊂ F(A).
We now turn to the approximation of functions of A. Let f be analytic in a neighborhood
of F(A). Then
(2.5)   f(A) v = (1/(2πi)) ∮_Γ f(λ) (λI − A)^{−1} v dλ,
where Γ is a contour that surrounds F(A). In view of (2.3), we are led to replace this by
(2.6)   (1/(2πi)) ∮_Γ f(λ) V_m (λI − H_m)^{−1} e_1 dλ,
so that we approximate
(2.7)   f(A) v ≈ V_m f(H_m) e_1.
Such an approximation was proposed previously [22, 33, 3, 10], with different derivations.
In practice, we are then left with the task of computing the lower-dimensional
expression f(H_m)e_1, which for m ≪ N is usually much easier to compute than f(A)v,
e.g., by diagonalization of H_m. The above derivation of (2.7) also indicates how to
obtain error bounds: study the error in the Arnoldi approximation (2.3) of linear
systems and integrate their error bounds, multiplied with |f(λ)|, for λ varying along
a suitable contour Γ. This will actually be done in the present paper.
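As a concrete illustration of (2.7): the sketch below (illustrative code with hypothetical helper names `arnoldi` and `krylov_expm`; SciPy's dense `expm` stands in for the evaluation of exp(τH_m)e_1 on the small Hessenberg matrix) computes the Arnoldi approximation of exp(τA)v for a real matrix A.

```python
import numpy as np
from scipy.linalg import expm

def arnoldi(A, v, m):
    """Arnoldi process: orthonormal V_m and Hessenberg H_m with H_m = V_m* A V_m."""
    n = len(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):              # modified Gram-Schmidt orthogonalization
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:             # lucky breakdown: invariant subspace found
            return V[:, :j + 1], H[:j + 1, :j + 1]
        V[:, j + 1] = w / H[j + 1, j]
    return V[:, :m], H[:m, :m]

def krylov_expm(A, v, m, tau=1.0):
    """Approximate exp(tau*A) v by  ||v|| * V_m exp(tau*H_m) e_1,  as in (2.7)."""
    beta = np.linalg.norm(v)
    V, H = arnoldi(A, v, m)
    return beta * (V @ expm(tau * H)[:, 0])
```

Only m matrix-vector products with A are required, and the exponential is taken of the small m × m matrix H_m, never of the N × N matrix A.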
Our error bounds are based on Lemma 1 below. To prepare its setting, let E be a
convex, closed bounded set in the complex plane. Let φ be the conformal mapping
that carries the exterior of E onto the exterior of the unit circle {|w| > 1}, with
φ(∞) = ∞ and φ(z) ~ z/ρ as z → ∞. We note that ρ is the logarithmic capacity
of E. Finally, let Γ be the boundary curve of a piecewise smooth, bounded region G
that contains E, and assume that f is analytic in G and continuous on the closure of
G.
Lemma 1. Under the above assumptions, and if the numerical range of A is contained
in E, we have for every polynomial q_{m−1} of degree at most m − 1 the bound (2.8),
in which M is a constant, ℓ(∂E) is the length of the boundary curve ∂E of
E, and d(S) is the minimal distance between F(A) and a subset S of the complex
plane. If E is a straight line segment or a disk, then (2.8) holds with M = 2.
Remark. It will be useful to choose the integration contour dependent on m, in order
to balance the decay of φ^{−m} away from E against the growth of f outside E. For
entire functions f, such as the exponential function studied in detail below, this will
ultimately yield superlinear convergence. On the other hand, the liberty in choosing
the polynomial q_{m−1} will not materialize in the study of the exponential function.
Proof. (a) We begin by studying the error of (2.3) and consider a fixed λ for
the moment. Our argumentation in this part of the proof is inspired by [25]. Using
x_m = V_m (λI − H_m)^{−1} V_m* v, we rewrite the error as
(λI − A)^{−1} v − x_m = (λI − A)^{−1} Δ_m
with Δ_m = v − (λI − A) x_m. By (2.1) and the orthogonality of V_m,
Hence we have for arbitrary y
We note that
is a polynomial of degree ≤ m with p_m(λ) = 1. Conversely, for every such
polynomial, p_m(A)v is of the above form. To bound Δ_m, we recall ‖V_m‖ = 1 and use
the estimates ‖(λI − A)^{−1}‖ ≤ 1/d({λ}) and ‖(λI − H_m)^{−1}‖ ≤ 1/d({λ}),
which follow from [26, Thm. 4.1] and (2.4). We thus obtain
for every polynomial p_m of degree at most m with p_m(λ) = 1.
(b) It remains to bound p_m(A). Since
we have
For the special case when E is a line segment, we have A of the form A = αB + βI
with a Hermitian B and complex coefficients α, β, so that then
When E is a disk, p_m(z) will be chosen as a multiple of the mth power of (z minus the center),
and an inequality of Berger (see [1, p. 3]) then tells us that
In all these cases we thus have ‖p_m(A)‖ ≤ M max_{z∈E} |p_m(z)|,
with M as stated in Lemma 1.
(c) To proceed in the proof, we use near-optimality properties of Faber polynomials.
These have been employed previously in analyses of iterative methods by Eiermann
[6] and Nevanlinna [21]. Let φ_m(z) denote the Faber polynomial of degree m associated
with the region E. This is defined as the polynomial part of φ(z)^m, i.e.,
φ_m(z) = φ(z)^m + O(1/z) as z → ∞. We now choose the polynomial p_m(z) with the
normalization (2.12),
cf. [21, p. 76]. A theorem of Kövari and Pommerenke [16, Thm. 2] provides us with the
bound (2.14).
This implies max_{z∈E} |φ_m(z)| ≤ 2.
(d) Combining inequalities (2.10), (2.11), and (2.14) gives us
The proof is now completed by inserting this bound into the difference of formulas
(2.5) and (2.6) and taking account of (2.2).
Remark. Part (c) of the above proof, combined with Cauchy's integral formula, shows
that there exists a polynomial Π_{m−1}(z) of degree at most m − 1 such that
where δ is the minimal distance between Γ and E. This holds for the polynomial
Π_{m−1} obtained by integrating the polynomial of (2.12) against f along Γ. Polynomial approximation
bounds of this type are closely tied to Bernstein's theorem [19, Thm. III.3.19].
3. Approximation of the matrix exponential operator. In this section we
give error bounds for the Arnoldi approximation of e^{τA}v for various classes of matrices
A. We may restrict our attention to cases where the numerical range of A is contained
in the left half-plane, so that ‖e^{τA}‖ and, in view of (2.4), also ‖e^{τH_m}‖ are bounded by
unity. This assumption entails no loss of generality, since a shift from A to A − αI
changes both e^{τA}v and its approximation by a factor e^{−τα}.
The same bounds as in Theorems 2 to 6 below (even slightly more favorable ones)
are valid also for Krylov subspace approximations of φ(τA)v, with φ(z) = (e^z − 1)/z.
Theorem 2. Let A be a Hermitian negative semi-definite matrix with eigenvalues
in the interval [−4ρ, 0]. Then the error ε_m = ‖e^{τA}v − V_m e^{τH_m} e_1‖ in the Arnoldi
approximation of e^{τA}v is bounded in the following ways:
(3.1)   ε_m ≤ 10 e^{−m²/(5ρτ)}   for √(4ρτ) ≤ m ≤ 2ρτ,
(3.2)   ε_m ≤ 10 (ρτ)^{−1} e^{−ρτ} (eρτ/m)^m   for m ≥ 2ρτ.
Remark. It is instructive to compare the above error bounds with that of the conjugate
gradient method applied to the linear system (I − τA)x = v, whose error after m steps is
bounded by 2((√κ − 1)/(√κ + 1))^m with the condition number κ = 1 + 4ρτ.
This bound becomes small for m ≫ √(ρτ), but only with a linear decay.
Proof. We use Lemma 1 with E = [−4ρ, 0]. After applying the linear transformation
that takes E to the interval [−1, 1], the conformal mapping is Φ(ξ) = ξ + √(ξ² − 1).
As contour Γ we choose the parabola with right-most point γ that is
mapped to the parabola Π given by the parametrization
This parabola osculates the ellipse with foci ±1 and major semiaxis 1 + ε, which gives
us the error bounds (3.3) and (3.4), where |Φ(λ)| > 1 off the interval. The absolute value of Φ(λ)
is constant along every ellipse with foci ±1. Since the parabola Π is located outside this ellipse,
we have |Φ(λ)| ≥ r along Π for some r > 1. Hence we obtain from (3.3)
Fig. 3.1. Errors and error bounds for the symmetric example
Moreover, we have r ≥ e^{α√(2ε)} with α > 0.96 for ε ≤ 1/2 (and α > 0.98 for ε ≤ 1/4).
Minimizing e^{2ρτε} r^{−m} with respect to ε and inserting the minimizing ε in (3.4) results in the
bound (3.5), which together with ε_m ≤ 2 is a sharper version of (3.1). The condition ε ≤ 1/2 is
equivalent to m ≤ 2ρτ.
To obtain the bound (3.2), we insert in (3.4) a value of ε
which is close to the minimum for m ≫ ρτ. This yields for m ≥ 2ρτ the bound (3.6),
which is a sharper version of (3.2).
Finally we remark, in view of the proof of Theorem 3 below, that the bounds (3.1)
and (3.2) are also obtained when Γ is chosen as a composition of the part of the above
parabola contained in the right half-plane and two rays on the imaginary axis.
To give an illustration of our error bounds we consider the diagonal matrix A with
equidistantly spaced eigenvalues in the interval [−40, 0] and a random unit vector v of
dimension 1001. Fig. 3.1 shows the errors of the approximation to exp(A)v and those
of the cg approximation to (I − A)^{−1}v, which form nearly a straight line. Moreover, the
dashed line shows the error bounds (3.5) and (3.6), while the dotted line corresponds
to the error bound of [24, Corollary 4.6] for symmetric, negative
semi-definite matrices A.
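The setting of Fig. 3.1 is easy to re-create. The sketch below is our own illustrative code, not the authors' original experiment; a compact Arnoldi loop is included so the snippet is self-contained (for this Hermitian A it coincides with the Lanczos method), and the exact result exploits that A is diagonal.

```python
import numpy as np
from scipy.linalg import expm

def arnoldi(A, v, m):
    # compact Arnoldi with modified Gram-Schmidt; returns V_m and H_m = V_m* A V_m
    n = len(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:
            break
        V[:, j + 1] = w / H[j + 1, j]
    return V[:, :m], H[:m, :m]

rng = np.random.default_rng(0)
n = 1001
d = np.linspace(-40.0, 0.0, n)        # eigenvalues in [-4*rho, 0]: rho*tau = 10 for tau = 1
A = np.diag(d)
v = rng.standard_normal(n)
v /= np.linalg.norm(v)

exact = np.exp(d) * v                 # exp(A)v for diagonal A
errs = {}
for m in (5, 10, 20, 30, 40):
    V, H = arnoldi(A, v, m)
    errs[m] = np.linalg.norm(V @ expm(H)[:, 0] - exact)
    print(m, errs[m])
```

With 4ρ = 40 and τ = 1, superlinear decay is expected to set in around m ≈ √(4ρτ) ≈ 6, in line with Theorem 2, and the error reaches machine-precision levels well before m = ρτ · e would suggest.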
It is well known that Krylov subspace methods for the solution of linear systems of
equations benefit from a clustering of the eigenvalues. The same is true also for the
Krylov subspace approximation of e^{τA}v. This is actually not surprising in view of the
Cauchy integral representations (2.5), (2.6). As an example of such a result, we state
the following theorem. This might be generalized in various directions for different
types of clusterings and different types of matrices, but we will not pursue this further.
Theorem 3. Let A be a Hermitian negative semi-definite matrix with eigenvalues
contained in {λ_1} ∪ [−4ρ, 0]. Then the (m + 1)st error ε_{m+1} in the
Arnoldi approximation of e^{τA}v is bounded by the right-hand sides of (3.1) and (3.2).
Proof. The result is proved by using the polynomial (z − λ_1)/(λ − λ_1) · p_{m−1}(z)
instead of (2.12). The absolute value of the first factor is bounded by unity for z ∈
[−4ρ, 0] and Re λ ≥ 0. Hence we obtain the same error bounds as in Theorem 2 with
m replaced by m − 1.
For skew-Hermitian matrices A (with uniformly distributed eigenvalues) we cannot
show superlinear error decay for m < ρτ. The reason is, basically, that here the
conformal mapping φ maps the vertical line contour Re λ = ε to a curve with |φ(λ)| ≈ 1 + ε,
whereas in the symmetric negative definite case we have |φ(λ)| ≈ 1 + √(2ε) near the
right-most point of the contour. This
behavior affects equally the convergence of Krylov subspace methods for the solution
of linear systems (I − τA)x = v. For skew-Hermitian A, there is, in general, no
substantial error reduction for m < ρτ, and convergence is linear with a rate like
Theorem 4. Let A be a skew-Hermitian matrix with eigenvalues in an interval on
the imaginary axis of length 4ρ. Then the error in the Arnoldi approximation of e^{τA}v
is bounded by
(3.7)   ε_m ≤ 12 e^{−(ρτ)²/m} (eρτ/m)^m   for m ≥ 2ρτ.
Proof. We use Lemma 1; after a linear transformation the conformal mapping is
again Φ(ξ) = ξ + √(ξ² − 1). We choose the integration
contour as an ellipse with foci ±1 and minor semiaxis ε. The major
semiaxis is then a = √(1 + ε²), and the length of the contour is bounded by 2πa. In
addition, |Φ(λ)| is constant along the ellipse. With Lemma 1, we get for the error a bound
Fig. 3.2. Errors and error bounds for the skew-Hermitian example
which, upon inserting the optimal ε, gives the stated error bound. A sharper
bound for ρτ > 1/2, which is obtained by integrating over the parabola that osculates
the above ellipse at the right-most point, reads:
For a numerical illustration we choose the diagonal matrix A with 1001 equidistant
eigenvalues in [−20i, 20i], and a random vector v of unit length. Fig. 3.2 shows the
errors of the approximation to exp(A)v and those of the BiCG approximation to
(I − A)^{−1}v, which form nearly a straight line. The dashed line shows the error
bound (3.7); the dotted line corresponds to the bound given in
[10, Corollary 2.2].
Theorem 5. Let A be a matrix with numerical range contained in the disk |z + ρ| ≤ ρ.
Then the error in the Arnoldi approximation of e^{τA}v is bounded by
ε_m ≤ 12 e^{−ρτ} (eρτ/m)^m   for m ≥ 2ρτ.
Proof. We use Lemma 1 with E the given disk and with Γ a circle with radius rρ centered
at −ρ. Lemma 1 gives the bound
Setting r = m/(ρτ) gives the stated result.
The following is a worst-case example which shows nearly no error reduction for m ≤ ρτ.
Example. Let A be the bidiagonal matrix of dimension N that has −1 on the diagonal
and +1 on the subdiagonal. The numerical range of A is then contained in the disk
|z + 1| ≤ 1. For v = e_1, the Arnoldi process gives as H_m the m-dimensional version of A,
so that V_m e^{τH_m} e_1 contains the entries e^{−τ} τ^k/k! for k < m and zeros thereafter.
The error vector thus contains the entries e^{−τ} τ^k/k! for k ≥ m. The largest of these is,
for m ≈ τ, close to (2πτ)^{−1/2} by Stirling's formula.
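Since v = e_1 makes the Arnoldi basis the canonical one and H_m the leading m × m block of A, this example can be checked without running Arnoldi at all. The snippet below is an illustrative sketch with arbitrary choices n = 120 and τ = 30 (so ρτ = 30 here, as the disk radius is ρ = 1):

```python
import numpy as np
from scipy.linalg import expm

n, tau = 120, 30.0
A = -np.eye(n) + np.diag(np.ones(n - 1), -1)   # -1 on the diagonal, +1 on the subdiagonal
exact = expm(tau * A)[:, 0]                    # exp(tau*A) e_1: entries e^{-tau} tau^k / k!

errs = {}
for m in (10, 30, 70):
    approx = np.zeros(n)
    approx[:m] = expm(tau * A[:m, :m])[:, 0]   # V_m exp(tau*H_m) e_1 with H_m = A[:m, :m]
    errs[m] = np.linalg.norm(approx - exact)
    print(m, errs[m])
```

The error stays essentially at its initial level until m reaches the order of ρτ, and only then decays rapidly, which is exactly the worst-case behavior claimed above.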
Similar to Theorem 2, the onset of superlinear convergence begins already for m ≪ ρτ
when F(A) is contained in a wedge-shaped set. In particular, consider the conformal
mapping
which maps the exterior of the unit disk onto the exterior of the
bounded sectorial set S_θ in the left half-plane.
S_θ has a corner at 0 with opening angle θπ and is symmetric with respect to the real
axis.
Theorem 6. For some ρ > 0 and 0 < θ < 1, let the numerical range of A be contained
in ρ · S_θ. Then the error in the Arnoldi approximation of e^{τA}v is bounded by (3.8)
and (3.9), with an exponent α that involves 1/(1 − θ). The constants C and c > 0 depend only on θ.
Proof. In the course of this proof, C denotes a generic constant which takes on
different values on different occurrences. After the transformation λ → λ/ρ, we use
Lemma 1 and choose δ such that r/(1 + δ) is as required. The
right-most point of the integration contour is then controlled in terms of ρτ, and Lemma 1
yields a bound for the error.
For m ≫ (ρτ)^α, the right-hand side is minimized near the optimal ε;
inserting this ε gives (3.8).
For any r ≥ 2 and δ > 0, Lemma 1 gives a bound containing the factor
e^{ψ(r)ρτ},
which becomes (3.9) upon choosing r appropriately.
4. Lanczos-based approximation of functions of matrices. The Arnoldi
method unfortunately requires long recurrences for the construction of the Krylov
basis. The Lanczos method overcomes this difficulty by computing an auxiliary basis
W_m = [w_1, ..., w_m] which spans the Krylov subspace with respect to A* and w_1. The
Lanczos vectors v_j and w_j are constructed such that they satisfy a biorthogonality
condition, or block biorthogonality in case of the look-ahead version [7, 28], i.e., D_m :=
W_m* V_m is block diagonal. The look-ahead process ensures that D_m is well conditioned
when the index m terminates a block, which will be assumed of m in the sequel.
The Lanczos vectors can be constructed by short (mostly three-term) recurrences.
This results again in a matrix representation (2.1), but now with a block tridiagonal
H_m = D_m^{−1} W_m* A V_m. However, unlike the Arnoldi case, neither V_m nor W_m
are orthogonal matrices. It is usual to scale the Lanczos vectors to have unit norm,
in which case the norms of V_m and W_m are bounded by √m. Since H_m is now an
oblique projection of A, the numerical range of H_m is in general not contained in
F(A). Variants of Lemma 1, which apply in this situation, are given in the following
two lemmas. For the exponential function, Lemmas 7 and 8 lead to essentially the
same error bounds as given for the Arnoldi method in Theorems 5 and 6, except for
different constants. In Theorems 2, 3, and 4, Arnoldi and Lanczos approximations
coincide.
The first lemma works with the ε-pseudospectrum of A [32], defined by
Λ_ε(A) = {λ ∈ C : ‖(λI − A)^{−1}‖ ≥ ε^{−1}}.
Otherwise, the setting is again the one described before Lemma 1.
Lemma 7. If the contour Γ stays outside Λ_γ(A) ∪ Λ_γ(H_m), then the error of the Lanczos
approximation of f(A)v is bounded by (2.8) with a correspondingly modified constant M.
Proof. The proof modifies the proof of Lemma 1. For the Lanczos process we have
W_m* V_m = D_m ≠ I, and therefore
Noting x_m = V_m (λI − H_m)^{−1} D_m^{−1} W_m* v, we thus obtain
for every polynomial p_m of degree ≤ m with p_m(λ) = 1. By assumption we have that
the norms of both (λI − A)^{−1} and (λI − H_m)^{−1} are bounded by γ^{−1} for λ ∈ Γ. Using
further ‖p_m(A)‖ ≤ ℓ(∂E)/(2πε) · max_{z∈E} |p_m(z)| leads to (4.1),
which in turn yields the estimate stated in the lemma.
For a diagonalizable matrix A we let κ = ‖X‖ ‖X^{−1}‖,
where X is the matrix that contains the eigenvectors of A in its columns.
The following lemma involves only the spectrum σ(A) of A and uses once more
the setting of Lemma 1.
Lemma 8. Let A be diagonalizable and assume that σ(A) ⊂ E and that σ(H_m) is
enclosed by Γ. Then the Lanczos approximation of f(A)v
satisfies (2.8) with a constant M involving κ, where δ is the minimal
distance between σ(A) and Γ.
Proof. The result follows from (4.1) along the lines of parts (c) and (d) of the proof
of Lemma 1.
Remarks. (a) It is known that in generic situations, extreme eigenvalues of A are well
approximated by those of H_m for sufficiently large m [34]. For a contour Γ that is
bounded away from σ(A), one can thus expect that usually ‖(λI − H_m)^{−1}‖ is uniformly
bounded along Γ.
(b) Lemmas 7 and 8 apply also to the Arnoldi method, where D_m = I
and ‖V_m‖ = ‖W_m‖ = 1.
(c) The convexity assumption about E can be removed at the price of a larger
factor M. For E a continuum containing more than one point, one can use instead of
inequality (2.13) the estimate in the lemma on pp. 107f. in Volume III of [19].
The proofs of Lemmas 1, 7, and 8 provide error bounds for iterative methods for the
solution of linear systems of equations whose iterates are defined by a Galerkin condition
(2.3). This gives new error bounds for the biconjugate gradient method, where the
Krylov basis is constructed via the Lanczos process, and for the full orthogonalization
method, which is based on the Arnoldi process. The proofs can be extended to give
similar error bounds also for the (quasi-) minimization methods QMR and GMRES,
see [13].
5. A class of integration methods for large systems of ODEs. In the
numerical integration of very large stiff systems of ordinary differential equations y' =
f(y), Krylov subspace methods have been used successfully for the solution of the
linear systems of equations arising in fully or linearly implicit integration schemes [11,
2, 25]. These linear systems are of the form (I − γhA)x = b, where A is the Jacobian
of f evaluated near the current integration point, h is the step size, and γ is a method
parameter. The attraction with Krylov subspace methods lies in the fact that they
require only the computation of matrix-vector products Aw. When it is convenient,
these can be approximated as directional derivatives Aw ≈ (f(y + δw) − f(y))/δ, so
that the Jacobian A need never be formed explicitly. Our theoretical results as well
as computational experiments indicate that Krylov subspace approximations of e^{γhA}v
or φ(γhA)v, with φ(z) = (e^z − 1)/z,
converge faster than the corresponding iterations for (I − γhA)x = v, at least unless
a good preconditioner is at hand. This suggests the use of the following class of
integration schemes, in which the linear systems arising in a linearly implicit method
of Rosenbrock-Wanner type are replaced by multiplication with φ(γhA). Starting
from y_0 ≈ y(t_0), the scheme computes an approximation y_1 of y(t_0 + h).
Here the coefficients a_ij and b_i determine the method. The
internal stages are computed one after the other, with one multiplication
by φ(γhA) and a function evaluation at each stage. The simplest method of this type
is the well-known exponentially fitted Euler method
y_1 = y_0 + h φ(hA) f(y_0),
which is of order 2 and exact for linear differential equations y' = Ay + b with constant
A and b. It appears well suited as a basis for Richardson extrapolation. Here is another
example of such a method:
Theorem 9. The two-stage methods with suitably chosen coefficients (γ a free parameter)
are of order 3. For arbitrary step
sizes, they provide the exact solution for every linear system of differential equations
with constant matrix A and constant inhomogeneity b.
Proof. Taylor expansion in h of the exact and the numerical solutions shows that
the order conditions up to order 3, which correspond to the elementary differentials
f, f'f, f''(f, f), and f'f'f, are given by four equations.
Here all sums extend from 1 to s, and we have set α_i = Σ_j a_ij. Cf. the
order conditions for Rosenbrock methods in [12], p. 116, which differ from the present
order conditions only in the right-hand side polynomials in γ.
For the stated choice of γ, the right-hand side of the last order condition vanishes, and hence
this condition is automatically satisfied for every two-stage method with this γ.
With α_2 fixed, the remaining three equations yield the stated
method coefficients. Direct calculation shows that the method applied to y' = Ay + b
gives the exact solution, which is the claimed property.
Remarks. (a) With α_2 = 3/4, the method satisfies in addition the order condition
corresponding to the fourth-order elementary differential f'''(f, f, f). The order
conditions corresponding to f'f'f'f and f'f''(f, f) are satisfied independently of α,
so that the order condition corresponding to f''(f, f'f) is then the only fourth-order
condition that remains violated.
(b) For non-autonomous problems y' = f(t, y) it is useful to rewrite the equation
in autonomous form by adding the trivial equation t' = 1 and taking the Jacobian
of the extended system.
In particular, the method is then exact for every linear equation of the form y' =
Ay + b + tc, since this is rewritten as
(y, t)' = (Ay + tc + b, 1),
which is again a linear system with constant inhomogeneity.
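The exactness property used throughout this section is easy to verify numerically. The sketch below is our illustrative code, not the authors' implementation: it evaluates φ(z) = (e^z − 1)/z for matrices via a standard block-matrix device and performs one exponentially fitted Euler step. In a large-scale computation the product φ(hA)f(y_0) would instead be formed by the Krylov approximation of Section 2.

```python
import numpy as np
from scipy.linalg import expm, solve

def phi(Z):
    """phi(Z) = Z^{-1}(exp(Z) - I), the matrix version of (e^z - 1)/z.
    Evaluated via the block matrix exp([[Z, I], [0, 0]]), whose upper-right
    block equals phi(Z); this also handles singular Z (phi(0) = 1)."""
    m = Z.shape[0]
    B = np.zeros((2 * m, 2 * m))
    B[:m, :m] = Z
    B[:m, m:] = np.eye(m)
    return expm(B)[:m, m:]

def exp_euler_step(f, A, y, h):
    """One step of the exponentially fitted Euler method: y1 = y0 + h*phi(h*A) f(y0)."""
    return y + h * (phi(h * A) @ f(y))
```

For linear y' = Ay + b, one such step reproduces the exact solution e^{hA}y_0 + A^{−1}(e^{hA} − I)b for any step size h, in line with the order-2 method's exactness property.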
An efficient implementation and higher-order methods are currently under investigation.
Note added in the revised version. After finishing this paper we learned that
Druskin and Knizhnerman [3, 4] previously obtained an estimate similar to (3.5) for
the symmetric case, using a different proof. They give an asymptotic estimate with which they prove their bound using the Chebyshev series expansion of the exponential
function. In an extension of this technique to the non-Hermitian case, Knizhner-
man [14] derived error bounds in terms of Faber series for the Arnoldi method (2.7).
He showed an error bound given by a sum over k ≥ m of the Faber series coefficients of f, where the exponent α depends on the
numerical range of A. As one referee emphasizes, the Faber series approach could be
put to similar use as our Lemma 1. In fact, Leonid Knizhnerman showed to us in a
personal communication how it would become possible to derive a result of the type
of our Theorem 6 using (5.9). Our approach via Lemma 1 makes it more obvious to
see how the geometry of the numerical range comes into play. An example similar to
that after Theorem 5 is given in [15, §3]. We thank Anne Greenbaum and two referees
for pointing out these references and Leonid Knizhnerman for providing a commented
version of the Russian paper [14]. Error bounds via Chebyshev and Faber series, for
the related problem of approximating matrix functions by methods that generalize
semi-iterative methods for linear systems, were given by Tal-Ezer [29, 30, 31].
Acknowledgement. We are grateful to Peter Leinen and Harry Yserentant for providing the initial motivation for this work.
References
Numerical Ranges of Operators on Normed Spaces and of Elements of Normed Algebras
Two polynomial methods of calculating functions of symmetric matrices
Krylov subspace approximations of eigenpairs and matrix functions in exact and computer arithmetic
Uniform Numerical Methods for Problems with Initial and Boundary Layers
On semiiterative methods generated by Faber polynomials
An implementation of the look-ahead Lanczos algorithm for non-Hermitian matrices
Verallgemeinerte Runge-Kutta-Verfahren zur Lösung steifer Differentialgleichungen
A method of exponential propagation of large systems of stiff nonlinear differential equations
Efficient solution of parabolic equations by Krylov approximation methods
Iterative solution of linear equations in ODE codes
Solving Ordinary Differential Equations II
Error analysis of Krylov methods in a nutshell
Computation of functions of unsymmetric matrices by means of Arnoldi's method
Error bounds in Arnoldi's method: the case of a normal matrix
Propagation methods for quantum molecular dynamics
Generalized Runge-Kutta processes for stable systems with large Lipschitz con- stants
Theory of Functions of a Complex Variable
New approach to many-state quantum dynamics: The recursive- residue-generation method
Convergence of Iterations for Linear Equations
Unitary quantum time evolution by iterative Lanczos reduction
Krylov subspace methods for solving large unsymmetric linear systems
Analysis of some Krylov subspace approximations to the matrix exponential operator
Analysis of the look-ahead Lanczos algorithm
Spectral methods in time for hyperbolic problems
Spectral methods in time for parabolic problems
Polynomial approximation of functions of matrices and applications
Pseudospectra of matrices
An iterative solution method for solving f(A)x
A convergence analysis for nonsymmetric Lanczos algorithms
Star Unfolding of a Polytope with Applications

Abstract. We introduce the notion of a star unfolding of the surface ${\cal P}$ of a three-dimensional convex polytope with n vertices, and use it to solve several problems related to shortest paths on ${\cal P}$. The first algorithm computes the edge sequences traversed by shortest paths on ${\cal P}$ in time $O(n^6 \beta (n) \log n)$, where $\beta (n)$ is an extremely slowly growing function. A much simpler $O(n^6)$ time algorithm that finds a small superset of all such edge sequences is also sketched. The second algorithm is an $O(n^{8}\log n)$ time procedure for computing the geodesic diameter of ${\cal P}$: the maximum possible separation of two points on ${\cal P}$ with the distance measured along ${\cal P}$. Finally, we describe an algorithm that preprocesses ${\cal P}$ into a data structure that can efficiently answer the queries of the following form: "Given two points, what is the length of the shortest path connecting them?" Given a parameter $1 \le m \le n^2$, it can preprocess ${\cal P}$ in time $O(n^6 m^{1+\delta})$, for any $\delta > 0$, into a data structure of size $O(n^6m^{1+\delta})$, so that a query can be answered in time $O((\sqrt{n}/m^{1/4}) \log n)$. If one query point always lies on an edge of ${\cal P}$, the algorithm can be improved to use $O(n^5 m^{1+\delta})$ preprocessing time and storage and guarantee $O((n/m)^{1/3} \log n)$ query time for any choice of $m$ between 1 and $n$.

1 Introduction
The problem of computing shortest paths in Euclidean space amidst polyhedral obstacles arises
in planning optimal collision-free paths for a given robot, and has been widely studied. In two
dimensions, the problem is easy to solve and a number of efficient algorithms have been developed,
see e.g. [SS86, Wel85, Mit93]. However, the problem becomes significantly harder in three dimen-
sions. Canny and Reif [CR87] have shown it to be NP-hard, and the fastest available algorithm
runs in singly-exponential time [RS89, Sha87]. This has motivated researchers to develop efficient
approximation algorithms [Pap85, Cla87] and to study interesting special cases [MMP87, Sha87].
An earlier version of this paper was presented at the Second Scandinavian Workshop on Algorithm Theory
[AAOS90]. Part of the work was carried out when the first two authors were at Courant Institute of Mathematical
Sciences, New York University and later at Dimacs (Center for Discrete Mathematics and Theoretical Computer Sci-
ence), a National Science Foundation Science and Technology Center - NSF-STC88-09648, and the fourth author
was at the Department of Computer Science, The Johns Hopkins University. The work of the first author is supported
by National Science Foundation Grant CCR-91-06514. The work of the second author was also partially supported by
an AT&T Bell Laboratories Ph.D. Scholarship and NSF Grant CCR-92-11541. The third author's work is supported
by NSF grants CCR-88-2194 and CCR-91-22169.
† Computer Science Department, Duke University, Durham, NC 27706 USA
‡ Computer Science Department, Polytechnic University, Brooklyn, NY 11201 USA
§ Department of Computer Science, Smith College, Northampton, MA 01063 USA
¶ Rm. 3B-412, AT&T Bell Laboratories, 600 Mountain Ave., P.O. Box 636, Murray Hill, NJ 07974 USA
One of the most widely studied special cases is computing shortest paths along the surface of a convex polytope [SS86, MMP87, Mou90]; this problem was originally formulated by H. Dudeney in 1903; see [Gar61, p. 36]. Sharir and Schorr presented an O(n^3 log n) algorithm for this problem, which was subsequently improved by Mitchell et al. [MMP87] to O(n^2 log n), and then by Chen and Han to O(n^2) [CH90].
In this paper we consider three problems involving shortest paths on the surface P of a convex polytope in R^3. A shortest path on P is identified uniquely by its endpoints and the sequence of edges that it encounters. Sharir [Sha87] proved that no more than O(n^7) distinct sequences of edges are actually traversed by the shortest paths on P. This bound was subsequently improved to Θ(n^4) [Mou85, SO88]. Sharir also gave an O(n^8 log n) time algorithm to compute an O(n^7)-size superset of shortest-path edge sequences. However, computing the exact set of shortest-path edge sequences seems to be very difficult. Schevon and O'Rourke [SO89] presented an algorithm that computes the exact set of all shortest-path edge sequences and also identifies, in logarithmic time, the edge sequences traversed by all shortest paths connecting a given pair of query points lying on edges of P. The sequences can be explicitly generated, if necessary, in time proportional to their length. Their algorithm, however, requires O(n^9 log n) time and O(n^8) space. 1
In this paper we propose two edge-sequence algorithms. The first is a very simple O(n^6) algorithm to compute a superset of shortest-path edge sequences, thus improving the result of [Sha87]; it is described in Section 5. The second computes the exact set of shortest-path edge sequences in O(n^6 β(n) log n) time, where β(·) is an extremely slowly growing function, a significant improvement over the previously mentioned O(n^9 log n) algorithm. The computation of the collection of all shortest-path edge sequences on a polytope is an intermediate step of several algorithms [Sha87, OS89], and is of interest in its own right.
The second problem studied in this paper is that of computing the geodesic diameter of P, i.e., the maximum distance along P between any two points on P. O'Rourke and Schevon [OS89] gave an O(n^14 log n) time procedure for determining the geodesic diameter of P. In [AAOS90], we presented a simpler and faster algorithm whose running time is O(n^10). An even faster O(n^8 log n) algorithm is presented in the current version of the paper.
The third problem involves answering queries of the form: "Given x, y ∈ P, determine the distance between x and y along P." Given a parameter n^2 ≤ s ≤ n^4, we present a method for preprocessing P, in O(n^4 s^{1+δ}) time, into a data structure of size O(n^4 s^{1+δ}), for any δ > 0, so that a query can be answered in time O((n/s^{1/4}) log^2 n). If x is known to lie on an edge of the polytope, the preprocessing and storage requirements are reduced to O(n^3 s^{1+δ}) and the query time becomes O((n/s^{1/3}) log^2 n), for n^2 ≤ s ≤ n^3.
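As a sanity check between this parameterization and the one used in the abstract (a sketch assuming the substitution s = n^2 m, which is not stated explicitly in the text), the two forms of the query bounds coincide:

```python
# Hypothetical numeric check that the two parameterizations of the
# query bound coincide under the substitution s = n^2 * m:
#   n / s^(1/4) = sqrt(n) / m^(1/4)   and   n / s^(1/3) = (n/m)^(1/3)
def query_bound_s(n, s, exp):      # body form: n / s^exp
    return n / s**exp

def query_bound_m_quarter(n, m):   # abstract form: sqrt(n) / m^(1/4)
    return n**0.5 / m**0.25

def query_bound_m_third(n, m):     # abstract form: (n/m)^(1/3)
    return (n / m)**(1/3)

n, m = 100.0, 7.0
s = n**2 * m
assert abs(query_bound_s(n, s, 0.25) - query_bound_m_quarter(n, m)) < 1e-9
assert abs(query_bound_s(n, s, 1/3) - query_bound_m_third(n, m)) < 1e-9
```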
Our algorithms are based on a common geometric concept, the star unfolding. Intuitively, the
star unfolding of P with respect to a point x 2 P can be viewed as follows. Suppose there exists a
unique shortest path from x to every vertex of P . The object obtained after removing these n paths
from P is the star unfolding of P . Remarkably, the resulting set is isometric to a simple planar
polygon and the structure of shortest paths emanating from x on P corresponds to a certain Voronoi
diagram in the plane [AO92]. Together with relative stability of the combinatorial structure of the
unfolding as x moves within a small neighborhood on P , these properties facilitate the construction
of efficient algorithms for the above three problems.
1 A preliminary version of this algorithm [SO88] erroneously claimed a time complexity of O(n^7 2^{α(n)} log n); this claim was corrected in [SO89]. Hwang et al. [HCT89] claimed to have a more efficient procedure for solving the same problem, but their argument as stated is flawed, and no corrected version has appeared in the literature.
Chen and Han [CH90] have independently discovered the star unfolding and used it for computing
the shortest-path information from a single fixed point on the surface of a polytope. They
however take the unfoldability proven by Aronov and O'Rourke [AO92] for granted.
This paper is organized as follows. In Section 2, we formalize our terminology and list some
basic properties of shortest paths. Section 3 defines the star unfolding and establishes some of its
properties. Section 4 sketches an efficient algorithm to compute a superset of all possible shortest-path
edge sequences, and in Section 5 we present an algorithm for computing the exact set of
sequences; both algorithms are based on the star unfolding. In Section 6 we again use the notion
of star unfolding to obtain a faster algorithm for determining the geodesic diameter of a convex
polytope. Section 7 deals with shortest-path queries. Section 8 contains some concluding remarks
and open problems.
2 Preliminaries

We begin by reviewing the geometry of shortest paths on convex polytopes.
Let P be the surface of a polytope with n vertices. We refer to vertices of P as corners;
the unqualified terms face and edge are reserved for faces and edges of P . We assume that P is
triangulated. This does not change the number of faces and edges of P by more than a multiplicative
constant, but simplifies the description of our algorithms.
2.1 Geodesics and Shortest Paths
A path - on P that cannot be shortened by a local change at any point in its relative interior is
referred to as a geodesic. Equivalently, a geodesic on the surface of a convex polytope is either
a subsegment of an edge or a path that (1) does not pass through corners, though may possibly
terminate at them, (2) is straight near any point in the interior of a face and (3) is transverse to
every edge it meets in such a fashion that it would appear straight if one were to "unfold" the
two faces incident on this edge until they lie in a common plane; see, for example, Sharir and
Schorr [SS86]. The behavior of a geodesic is thus fully determined by its starting point and initial
direction. In the following discussion we disregard the geodesics lying completely within a single
edge of P . Given the sequence of edges a geodesic traverses (i.e., meets) and its starting and ending
points, the geodesic itself can be obtained by laying the faces that it visits out in the plane, so that
adjacent faces share an edge and lie on opposite sides of it, and then connecting the (images of) the
two endpoints by a straight-line segment. In particular, the sequence of traversed edges together
with the endpoints completely determine the geodesic.
Trivially, every shortest path along P is a geodesic, and no shortest path meets a face or an edge more than once. We call the length of a shortest path between two points p and q the geodesic distance between p and q, and denote it by d(p, q). The following additional properties of shortest paths are crucial for our analysis.
Lemma 2.1 (Sharir and Schorr [SS86]) Let π_1 and π_2 be distinct shortest paths emanating from x. Let y be a point distinct from x. Then either one of the paths is a subpath of the other, or neither π_1 nor π_2 can be extended past y while remaining a shortest path. 2
Corollary 2.2 Two shortest paths cross at most once. 2
Lemma 2.3 If π_1 and π_2 are two distinct shortest paths connecting x, y ∈ P, each of the two connected components of P \ (π_1 ∪ π_2) contains a corner.
Proof: First, Lemma 2.1 implies that removal of π_1 ∪ π_2 splits P into exactly two components. If one of the two components of P \ (π_1 ∪ π_2) contained no corners, π_1 and π_2 would have to traverse the same sequence of edges and faces. However, there exists at most one geodesic connecting a given pair of points and traversing a given sequence of edges and faces. 2
2.2 Edge Sequences and Sequence Trees
A shortest-path edge sequence is the sequence of edges intersected by some shortest path π connecting two points on P, in the order met by π. Such a sequence is maximal if it cannot be extended in either direction while remaining a shortest-path edge sequence; it is half-maximal if no extension is possible at one of the two ends. It has been shown by Schevon and O'Rourke [SO88] that the maximum total number of half-maximal sequences is Θ(n^3).
Observe that every shortest-path edge sequence σ is a prefix of some half-maximal sequence, namely the one obtained by extending σ maximally at one end. Thus an exhaustive list of O(n^3) half-maximal sequences contains, in a sense, all the shortest-path edge-sequence information of P. More formally, given an arbitrary collection of edge sequences emanating from a fixed edge e, let the sequence tree Σ of this set be the tree with all distinct non-empty prefixes of the given sequences as nodes, with the trivial sequence consisting solely of e as the root, and such that σ is an ancestor of σ′ in the tree if and only if σ is a prefix of σ′ [HCT89]. The Θ(n^3) bound on the number of half-maximal sequences implies that the collection of O(n) sequence trees, obtained by considering all shortest-path edge sequences from each edge of P in turn, has a total of Θ(n^3) leaves and Θ(n^4) nodes in the worst case.
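A sequence tree is simply a trie over edge sequences; a minimal sketch (with edges represented as strings, an illustrative choice not tied to the paper's data structures):

```python
# Minimal sequence-tree (trie) sketch: nodes are the distinct non-empty
# prefixes of the given edge sequences; the root is the one-edge
# sequence (e,); sigma is an ancestor of sigma' iff it is a prefix.
def build_sequence_tree(sequences):
    root = {}
    for seq in sequences:          # each seq starts with the fixed edge e
        node = root
        for edge in seq:
            node = node.setdefault(edge, {})
    return root

def count_nodes(node):
    return sum(1 + count_nodes(child) for child in node.values())

def count_leaves(node):
    if not node:
        return 1
    return sum(count_leaves(child) for child in node.values())

# Three sequences emanating from edge 'e':
seqs = [('e', 'a', 'b'), ('e', 'a', 'c'), ('e', 'd')]
tree = build_sequence_tree(seqs)
assert count_nodes(tree) == 5     # prefixes: e, ea, eab, eac, ed
assert count_leaves(tree) == 3    # maximal sequences: eab, eac, ed
```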
2.3 Ridge Trees and the Source Unfolding
The shortest paths emanating from a fixed source x 2 P cover the surface of P in a way that can
be naturally represented by "unfolding" the paths to a planar layout around x. This unfolding, the
"source unfolding," has been studied since the turn of the century. We will define it precisely in a
moment. A second way to organize the paths in the plane is the "star unfolding," to be defined in
Section 3. This is not quite as natural, and of more recent lineage. Our algorithms will be built
around the star unfolding, but some of the arguments do refer to the source unfolding as well.
Given two points x, y on P, we say y is a ridge point with respect to x if there is more than one shortest path between x and y. Ridge points with respect to x form a ridge tree T_x embedded on P, 2 whose leaves are corners of P, and whose internal vertices have degree at least three and correspond to points of P with three or more distinct shortest paths to x. In the degenerate situation where x happens to lie on the ridge tree of some corner p, then p will not be a leaf of T_x, but rather lie internal to T_x; so in general not all corners will appear as leaves of T_x. We define a ridge as a maximal subset of T_x consisting of points with exactly two distinct shortest paths to x, and containing no corners of P. These are the "edges" of T_x. Ridges are open geodesics [SS86]; a
stronger characterization of ridges is given in Lemma 2.4. Figs. 1 and 2 show two views of a ridge
tree on a pyramid.
2 For smooth surfaces (Riemannian manifolds), the ridge tree is known as the "cut locus" [Kob67].
Figure 1: Pyramid, front view: source x, shortest paths to five vertices (solid), ridge tree (dashed). Coordinates of vertices are (±1, ±1, ...).
Figure 2: Pyramid, side view of Fig. 1. The ridges incident to p_2 and p_4 lie nearly on the edge.
We will refer to a point y ∈ P as a generic point if it is not a ridge point with respect to any
corner of P . The maximal connected portion of a face (resp. an edge) of P consisting entirely of
generic points will be called a ridge-free region (resp. an edgelet).
Figure 3: Ridge-free regions for Fig. 1. The trees T_{p_i} are shown dashed (e.g., T_{p_3} is the 'X' on the bottom face). The ridge-free region containing x is shaded darker.
If we cut P along the ridge tree T_x and isometrically embed the resulting set in R^2, we obtain
the source unfolding of [OS89]. 3 In the source unfolding, the ridges lie on the boundary of the
unfolding, while x lies at its "center," which results in a star-shaped polygon [SS86]; see Fig. 4. Let
a peel be the closure of a connected component of the set obtained by removing from P both the
ridge tree T x and the shortest paths from x to all corners. A peel is isometric to a convex polygon
[SS86]. Each peel's boundary consists of x, the shortest paths to two corners p and p′ of P, and the unique path in T_x connecting p to p′. A peel can be thought of as the collection of all the shortest paths emanating from x between xp and xp′. (The peel between xp_1 and xp_5 is shaded in Fig. 4.)
to exploit Corollary 2.2. This characterization seems to be new.
Lemma 2.4 Every ridge of the ridge tree T_x, for any point x ∈ P, is a shortest path.
Proof: An edge π of the ridge tree is a geodesic consisting of points that have two different shortest paths to x [SS86]. Suppose π is not a shortest path. Then there must be two points a, b ∈ π so that the portion π′ of π between a and b is a shortest path, but there is another shortest path, say π″, connecting them. Refer to Figure 5. By Lemma 2.1, π′ ∩ π″ = {a, b}. Let α_1 and α_2 be the two shortest paths from x to a, and β_1 and β_2 be the two shortest paths from x to b. Notice that π′, π″, α_1, α_2, β_1, β_2 do not meet except at the endpoints, by Lemma 2.1. In particular, we can relabel these paths so that the region bounded by (α_1, π′, β_1) is as illustrated in the figure, so that α_1 and β_1 approach π′ "from the same side." There are two cases to consider:
3 The same object is called U(P) in [SS86], "planar layout" in [Mou85], and "outward layout" in [CH90]. For
Riemannian manifolds, it is the "exponential map" [Kob67].
Figure 4: Source unfolding for the example in Figs. 1 and 2. Shortest paths to vertices (solid), polytope edges (dashed), ridges (dotted). One peel is shaded.
Figure 5: Illustration of the proof of Lemma 2.4. Here x is the source and π a geodesic ridge, with π″ a shortest path. The region Δ cannot contain any vertices of P.
Case 1: x ∉ π″. Then π″ does not meet α_1 or α_2 except at a, by Lemma 2.1. Similarly, π″ does not meet β_1 or β_2 except at b. Thus, without loss of generality, we can assume that π″ lies in the portion Δ of P bounded by π′, α_1, and β_1, and not containing α_2 or β_2. Now, considering the source unfolding from x, we observe that Δ is (isometric to) a triangle contained within a single peel. Δ is the area swept by one class of shortest paths from x to y as y ranges over π′. In particular, Δ contains no corners. On the other hand, π′ and π″ are distinct shortest paths connecting a to b, so by Lemma 2.3 each of the two sets obtained from P by removing π′ ∪ π″ has to contain a corner of P. However, one of these sets is entirely contained in Δ, a contradiction.
Case 2: x ∈ π″. As π″ and α_1 can be viewed as emanating from a and having x in common, and π″ extends past x, Lemma 2.1 implies that α_1 is a prefix of π″. Similarly, α_2 is a prefix of π″, contradicting the distinctness of α_1 and α_2. 2
Remark. Case 2 in the above proof is vacuous if x is a corner, which is the case in our applications of this lemma.
As defined, T_x is a tree with n vertices of degree less than 3, and thus has Θ(n) vertices and edges. However, the worst-case combinatorial size of T_x jumps from Θ(n) to Θ(n^2) if one takes into account the fact that a ridge is a shortest path comprised of as many as Θ(n) line segments on P in the worst case, and it is possible to exhibit a ridge tree for which the number of ridge-edge incidences is Θ(n^2). For simplicity we assume that ridges intersect each edge of P transversely.
3 Star Unfolding
In this section we introduce the notion of the star unfolding of P and describe its geometric
and combinatorial properties. Working independently, both Chen and Han [CH90] [CH91] and
Rasch [Ras90] have used the same notion, and in fact the idea can be found in Aleksandrov's
work [Ale58, p.226] [AZ67, p.171].
3.1 Geometry of the Star Unfolding
Let x ∈ P be a generic point, so that there is a unique shortest path connecting x to each corner of P. These paths are called cuts and are comprised of cut points (see Fig. 1). If P is cut open along these cuts and embedded isometrically in the plane, then, just as with the source unfolding, the result is a non-self-overlapping simple polygonal region, a two-dimensional complex that we call the star unfolding S_x. That the star unfolding avoids overlap is by no means a straightforward claim; it was first established in [AO92]:
Lemma 3.1 (Aronov and O'Rourke [AO92]) If viewed as a metric space with the natural definition
of interior metric, S x is isometric to a simple polygon in the plane (with the internal geodesic
metric).
The polygonal boundary @S x consists entirely of edges originating from cuts. The vertices of S x
derive from the corners of P and from the source x. An example is shown in Fig. 6. More complex
Figure 6: Construction of the star unfolding corresponding to Figs. 1 and 2. S_x is shaded. The superimposed dashed edges show the "natural" unfolding obtained by cutting along the four edges incident to p_3. The A, B, C, D, and E labels indicate portions of S_x derived from those faces; the relative neighborhood of each x_i derives from A.
examples will be shown in Fig. 10.
The cuts partition the faces of P into subfaces, which map to what we call the plates of S x , each
a compact convex polygon with a constant number of edges. See Fig. 7. We consider these plates
Figure 7: Plates corresponding to Fig. 6. The square base E is partitioned into two triangles.
to be the faces of the two-dimensional complex S x . We assume that the complex carries with it
labeling information consistent with P .
Somewhat abusing the notation, we will freely switch between viewing S x as a complex and
as a simple polygon embedded in the plane. In particular, a path π ⊂ S_x will be referred to as a
segment if it corresponds to a straight-line segment in the planar embedding of S x . Note that every
segment in S x is a shortest path in the intrinsic metric of the complex, but not every shortest path
in S x is a segment, as some shortest paths in S x might bend at corners.
For p ∈ P, let U(p) be the set of points in S_x to which p maps; U is the "unfolding" map (with respect to x). U(p) is a single point whenever p is not a cut point, while U(x) is a set of n distinct points in S_x. A non-corner point y ∈ P distinct from x and lying on a cut has exactly two unfolded images in S_x. The corners of P map to single points. In particular, we have:
Lemma 3.2 (Sharir and Schorr [SS86]) For a point y ∈ P, any shortest path π from x to y maps to a segment of S_x connecting an element of U(y) to an element of U(x). 2
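A consequence of Lemma 3.2 is that geodesic distances can be searched among straight segments between planar images; a minimal sketch, assuming the image coordinates are given and that the minimizing segment actually lies in S_x (toy coordinates invented for the example, not computed from a real unfolding):

```python
from math import hypot

# If the planar images U(x) and U(y) are known, every shortest path
# from x to y appears as a straight segment between some pair of
# images, so the geodesic distance is bounded below by the minimum
# pairwise Euclidean distance (and equals it when the minimizing
# segment lies inside S_x -- an assumption of this sketch).
def min_image_distance(images_x, images_y):
    return min(hypot(a[0] - b[0], a[1] - b[1])
               for a in images_x for b in images_y)

U_x = [(0.0, 0.0), (4.0, 0.0)]   # hypothetical source images
U_y = [(1.0, 1.0), (6.0, 2.0)]   # hypothetical images of y
assert min_image_distance(U_x, U_y) == hypot(1.0, 1.0)
```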
There is a view of S x that relates it to the source unfolding: the star unfolding is just an "inside-
out" version of the source unfolding, in the following sense. The star unfolding can be obtained by
stitching peels together along ridges; see Fig. 8. The source unfolding is obtained by gluing them
along the cuts. Compare with Fig. 4. (A peel was defined in Section 2.3 as a subset of P, but, slightly abusing the terminology, we also use this term to refer to the corresponding set of points in the source or star unfolding.)
We next define the pasting tree \Pi x as the graph whose nodes are the plates of S x , with two
nodes connected by an arc if the corresponding plates share an edge in S x . See Fig. 9. For a generic
Figure 8: Ridge tree T_x ⊂ S_x.
point x, Π_x is a tree with O(n^2) nodes, as it is the dual of a convex partition of a simple polygon without Steiner points. (If x were a ridge point of some corner, S_x would not be connected and Π_x would be a forest.) Π_x has only n leaves, corresponding to the triangular plates incident to the n images of x in S_x or, equivalently, to x in P. By Lemma 3.2, any shortest path from x to y ∈ P corresponds to a simple path in Π_x, originating at one of the leaves. Thus, the O(n^3) edge sequences corresponding to the simple paths that originate from leaves of Π_x include all shortest-path edge sequences emanating from x. In fact, there are O(n^2) maximal edge sequences in this set, one for each pair of leaves.
In the following sections we will need the concept of the "kernel" of a star unfolding. Number the corners p_1, ..., p_n in the order in which cuts emanate from x. Number the n source images (the elements of U(x)) x_1, ..., x_n, so that ∂S_x is the cycle x_1 p_1 x_2 p_2 ... x_n p_n x_1, comprised of 2n segments (see Fig. 6). The kernel is a subset of S_x, but to define it it is most convenient to view S_x as embedded in the plane as a simple polygon. Consider the polygonal cycle p_1 p_2 ... p_n p_1. We claim that it is the boundary of a simple polygon fully contained in (the embedding of) S_x. Indeed, each line segment p_i p_{i+1} 4 is fully contained in the peel sandwiched between x p_i and x p_{i+1}. Thus the line segments p_i p_{i+1} are segments in S_x in the sense defined above and indeed form a simple cycle. The simple n-gon bounded by this cycle is referred to as the kernel K_x of the star unfolding S_x. An equivalent way of defining K_x is by removing from S_x all triangles △p_i x_{i+1} p_{i+1}. Fig. 10 illustrates the star unfolding and its kernel for several randomly generated polytopes. 5 Note that neither set is necessarily star-shaped.
We extend the definition of the map U to sets in the natural way by putting U(Q) = ∪_{q∈Q} U(q).
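The equivalent "remove n triangles" definition of K_x can be checked numerically on a toy alternating boundary cycle x_1 p_1 x_2 p_2 ... (the coordinates below are invented for illustration and do not come from an actual polytope unfolding): the area of S_x equals the area of K_x plus the areas of the removed triangles p_i x_{i+1} p_{i+1}.

```python
# Shoelace area of a simple polygon given as a vertex list.
def area(poly):
    s = 0.0
    for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

# Toy alternating boundary cycle x_1 p_1 x_2 p_2 x_3 p_3 x_4 p_4:
# p_i at the corners of a small square, x_i bulging outward beyond it.
p = [(1, 0), (0, 1), (-1, 0), (0, -1)]
x = [(1, -1), (1, 1), (-1, 1), (-1, -1)]
boundary = [v for pair in zip(x, p) for v in pair]   # x1 p1 x2 p2 ...

kernel = p
triangles = [[p[i], x[(i + 1) % 4], p[(i + 1) % 4]] for i in range(4)]
total = area(kernel) + sum(area(t) for t in triangles)
assert abs(area(boundary) - total) < 1e-12
```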
The main property of the kernel that we will need later is:
4 Here and thereafter, indices are taken modulo n.
5 The unfoldings were produced with code written by Julie DiBiase and Stacia Wyman of Smith College.
Figure 9: Pasting tree Π_x for Fig. 7: one node per plate.
Lemma 3.3 The image of the ridge tree is completely contained within the kernel, which is itself a subset of the star unfolding: U(T_x) ⊂ K_x ⊂ S_x.
Proof: Since K_x can be defined by subtraction from S_x, K_x ⊂ S_x is immediate. The ridge tree T_x can be thought of as the union of the peel boundaries that do not come from cuts. These boundaries remain when we remove the triangles △p_i x_{i+1} p_{i+1} from S_x to form K_x (see Fig. 8). 2
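Since ridge points have two or more shortest paths to x, which Lemma 3.2 maps to segments from distinct source images, their images in the kernel are equidistant from at least two source images. A brute-force classification sketch, with invented sample coordinates:

```python
from math import dist

# A point of the kernel lies on the image of the ridge tree exactly
# when it is equidistant from (at least) two nearest source images;
# otherwise it lies in the open Voronoi cell of a unique nearest image.
def nearest_images(q, images, eps=1e-9):
    d = [dist(q, s) for s in images]
    dmin = min(d)
    return [i for i, di in enumerate(d) if di <= dmin + eps]

sources = [(0.0, 0.0), (2.0, 0.0), (1.0, 2.0)]
assert nearest_images((0.2, 0.1), sources) == [0]       # inside one cell
assert nearest_images((1.0, 0.0), sources) == [0, 1]    # on a Voronoi edge
```

A real implementation would instead compute the Voronoi diagram of the n source images once and clip it to K_x.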
Recently Aronov and O'Rourke [AO92] proved that
Theorem 3.4 (Aronov and O'Rourke [AO92]) U(T_x) is exactly the restriction of the planar Voronoi diagram of the set U(x) of source images to within K_x or, equivalently, to within S_x. 2

3.2 Structure of the Star Unfolding
We now describe the combinatorial structure of S_x. A vertex of S_x is an image of x, of a corner of
P, or of an intersection of an edge of P with a cut. An edge of S_x is a maximal portion of an image
of a cut or an edge of P delimited by vertices of S_x. It is easy to see that S_x consists of Θ(n^2)
plates in the worst case, even though its boundary is formed by only 2n segments, the images of
the cuts. We define the combinatorial structure of S_x as the 1-skeleton of S_x, i.e., the graph whose
nodes and arcs are the vertices and edges of S_x, labeled in correspondence with the labels of S_x,
which are in turn derived from labels on P. The combinatorial structure of a star unfolding has
the following crucial property:
Lemma 3.5 Let x and y be two non-corner points lying in the same ridge-free region or on the
same edgelet. Then S_x and S_y have the same combinatorial structure.
Figure 10: Four star unfoldings: vertices, left to right, top to bottom. The kernel
is shaded darker in each figure.
Proof: Let f be the face containing xy in its interior. The case when xy is part of an edge is
similar.
As the shortest paths from any point z ∈ xy to the corners are pairwise disjoint except at z
(cf. Lemma 2.1), and z is confined to f, the combinatorial structure of S_z is uniquely determined
by (1) the circular order of the cuts around z, and (2) the sequence of edges and faces of P met by
each of the cuts. We will show that (1) and (2) are invariants of S_z as long as z does not cross a
ridge or an edge of P. First, the set of points of f for which some shortest path to a fixed corner
p traverses a fixed edge sequence is convex: it is simply the intersection of f with the appropriate
peel of the source unfolding with respect to p, implying invariance of (2).
Now suppose the circular order of the cuts around z is not the same for all z ∈ xy. The initial
portions of the cuts, as they emanate from any z, cannot coincide, as distinct cuts are disjoint
except at z. Hence there can be a change in this circular order only if one of the vectors pointing
along the initial portions of the cuts changes discontinuously at some intermediate point z_0 ∈ xy.
However, this can only happen if z_0 is a ridge point, a contradiction. □
This lemma holds under more general conditions. Namely, instead of requiring that xy be free
of ridge points, it is sufficient to assume that the number of distinct shortest paths connecting z to
any corner does not change as z varies along xy.
Lemma 3.6 Under the assumptions of Lemma 3.5, K_x is isometric to K_y, i.e., they are congruent
simple polygons.
Proof: K_x is determined by the order of the corners p_1, ..., p_n on ∂K_x and, for each i, by the
choice of the shortest path p_i p_{i+1}, if there are two or more such paths. The ordering is fixed once the
combinatorial structure of S_x is determined. The choice of the shortest path connecting p_i to p_{i+1}
is determined by the constraint that the triangle △p_i x_i p_{i+1} be contained in S_x. □
Let R be a ridge-free region. By the above lemma, S_x can be embedded in the plane in such a
way that the images of the corners of P are fixed for all x ∈ R, and the images of x in S_x move as
x varies in R ⊆ P. This guarantees that K_x is the same polygon for all x ∈ R; this is illustrated in Fig. 11. In
what follows, we are going to assume such an embedding of S_x, and use K_R to denote K_x for all
points x ∈ R. Similarly, define K_ε for an edgelet ε.
3.3 How Many Different Unfoldings Are There?
For the algorithms described in this paper, it will be important to bound the number of different
possible combinatorial structures of star unfoldings, as we vary the position of source point x, and
to efficiently compute these unfoldings (more precisely, compute their combinatorial structure plus
some metric description, parametrized by the exact position of the source), as the source moves on
the surface of the polytope. Two variants of this problem will be needed. In the first, we assume
that the source is placed on an edge of P , and in the second the source is placed anywhere on P .
In view of Lemma 3.5, it suffices to bound the number of edgelets and ridge-free regions.
Lemma 3.7 In the worst case, there are Θ(n^3) edgelets, and they can be computed in O(n^3 log n)
time.
Proof: Each edge can meet a ridge of the ridge tree of a corner at most once, since ridges are
shortest paths (recall that we assume that no ridge overlaps an edge; removal of this assumption
does not invalidate our argument, but only adds a number of technical complications). This gives
an upper bound of n × O(n) × O(n) on the number of edge-ridge intersections, and therefore on the
number of edgelets. An example in which Θ(n^3) edgelets are present is relatively easy to construct
by modifying the lower bound construction of Mount [Mou90].
To compute the edgelets, we construct ridge trees from every corner in n × O(n^2) = O(n^3) time by n
applications of the algorithm of Chen and Han [CH90]. The edgelets are now computed by sorting
the intersections of ridges along each edge. □
Figure 11: The star unfolding when the source x moves to y inside a ridge-free region. The unfolding
S_x is shown lightly shaded; S_y is shown dotted. Their common kernel K_x = K_y is the
central dark region.
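The last step of this proof is just a sort along each edge. A minimal sketch in Python, under the assumption that each ridge-edge intersection is given by its parameter t ∈ (0, 1) along the edge (the names here are illustrative, not from the paper):

```python
def edgelets(intersection_params):
    """Split an edge, parametrized by t in [0, 1], into edgelets:
    the open intervals between consecutive ridge-edge intersection
    points, sorted along the edge."""
    cuts = sorted({t for t in intersection_params if 0.0 < t < 1.0})
    endpoints = [0.0] + cuts + [1.0]
    # consecutive pairs of endpoints bound the edgelets
    return list(zip(endpoints, endpoints[1:]))
```

Sorting k intersections costs O(k log k) per edge; summed over all edges this matches the O(n^3 log n) bound of the lemma.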
Lemma 3.8 In the worst case, there are Θ(n^4) ridge-free regions on P. They can be computed in O(n^4 log n) time.
Proof: The overlay of n ridge trees, one from each corner of P, produces a subdivision of P in
which every region is bounded by at least three edges. Thus, by Euler's formula, the number
of regions in this subdivision is proportional to the number of its vertices, which we proceed to
estimate.
By Lemma 2.4, ridges are shortest paths, and therefore two of them intersect in at most two
points (cf. Corollary 2.2) or overlap. In the latter case no new vertex of the subdivision is created,
so we restrict our attention to the former. In particular, as there are n × O(n) = O(n^2) ridges, the
total number of their intersection points is O(n^4). Refining this partition further by adding the
edges of P does not affect the asymptotic complexity of the partition, as ridges intersect edges in
O(n^3) points altogether. This establishes the upper bound.
It is easily checked that in Mount's example of a polytope [Mou90] there are Ω(n^4) such
regions. Hence there are Θ(n^4) ridge-free regions on P
in the worst case.
The ridge-free regions can be computed by calculating the ridge tree for every corner and
overlaying the trees in each face of P. The first step takes O(n^3) time, while the second step
can be accomplished in time O((n^4 + r) log n), where r is the number of ridge-free
regions of P, using the line-sweep algorithm of Bentley and Ottmann [BO79]. If computing the
ridge-free regions is a bottleneck, the last step can be improved to O(n^4) by using a significantly
more complicated algorithm of Chazelle and Edelsbrunner [CE92]. □
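The vertex count driving this bound is a pairwise segment-intersection count. A brute-force O(m^2) sketch of that count (a stand-in for the Bentley-Ottmann sweep, which reports the k crossings of m segments in O((m + k) log m) time):

```python
def orientation(p, q, r):
    # Sign of the cross product (q - p) x (r - p).
    v = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return (v > 0) - (v < 0)

def properly_cross(s1, s2):
    # True iff the two segments cross at a single interior point.
    (p1, p2), (p3, p4) = s1, s2
    return (orientation(p1, p2, p3) * orientation(p1, p2, p4) < 0 and
            orientation(p3, p4, p1) * orientation(p3, p4, p2) < 0)

def count_crossings(segments):
    """Count properly crossing pairs among the given segments."""
    m = len(segments)
    return sum(properly_cross(segments[i], segments[j])
               for i in range(m) for j in range(i + 1, m))
```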
3.4 How Many Different Ridge Trees Are There?
In Section 3.1, we proved that the combinatorial structure of S x is the same for all points x in a
ridge-free region. As x moves in a ridge-free region, the ridge tree T x changes continuously, as a
subset of P . In this subsection, we prove an upper bound on the number of different combinatorial
structures of T x as the source point x varies over a ridge-free region or an edgelet. Apart from being
interesting in their own right, we need these results in the algorithms described in Sections 5-7.
Let R be a ridge-free region, and let x be a point in R. By Theorem 3.4, T_x is the Voronoi
diagram V_x of the source images x_1, ..., x_n clipped to lie within K_R. Since ridge vertices do not lie on ∂S_x, all
changes in T_x, as x varies in R, can be attributed to changes in V_x. Thus it suffices to count the
number of different combinatorial structures of the Voronoi diagrams V_x, x ∈ R.
For each x_i, define f_i(y) = |y − x_i|, where (y_1, y_2)
are the coordinates of a generic point y in the plane. Let f(y) = min_{1≤i≤n} f_i(y) denote
the lower envelope of the f_i's. Then V_x is the same as (the 1-skeleton of) the orthogonal projection
of the graph of f(y) onto the y_1y_2-plane.
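This correspondence can be checked pointwise: the index attaining the lower envelope at y identifies the Voronoi cell containing y. A small illustrative sketch (sites as coordinate pairs; the function name is an assumption, not from the paper):

```python
from math import hypot

def envelope_site(sites, y):
    """Index i minimizing f_i(y) = |y - x_i|, i.e. the distance function
    attaining the lower envelope f at y; the set of points sharing this
    answer is exactly the Voronoi cell of site i."""
    return min(range(len(sites)),
               key=lambda i: hypot(y[0] - sites[i][0], y[1] - sites[i][1]))
```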
We introduce an orthogonal coordinate system in R and let x have coordinates (s, t) in this
system. Then the positions of the x_i are linear functions of s and t of the form
  x_{i1} = a_i + s cos θ_i − t sin θ_i,   x_{i2} = b_i + s sin θ_i + t cos θ_i,   (1)
where (a_i, b_i) are the coordinates of x_i when x is at the origin of the (s, t) coordinate system in R, and
θ_i defines the orientation of the ith image of R in the plane.
We now regard f and the f_i's as 4-variate functions of s, t, y_1, y_2. Denote by M_R the projection
of the graph of f onto the (s, t, y_1, y_2)-hyperplane. It can be shown that the total number of different
combinatorial structures of V_x is bounded by the number of faces in M_R. Let
  g_i(s, t, y_1, y_2) = f_i(y)^2 − (s^2 + t^2 + y_1^2 + y_2^2).
Using (1) we obtain
  g_i = C_{i1} s + C_{i2} t + C_{i3} y_1 + C_{i4} y_2 + C_{i5} s y_1 + C_{i6} s y_2 + C_{i7} t y_1 + C_{i8} t y_2 + C_{i9},   (2)
where, for each i, the C_{ij}'s are constants that depend solely on a_i, b_i, and θ_i. Let g = min_{1≤i≤n} g_i
denote the lower envelope of the g_i's. Since g(s, t, y_1, y_2) and f(s, t, y_1, y_2) differ by a quantity independent of i,
the projection of the graph of g is the same as M_R. Let
  v_1 = s, v_2 = t, v_3 = y_1, v_4 = y_2, v_5 = s y_1, v_6 = s y_2, v_7 = t y_1, v_8 = t y_2,   (3)
and set
  ḡ_i(v_1, ..., v_8) = C_{i1} v_1 + C_{i2} v_2 + C_{i3} v_3 + C_{i4} v_4 + C_{i5} v_5 + C_{i6} v_6 + C_{i7} v_7 + C_{i8} v_8 + C_{i9}.
Then every face of the graph of g is the intersection of the lower envelope of the ḡ_i's with the surface
defined by the equations (3). Since each ḡ_i is an 8-variate linear function, by the Upper Bound
Theorem for convex polyhedra, the graph of their lower envelope has O(n^4) faces. Hence, the
number of faces in M_R is also O(n^4). Using the algorithm of Chazelle [C91], all the faces of this
lower envelope, and thus also those of M_R, can be computed in O(n^4) time. Putting everything
together, we conclude
Lemma 3.9 The total number of different combinatorial structures of ridge trees for source points
lying in a ridge-free region R is O(n^4). Moreover, the faces of M_R can be computed in time O(n^4).
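The linearization trick behind this lemma is easy to verify numerically: subtracting the i-independent term s^2 + t^2 + y_1^2 + y_2^2 from f_i^2 leaves only linear and bilinear terms, so it does not change which index attains the minimum. A sketch under the parametrization of equation (1) (the parameter names (a, b, theta) follow that equation; the function names are illustrative):

```python
from math import cos, sin

def image_position(a, b, theta, s, t):
    # Equation (1): the i-th source image moves rigidly with x = (s, t).
    return (a + s * cos(theta) - t * sin(theta),
            b + s * sin(theta) + t * cos(theta))

def g(a, b, theta, s, t, y):
    # g_i = f_i^2 - (s^2 + t^2 + y1^2 + y2^2); the subtracted term does
    # not depend on i, so argmin_i g_i = argmin_i f_i.
    xi = image_position(a, b, theta, s, t)
    f_sq = (y[0] - xi[0]) ** 2 + (y[1] - xi[1]) ** 2
    return f_sq - (s * s + t * t + y[0] ** 2 + y[1] ** 2)

def nearest_image(params, s, t, y):
    """Voronoi classification of y computed from the g_i's alone."""
    return min(range(len(params)), key=lambda i: g(*params[i], s, t, y))
```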
Remark. The only reason for assuming in the above analysis that x stays away from the boundary
of R was to ensure that the vertices of the Voronoi diagram avoid the boundary of KR . However,
it is easy to verify that when x is allowed to vary over the closure of R, Voronoi vertices never cross
the boundary of KR , but may touch it in limiting configurations. Thus the same analysis applies
in that case as well.
An immediate corollary of the above lemma and Lemma 3.8 is
Corollary 3.10 The total number of different combinatorial structures of ridge trees for a convex
polytope in IR^3 with n vertices is O(n^8).
If the source point moves along an edgelet ε rather than in a ridge-free region, we can obtain a
bound on the number of different combinatorial structures of ridge trees by setting t = 0 in the
above analysis. Proceeding in the same way as above, each g_i now becomes
  g_i(s, y_1, y_2) = C_{i1} s + C_{i3} y_1 + C_{i4} y_2 + C_{i5} s y_1 + C_{i6} s y_2 + C_{i9}.
We now define the subdivision M_ε as the projection of the graph of the lower envelope of the g_i's. Since each
ḡ_i is now a 5-variate linear function, by the Upper Bound Theorem, the number of faces in
M_ε is O(n^3), which implies that there are O(n^3) distinct combinatorial structures of ridge trees as
x varies along ε. Moreover, M_ε can be computed in time O(n^3) [C91]. Hence, we obtain
Lemma 3.11 The total number of distinct ridge trees as the source point moves in an edgelet is
O(n^3), and the subdivision M_ε can be computed in O(n^3) time.
Remarks. (i) None of Lemma 3.9, Corollary 3.10, or Lemma 3.11 is known to be tight.
(ii) In the above analysis of M_R (resp. M_ε), the only portion of the structure that is relevant
for our algorithms is the portion corresponding to source positions (s, t) ∈ R (resp. s ∈ ε).
We will have to "filter out" irrelevant features at a later stage in the
computation.
4 Edge Sequences Superset
In this section we describe an O(n^6) algorithm for constructing a superset of the shortest-path
edge sequences, which is both more efficient and conceptually simpler than previously suggested
procedures, and which produces a smaller set of sequences.
Observe that all shortest-path edge sequences are realized by pairs of points lying on edges
of P: any other shortest path can be contracted, without affecting its edge sequence, so that its
endpoints lie on edges of P. Let x be a generic point lying on an edgelet ε. As mentioned in
Section 3.1, the pasting tree Π_x contains all shortest-path edge sequences that emanate from x.
Moreover, by Lemma 3.5, Π_x is independent of the choice of x in ε. Therefore, the set of O(n^3) pasting
trees {Π_ε : ε is an edgelet}, each of size O(n^2), contains an implicit representation of a
set of O(n^6) sequences (O(n^5) of which are maximal in this set), which includes all shortest-path
edge sequences that emanate from generic points.
Algorithm 1: Sequence Trees
for each edge e of P do
  for each edgelet endpoint v ∈ e do
    Compute the shortest-path edge sequences Σ_v emanating from v; add them to Σ_e.
  for each edgelet ε ⊆ e do
    Choose a point x ∈ ε.
    Compute the pasting tree Π_x.
    for each maximal sequence σ ∈ Π_x do
      Add σ to Σ_e.
  for each sequence σ ∈ Σ_e do
    Traverse σ, augmenting T_e.
    Stop if σ visits the same edge twice.
  T_e is the sequence tree containing the shortest-path edge sequences
  emanating from e.
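The tree T_e grown by Algorithm 1 is a trie over edge labels, built one sequence at a time with the early-termination rule from the pseudocode. A minimal sketch (sequences given as lists of edge identifiers; nested dicts stand in for the tree):

```python
def build_sequence_tree(sequences):
    """Insert each edge sequence into a trie, truncating a sequence as
    soon as it visits the same edge twice, as in Algorithm 1."""
    root = {}
    for seq in sequences:
        node, seen = root, set()
        for edge in seq:
            if edge in seen:
                break          # sequence revisits an edge: stop augmenting
            seen.add(edge)
            node = node.setdefault(edge, {})
    return root
```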
Hence, we can compute a superset of the shortest-path edge sequences as follows. First, partition
the edges of P into O(n^3) edgelets in time O(n^3 log n), as described in Lemma 3.7. Second, compute
the shortest-path edge sequences from the endpoints of each edgelet, using Chen and Han's shortest-path
algorithm. Next, compute the star unfolding from a point in each edgelet, again using the shortest-path
algorithm. The total time spent in the last two steps is O(n^5). Finally, this representation of
edge sequences is transformed into O(n) sequence trees, one for each edge (cf. Section 2.2), in time
O(n^6) in a straightforward manner; see Algorithm 1 for the pseudocode. We thus obtain
Theorem 4.1 Given a convex polytope in IR^3 with n vertices, one can construct, in time O(n^6),
O(n) sequence trees that store a set of O(n^6) edge sequences, which includes all shortest-path edge
sequences of P. □
Remark. (i) Note that our algorithm uses nothing more complex than the algorithm of Chen
and Han for computing shortest paths from a fixed point, plus some sorting and tree traversals. It
achieves an improvement over previous algorithms mainly by reorganizing the computation around
the star unfolding.
(ii) The sequence-tree representation for just the shortest-path edge sequences is smaller by a
factor of n^2 than our estimate on the size of the set produced by Algorithm 1 (cf. Section 2.2), but
computing it efficiently seems difficult. In addition, it is not clear how far the actual output of our
algorithm is from the set of all shortest-path edge sequences. We have a sketch of a construction for
a class of polytopes that forces our algorithm to produce an asymptotically maximal number of
sequences.
5 Exact Set of Shortest-Path Edge Sequences
In this section, we present an O(n^3 β(n) log n) algorithm for computing the exact set of maximal
shortest-path edge sequences emanating from an edgelet. Here β(·) is an extremely slowly growing
function, asymptotically smaller than log n. Running this algorithm for all edgelets of P, the exact
set of maximal shortest-path edge sequences can be computed in time O(n^6 β(n) log n), which is a
significant improvement over Schevon and O'Rourke's O(n^9 log n) algorithm [SO89].
Let ε be an edgelet. We are interested in computing the maximal shortest-path edge sequences
(corresponding to paths) emanating from x, for all x ∈ ε. For each fixed x, the shortest paths
originating from x can be subdivided according to their initial direction. If a path leaves x
between xp_{i−1} and xp_i, it corresponds to a segment in S_x emanating from x_i, the ith image of x.
The area swept by all such segments, for a fixed x and i, is exactly the ith peel. Let us concentrate
on the portion P_{x,i} of the ith peel that lies in K_ε. One measure of how far the 'influence' of x_i
extends into K_ε, as x moves along ε, is the union C_i = ⋃_{x∈ε} P_{x,i}. C_i is the union of a family of
convex sets P_{x,i} sharing p_i p_{i+1} as a boundary edge; therefore C_i is star-shaped with respect to any
point z ∈ p_i p_{i+1}. Let γ_i denote the portion of ∂C_i not contained in p_i p_{i+1}. It is easily checked that the restriction of a plate of S_x to
K_ε does not vary with x ∈ ε. It therefore makes sense to talk about the nodes of Π_x visited
by γ_i. We say that γ_i visits a node v of Π_x if γ_i intersects the plate corresponding to v.
Observe that every maximal (over all x in ε) edge sequence corresponding to a shortest path
emanating from x between xp_{i−1} and xp_i is realized by some shortest path that connects x_i to some
point y on γ_i, in fact to a ridge vertex of P_{x,i}. This sequence is determined solely by the plate
of Π_x that contains y. Hence, it is sufficient to determine the furthest that γ_i gets from x_i in Π_x.
More formally, consider the minimal subtree Π_{x,i} of Π_x rooted at (the plate incident to) x_i and
containing all nodes of Π_x visited by γ_i. The paths from the root of Π_{x,i} to its leaves correspond to the
desired maximal sequences. Repeating this procedure for all images x_i, we can collect all maximal
sequences corresponding to shortest paths from points on ε.
The above idea can be made algorithmic, but transforming it directly into an algorithm requires
computing the sets C_i (i.e., taking the union of a continuous family of convex sets P_{x,i}, each
obtained by deleting △p_i x_i p_{i+1} from the ith peel), which is rather intricate. We therefore use a
shortcut, replacing γ_i by an easier-to-compute curve γ'_i. Since the desired maximal sequences are
necessarily realized by shortest paths from x_i to one of the ridge vertices lying on ∂P_{x,i}, we only
need to consider the portion of γ_i that contains a vertex of T_x, for some x ∈ ε. We now show how
to compute a curve γ'_i that contains this portion of γ_i.
A generic ridge vertex v is incident to three open peels. If v has degree more than
three, and exists at more than just a discrete set of positions of x ∈ ε, we arbitrarily pick a triple
of incident open peels to define it. As x moves along ε, the vertex traces an algebraic curve
in K_ε. Let a lifetime of a ridge vertex v, defined by its triple of peels, be a maximal connected interval
of positions x ∈ ε throughout which v is a vertex of T_x. Let Γ_i be the set of curves traced out
by ridge vertices that appear on the boundary of P_{x,i} during their lifetimes; set n_i = |Γ_i|. It can
be verified that the arcs in Γ_i are the projections of those edges of the subdivision M_ε, defined in
Section 3.4, at which g_i appears on the lower envelope of the g_i's. (As we mentioned in Section 3.4,
M_ε may contain "irrelevant" features. In particular, we must truncate each aforementioned arc
so that it corresponds to positions of the source on ε. Secondly, we must verify that the Voronoi
diagram vertex corresponding to the arc indeed yields a ridge vertex. It is sufficient to check, for
a single point of the curve traced out by the vertex as x ranges over ε, that it lies inside K_ε, as
a ridge vertex cannot leave K_ε. This is easily accomplished by one point-location query per arc.)
Therefore, by Lemma 3.11, Σ_i n_i = O(n^3), where the summation is taken over all peels of S_x, and all
the Γ_i can be computed in time O(n^3).
Let z be a point on p_i p_{i+1}. If we introduce polar coordinates with z as the origin, each arc
of Γ_i can be regarded as the graph of a univariate function r = γ(θ) (splitting an arc into a constant number of θ-monotone
pieces if it is not θ-monotone; this is possible since each arc is a portion of an algebraic curve of
small degree). Let γ''_i be the graph of the upper envelope of the arcs in Γ_i. Since γ_i is a portion of
the boundary of a set star-shaped with respect to z, the portion of γ_i that contains a vertex of
the ridge tree T_x, for some x ∈ ε, lies on γ''_i. However, γ''_i is not necessarily a connected arc.
Suppose u_1, ..., u_{2m} are the endpoints of the connected components of γ''_i, listed in
counterclockwise direction around z; we connect u_{2i} to u_{2i+1} by a segment, for each i. The
resulting curve is the desired curve γ'_i; it is a piecewise-algebraic θ-monotone arc. Since each arc
is algebraic of bounded degree, γ''_i, and thus γ'_i, has O(n_i β(n_i)) arcs, where β(n) = λ_s(n)/n, λ_s(n)
denotes the maximum length of an (n, s) Davenport-Schinzel sequence, s
is a constant depending on the maximum degree of the arcs in Γ_i, and β grows extremely slowly, in
terms of the inverse Ackermann function α(·). Using a divide-and-conquer approach, γ''_i can be computed in time O(n_i β(n_i) log n_i)
[HS86].
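For intuition, here is the envelope computation in the simplest setting of full lines, where the upper envelope is a convex chain and a single Graham-scan-style pass suffices; for the bounded algebraic arcs of Γ_i the divide-and-conquer merge of [HS86] plays the same role, with Davenport-Schinzel bounds controlling the envelope size. This is a simplified sketch, not the paper's procedure:

```python
def upper_envelope(lines):
    """Upper envelope of lines y = m*x + b, as the left-to-right list of
    lines that appear on it.  Sort by slope, keep the highest intercept
    per slope, then discard lines that never reach the top."""
    best = {}
    for m, b in lines:
        best[m] = max(b, best.get(m, b))
    env = []
    for m, b in sorted(best.items()):
        while len(env) >= 2:
            (m1, b1), (m2, b2) = env[-2], env[-1]
            # drop env[-1] if the new line overtakes env[-2] no later
            # than env[-1] does
            if (b1 - b) * (m2 - m1) <= (b1 - b2) * (m - m1):
                env.pop()
            else:
                break
        env.append((m, b))
    return env
```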
We now trace γ'_i through the kernel. Note that γ'_i lies in the portion K_ε[z] of K_ε visible from z.
Compute this portion in linear time [EA81], and subdivide K_ε[z] into a linear number of triangles,
all incident to z. Now partition γ'_i into connected portions, each fully contained in one
of these triangles. This can be done in time proportional to the number of arcs constituting γ'_i
plus the number of triangles involved, and produces at most O(n) extra arcs, as γ'_i is θ-monotone
and thus crosses each segment separating consecutive triangles in at most one point. Since each
triangle △ is fully contained in K_ε, and thus encloses no images of a vertex of P, the set of plates
of S_x met by △ corresponds to a subtree Π_△ of Π_x of linear size, with at most one vertex of degree
3 and all remaining vertices of degree at most 2. Hence, Π_△ can be covered by two simple paths
Π¹_△ and Π²_△, and they can be computed in linear time. For each subarc of γ'_i contained in △, we determine the
furthest node that it reaches in Π¹_△ and Π²_△ by binary search. An intersection between an arc and a plate
of K_ε can be detected in O(1) time, so the binary search requires only O(log n) time. The total
time spent is thus O(n_i β(n) log n) over all triangles of K_ε[z], and O(n^3 β(n) log n) over all i.
The above processing is repeated for each of the O(n^3) edgelets ε. This completes the description
of the algorithm; it is summarized in Algorithm 2.
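The binary search just described needs only a constant-time arc-plate intersection test, plus the fact that the plates of a path reached by an arc form a contiguous prefix. A sketch with an abstract intersects predicate (the prefix assumption and all names here are illustrative):

```python
def furthest_reached(plates, arc, intersects):
    """Largest index i such that intersects(arc, plates[i]), assuming the
    reached plates form a prefix of the path; O(log n) predicate calls.
    Returns -1 if the arc reaches no plate."""
    lo, hi = -1, len(plates) - 1
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if intersects(arc, plates[mid]):
            lo = mid
        else:
            hi = mid - 1
    return lo
```

For instance, with plates modeled as intervals on a line, the predicate is a constant-time overlap test.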
Algorithm 2: Exact Edge Sequences
Partition the edges of P into O(n^3) edgelets.
for each edgelet ε do
  Compute the set Γ of projections of the edges of M_ε.
  Eliminate irrelevant features from Γ.
  for each image x_i do
    Γ_i ← the arcs of Γ at which g_i appears on the lower envelope.
    Compute the upper envelope γ''_i of Γ_i.
    Convert γ''_i into a connected arc γ'_i.
    Compute K_ε[z] and triangulate it.
    for each △ in the triangulation do
      Compute γ_△ = γ'_i ∩ △.
      Compute the subtree Π_△ and its covering paths Π¹_△, Π²_△.
      Find the subtrees of Π¹_△ and Π²_△ visited by γ_△.
Theorem 5.1 The exact set of all shortest-path edge sequences on the surface of a 3-polytope on
n vertices can be computed in O(n^6 β(n) log n) time, where β(n) = o(log n) is an extremely slowly
growing function.
6 Geodesic Diameter
In this section we present an O(n^8 log n) time algorithm to determine the geodesic diameter of P. As
mentioned in the introduction, this question was first investigated by O'Rourke and Schevon [OS89],
who presented an O(n^14 log n) time algorithm for computing it. Their algorithm relies on the
following observation:
Lemma 6.1 (O'Rourke and Schevon [OS89]) If a pair of points x, y ∈ P realizes the diameter
of P, then either x or y is a corner of P, or there are at least five distinct shortest paths between
x and y. □
Lemma 6.1 suggests the following strategy for locating all diametral pairs. We first dispose of
the possibility that either x or y is a corner in n × O(n^2) = O(n^3) time, just as in [OS89]. Next,
we fix a ridge-free region R and let M_R be the subdivision defined in Section 3.4. We need to
compute all pairs of points x ∈ cl(R) and y ∈ K_x such that there are at least five distinct shortest
paths between x and y. By a result of Schevon [Sch89], such a pair x, y can be a
diametral pair only if it is the only pair, in a sufficiently small neighborhood of x and y, with at
least five distinct shortest paths between them. Such a pair of points corresponds to a vertex of
M_R. Hence, we use the following approach.
We first compute, in O(n^4) time, all ridge-free regions of P (cf. Lemma 3.8). Next, for each
ridge-free region R, we compute K_R, the vertices of M_R, and f(v) for every vertex v of M_R (recall that
f(v) is the shortest distance from v to any source image; cf. Section 3.4). Next, for each vertex
v = (s, t, y_1, y_2) of M_R, we check whether (s, t) lies in the closure of R and whether (y_1, y_2) lies in K_R. If
the answer to both of these questions is 'yes', we add v to the list of candidates for diametral pairs.
(This step is exactly the elimination of "irrelevant features" mentioned at the end of Section 3.4.
Once the two conditions are verified, we know that f(v) is exactly d(x, y).)
Finally, among all diametral candidate pairs, we choose a pair that has the largest geodesic
distance. See Algorithm 3 for the pseudocode.
For each ridge-free region R, K_R can be computed in time O(n^2) and preprocessed for planar
point location in additional O(n log n) time using the algorithm of Sarnak and Tarjan [ST86]. By
Lemma 3.9, the vertices of M_R and the values f(v), for all vertices v of M_R, can be computed in time O(n^4). We
spend O(log n) time for point location at each vertex of M_R, so the total time spent is O(n^8 log n).
Algorithm 3: Geodesic Diameter
for each corner c of P do
  Construct the ridge tree T_c with respect to c.
  for each vertex v of T_c do
    Add d(c, v) to the list of diameter candidates.
Compute the ridge-free regions.
for each ridge-free region R do
  Compute M_R and f(v) for all vertices v ∈ M_R.
  Compute K_R.
  Preprocess K_R for point-location queries.
  Preprocess cl(R) for point-location queries.
  for each vertex v = (s, t, y_1, y_2) of M_R do
    if (s, t) ∈ cl(R) and (y_1, y_2) ∈ K_R then
      Add (v, f(v)) to the list of diameter candidates.
Find a diametral candidate pair with the maximum geodesic distance.
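The candidate bookkeeping of Algorithm 3 reduces to a filter-and-max; a schematic sketch in which the two membership predicates stand in for the point-location queries on cl(R) and K_R (all names illustrative):

```python
def diameter_from_candidates(corner_candidates, vertex_candidates,
                             in_closure, in_kernel):
    """corner_candidates: distances d(c, v) from the corner phase.
    vertex_candidates: tuples (s, t, y1, y2, dist) for vertices of M_R;
    a vertex survives only if (s, t) lies in cl(R) and (y1, y2) in K_R.
    Returns the largest surviving distance."""
    best = max(corner_candidates, default=float("-inf"))
    for s, t, y1, y2, dist in vertex_candidates:
        if in_closure((s, t)) and in_kernel((y1, y2)):
            best = max(best, dist)
    return best
```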
Theorem 6.2 The geodesic diameter of a convex polytope in IR^3 with a total of n vertices can be
computed in time O(n^8 log n).
7 Shortest-Path Queries
In this section we discuss the preprocessing needed to support queries of the form: "Given x, y ∈ P,
determine d(x, y)." We assume that each point is given together with the face (or the edge) of P
containing it. Two variants of the problem are considered: (1) no assumption is made about x and
y, and (2) x is assumed to lie on an edge of P.
Our data structure is based on the following observations. Let x, y be two query points. Suppose
x is a generic point lying in a ridge-free region R, and let ȳ be an image of y in S_x.
If ȳ ∈ K_R, then d(x, y) = f(ȳ) = min_i |x_i ȳ|.
On the other hand, if ȳ ∉ K_x, then it lies in one of the triangles △p_i x_i p_{i+1}, and d(x, y) = |x_i ȳ|, where |x_i ȳ|
denotes the Euclidean length of a line segment in (the planar embedding of) △p_i x_i p_{i+1}.
We thus need a data structure that, given a point y, can determine whether ȳ ∈ K_R.
Let λ_R denote the preimage of ∂K_R on P, so that U(λ_R) = ∂K_R; λ_R partitions each face f of P into
convex regions. By the nature of the way f is partitioned by λ_R, the regions in f can be linearly
ordered (i.e., their adjacency graph is a chain), so that determining the region of f containing a
given point y ∈ f can be done in logarithmic time by binary search. Let Δ ⊆ f be such a region;
then either U(Δ) ⊆ K_R or U(Δ) ∩ int K_R = ∅. In fact, one can prove a more interesting property of
Δ.
Lemma 7.1 Let R be a ridge-free region or an edgelet, let f be a face of P, and let Δ be a connected
component of f \ λ_R (the preimage of ∂K_R) whose image is not contained in K_R. Then the sequence of edges traversed by
the shortest path π(x, y) is independent of the choice of x ∈ R and y ∈ Δ.
Proof: For the sake of a contradiction, suppose there are two points y', y'' ∈ Δ such that the
sequences of edges traversed by π(x, y') and π(x, y'') are distinct. Then there must exist a point
y ∈ y'y'' with two shortest paths to x; to obtain such a point, move y from one end of y'y'' to
the other and observe that the shortest path from x to y changes continuously and maintains the
set of edges of P that it meets, except at points y with more than one shortest path to x. Thus
y ∈ T_x. However, the segment y'y'' lies in Δ, as Δ is convex, implying that U(y) ∈ U(T_x) ⊆ K_R,
which contradicts the assumption on Δ.
Similarly, if x', x'' ∈ R are such that the paths connecting these two points to y ∈ Δ traverse
different edge sequences, there must exist x ∈ x'x'' that is connected to y by two shortest paths,
again forcing y onto T_x and yielding a contradiction. The lemma follows easily. □
This lemma suggests the following approach to computing d(x, y) when U(y) is not contained in
K_R. For each connected component Δ of f \ λ_R whose image is not contained in K_R, one can
precompute the edge sequence for shortest paths from a point in R to a point in Δ. This in
turn determines the transformation of coordinates from the f-based coordinates to the coordinates
usable in the planar unfolding of S_x; thus we may now compute an image ȳ of y and the peel
of S_x that contains ȳ under this unfolding map, which, as noted above, immediately yields d(x, y).
Hence, we can preprocess P in time O(n^2 log n) into a data structure of size O(n^2), so that one can
determine in O(log n) time whether U(y) ∈ K_x and, if U(y) ∉ K_x, also report d(x, y) in
additional constant time.
Next assume that U(y) ∈ K_R. Note that the data structure just described can be used to
compute the coordinates of U(y) even if U(y) ∈ K_R. Now one has to compute
  d(x, y) = f(ȳ) = min_{1≤i≤n} f_i(ȳ),
where the f_i are the same as in Section 3.4. Let H be the set of hyperplanes in IR^9 corresponding to the
graphs of the ḡ_i's, i.e., H = {v_9 = ḡ_i(v_1, ..., v_8) : 1 ≤ i ≤ n}.
Then computing the value of g(s, t, y_1, y_2) is the same as determining the first hyperplane of H
intersected by the vertical ray emanating from the point (v_1, ..., v_8, −∞), with the v_j given by (3), in the
positive v_9-direction. Agarwal and Matoušek [AM92] have described a data structure that, given a
set G of n hyperplanes in IR^d and a parameter n ≤ s ≤ n^⌊d/2⌋, can preprocess G, in time O(s^{1+δ}),
into a data structure of size O(s^{1+δ}), so that the first hyperplane of G intersected by a vertical ray
emanating from a query point can be computed in time O((n / s^{1/⌊d/2⌋}) log^2 n). Since d = 9
in our case, we obtain a data structure of size O(s^{1+δ}) so that a query can be answered in time
O((n / s^{1/4}) log^2 n).
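The query that the [AM92] structure accelerates is simple to state; a naive O(n)-time version scans the hyperplanes directly (shooting from v_d = −∞ reduces it to a lower-envelope evaluation). The encoding below, (c, c0) for the hyperplane v_d = c · v + c0, is an illustrative assumption:

```python
def first_hyperplane_hit(hyperplanes, v, vd0=float("-inf")):
    """First hyperplane met by the vertical ray shot from (v, vd0) in the
    positive v_d-direction: the minimum height c . v + c0 that is >= vd0.
    Returns (index, height), or None if the ray misses every hyperplane."""
    best = None
    for i, (c, c0) in enumerate(hyperplanes):
        h = sum(ci * vi for ci, vi in zip(c, v)) + c0
        if h >= vd0 and (best is None or h < best[1]):
            best = (i, h)
    return best
```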
As described in Lemma 3.8, one can partition all faces of P into ridge-free regions in time O(n^4).
For each ridge-free region, we construct the above data structures. Finally, if x is not a generic
point then, as mentioned in the remark following Lemma 3.9, we can use the data structures of any
of the ridge-free regions whose boundaries contain x. It is easy to see by a continuity argument
that all shortest paths from such a point are encoded equally well in the data structures of all the
ridge-free regions touching x.
In summary, for a pair of points x, y ∈ P, d(x, y) is computed in the following three steps. We
assume that we are given the faces f_x, f_y containing x, y, respectively. First find, in O(log n) time,
the ridge-free region R of f_x whose closure contains x. Next find, in O(log n) time, the region Δ of
f_y \ λ_R that contains y. If U(Δ) does not lie in K_R, compute d(x, y) using the information stored at Δ.
(The data structure required for handling this case has size Θ(n^2) per ridge-free region in the worst
case. Hence choosing a smaller s does not produce asymptotic space savings while reducing query
time.) Otherwise, compute d(x, y) in time O((n / s^{1/4}) log^2 n) using the vertical ray-shooting structure.
Hence, we can conclude
Theorem 7.2 Given a polytope P in IR^3 with n vertices and a parameter n^2 ≤ s ≤ n^4, one can
construct, in time O(n^4 s^{1+δ}) for any δ > 0, a data structure of size O(n^4 s^{1+δ}), so that d(x, y) for
any two points x, y ∈ P can be computed in time O((n / s^{1/4}) log^2 n).
If x always lies on an edge, then H is a set of hyperplanes in IR^6, so the query time of the
vertical ray-shooting data structure is now O((n / s^{1/3}) log^2 n) for n ≤ s ≤ n^3. Moreover, we have to
construct only O(n^3) different data structures, one for each edgelet, so we can conclude
Theorem 7.3 Given a polytope P in IR^3 with n vertices and a parameter n^2 ≤ s ≤ n^3, one can
construct, in time O(n^3 s^{1+δ}) for any δ > 0, a data structure of size O(n^3 s^{1+δ}), so that for any two
points x, y ∈ P such that x lies on an edge of P one can compute d(x, y) in time O((n / s^{1/3}) log^2 n).
Remark. The performance can be slightly improved by employing the algorithm of Matoušek and
Schwarzkopf [MS93].
8 Discussion and Open Problems
We have shown that use of the star unfolding of a polytope leads to substantial improvements
in the time complexity of three problems related to shortest paths on the surface of a convex
polytope: finding edge sequences, computing the geodesic diameter, and distance queries. Moreover,
the algorithms are not only theoretical improvements, but also, we believe, significant conceptual
simplifications. This demonstrates the utility of the star unfolding.
We conclude by mentioning some open problems:
1. Can one obtain a better upper bound on the number of different combinatorial structures
of ridge trees by using a more global argument? Such an improvement will yield a similar
improvement in the time complexities of diameter and exact shortest path edge sequences
algorithms.
2. Can one answer a shortest-path query faster if both x and y lie on some edge of
P? This special case is important for planning paths among convex polyhedra (see Sharir
[Sha87]).
--R
Star unfolding of a polytope with applications.
Konvexe Polyeder.
Ray shooting and parametric search.
Nonoverlap of the star unfolding.
Sharp upper and lower bounds on the length of general Davenport-Schinzel sequences
Intrinsic Geometry of Surfaces.
Algorithms for reporting and counting geometric intersections.
New lower bound techniques for robot motion planning problems.
An optimal convex hull algorithm for point sets in any fixed dimension.
An optimal algorithm for intersecting line segments in the plane.
Shortest paths on a polyhedron.
Storing shortest paths for a polyhedron.
Approximate algorithms for shortest path motion planning.
A linear algorithm for computing the visibility polygon from a point.
The 2nd Scientific American Book of Mathematical Puzzles and Diversions.
Nonlinearity of Davenport-Schinzel sequences and of generalized path compression schemes
Finding all shortest path edge sequences on a convex polyhedron.
On conjugate and cut loci.
On finding shortest paths on convex polyhedra.
Shortest paths among obstacles in the plane.
The discrete geodesic problem.
The number of shortest paths on the surface of a polyhedron.
Computing the geodesic diameter of a 3-polytope
An algorithm for shortest paths motion in three dimensions
Shortest paths along a convex polyhedron.
Shortest paths in Euclidean space with polyhedral obstacles
Algorithms for geodesics on polytopes.
A convex hull algorithm optimal for point sets in even dimensions.
On shortest paths amidst convex polyhedra.
The number of maximal edge sequences on a convex polytope.
An algorithm for finding edge sequences on a polytope.
On shortest paths in polyhedral spaces.
Planar point location using persistent search trees.
Constructing the visibility graph for n line segments in O(n^2) time.
--TR
--CTR
Sariel Har-Peled, Approximate shortest paths and geodesic diameters on convex polytopes in three dimensions, Proceedings of the thirteenth annual symposium on Computational geometry, p.359-365, June 04-06, 1997, Nice, France
Yi-Jen Chiang , Joseph S. B. Mitchell, Two-point Euclidean shortest path queries in the plane, Proceedings of the tenth annual ACM-SIAM symposium on Discrete algorithms, p.215-224, January 17-19, 1999, Baltimore, Maryland, United States
Marshall Bern , Erik D. Demaine , David Eppstein , Eric Kuo , Andrea Mantler , Jack Snoeyink, Ununfoldable polyhedra with convex faces, Computational Geometry: Theory and Applications, v.24 n.2, p.51-62, February
Mark Lanthier , Anil Maheshwari , Jrg-Rdiger Sack, Approximating weighted shortest paths on polyhedral surfaces, Proceedings of the thirteenth annual symposium on Computational geometry, p.485-486, June 04-06, 1997, Nice, France
Mark Lanthier , Anil Maheshwari , Jrg-Rdiger Sack, Approximating weighted shortest paths on polyhedral surfaces, Proceedings of the thirteenth annual symposium on Computational geometry, p.274-283, June 04-06, 1997, Nice, France
Demaine , Martin Demaine , Anna Lubiw , Joseph O'Rourke , Irena Pashchenko, Metamorphosis of the cube, Proceedings of the fifteenth annual symposium on Computational geometry, p.409-410, June 13-16, 1999, Miami Beach, Florida, United States | geodesics;shortest paths;star unfolding;convex polytopes |
270934 | Scalable Parallel Implementations of List Ranking on Fine-Grained Machines. | Abstract: We present analytical and experimental results for fine-grained list ranking algorithms. We compare the scalability of two representative algorithms on random lists, then address the question of how the locality properties of image edge lists can be used to improve the performance of this highly data-dependent operation. Starting with Wyllie's algorithm and Anderson and Miller's randomized algorithm as bases, we use the spatial locality of edge links to derive scalable algorithms designed to exploit the characteristics of image edges. Tested on actual and synthetic edge data, this approach achieves significant speedup on the MasPar MP-1 and MP-2, compared to the standard list ranking algorithms. The modified algorithms exhibit good scalability and are robust across a wide variety of image types. We also show that load balancing on fine grained machines performs well only for large problem to machine size ratios. | Introduction
List ranking is a fundamental operation in many algorithms for graph theory and computer vision
problems. Moreover, it is representative of a large class of fine grained data dependent algorithms.
Given a linked list of n cells, list ranking determines the distance of each cell from the head of
the list. On a sequential machine, this problem can be solved in O(n) time by simply traversing
the list once. However, it is much more difficult to perform list ranking on parallel machines
due to its irregular and data dependent communication patterns. The problem of list ranking of
random lists has been studied extensively on PRAM models and several clever techniques have
been developed to implement these algorithms on existing parallel machines [2, 3, 10, 11]. In this
paper, we study the scalability of these techniques on fine-grained machines. We then present
efficient algorithms to perform list ranking on edge pixels in images. Performance results based
on implementations on MasPar machines are discussed.
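For reference, the sequential solution is a single O(n) traversal. The following minimal Python sketch (the representation and all names are ours, not the paper's) ranks each cell by its distance from the head, with cells linked by successor pointers toward the head:

```python
def rank_sequential(succ):
    """O(n) list ranking by one traversal.

    succ[i] is the next cell toward the head; the head's successor is None.
    The tail (the cell nothing points to) is found first, then one walk
    assigns ranks counting down from n - 1 at the tail to 0 at the head.
    """
    tail = (set(succ) - {s for s in succ.values() if s is not None}).pop()
    rank, c, r = {}, tail, len(succ) - 1
    while c is not None:
        rank[c] = r
        c, r = succ[c], r - 1
    return rank
```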
Most of the algorithms proposed in the literature are either simple but not work-efficient [2,
10, 11] or are work-efficient but employ complex data structures and have large constant factors
associated with them [1, 3, 11, 12]. In order to study the performance and scalability of these
algorithms in an actual application scenario and on existing parallel machines, we have chosen
two representative algorithms, Wyllie's algorithm and Anderson & Miller's randomized algorithm
[11]. We assume that list ranking is an intermediate step in a parallel task. Therefore, in the
general case, a linked list is likely to spread over the entire machine in a random fashion. We then
study the performance of these algorithms in applications such as computer vision and image
processing where locality of linked lists is present due to the neighborhood connectivity of edge
pixels. For this purpose, we present a modified approach that takes advantage of the locality and
connectivity properties. A similar technique has been described in [10] for arbitrary linked lists.
In Reid-Miller's work, the assignment of list cells to processors is determined by the algorithm,
whereas, for the arbitrary case, we assume pre-assigned random cell distribution. Our approach
was derived independently and was motivated by the contrast between the random list case and
the characteristics of edges in images.
We show that in the case of random lists where cells are randomly pre-assigned to processors,
the Randomized Algorithm runs faster than Wyllie's Algorithm on MasPar machines when lists
are relatively long. However, Wyllie's Algorithm performs better for short lists. This agrees
with the results reported on other machines [2, 10]. This also implies that in order to achieve
scalability, a poly-algorithmic approach is needed. That is, for long lists use the Randomized Algorithm and, when the sizes of lists are reduced to a certain length, use Wyllie's Algorithm.
This approach has also been used to design theoretically processor-time optimal solutions. We
show, however, that for list ranking of edge pixels of images on fine-grained SIMD machines,
the standard Wyllie's Algorithm and Randomized Algorithm do not take good advantage of
the image edge characteristics. A modified technique described in Section 4 runs about two
to ten times faster (depending on the image size) than the standard Wyllie's and Randomized
algorithms on a 16K processor MasPar MP-1. Moreover, whereas the standard algorithms do
not scale well on image edge lists, the modified algorithms exhibit good scalability with respect
to increases in both image size and number of processors. We study the performance of the
proposed algorithms on images with varying edge characteristics.
In the remainder of Section 1, we briefly describe the architecture of the MasPar machines,
and outline the notion of scalability used in our work. Section 2 presents an overview of parallel
algorithms for list ranking. The performance results of standard Wyllie's and Randomized algorithms
are discussed in Section 3 and their scalability behavior is analyzed. Efficient parallel
algorithms for performing list ranking on image edge lists are presented in Section 4. Section 5
contains implementation results and discusses scalability of the modified parallel list ranking
algorithms on image edge lists.
1.1 Fine-grained SIMD Machines
The parallel algorithms described in this paper are implemented on MasPar MP-1 and MP-2
fine-grained SIMD machines. Fine-grained machines, in general, are characterized by a large
number of processors with a fairly simple Arithmetic Logic Unit (ALU) in each processor. The
MasPar machines are massively parallel SIMD machines. A MasPar MP-series system consists of
a high performance Unix Workstation as a Front End and a Data Parallel Unit (DPU) consisting
of 1K to 16K processing elements (PEs), each with 16 Kbytes to 1-Mbyte of memory. All PEs
execute the instructions broadcast by an Array Control Unit (ACU) in lock step. PEs have
indirect addressing capability and can be selectively disabled. The clock rate for both MP-1 and
MP-2 machines is 12.5 MHz. Processors in the MP-2 employ a 32-bit ALU compared with a
4-bit ALU in MP-1 processors.
Each PE is connected to its eight neighbors via the Xnet for local communication. Besides
the Xnet, these machines have a global router network that provides direct point-to-point global
communication. The router is implemented by a three-stage circuit switched network. It provides
point-to-point communication between any two PEs with constant latency, regardless of the
distance between them. A third network, called the global or-tree, is used to move data from
individual processors to the ACU. This network can be used to perform global operations such
as global maximum, prefix sum, global OR, etc. across the data in the entire array.
For our experiments, we have used the MasPar MP-1 and MP-2 as being representative of fine-grained
machines. The use of the router network, which provides point-to-point communication
between any two processors with constant latency, ensures that our performance results are
independent of any specific network topology. Also, we have run our experiments on both
the MP-1 and MP-2 to show the effects of processing power of individual processors on the
performance of the algorithms. We have used an extended version of sequential ANSI C, the
MasPar Programming Language (MPL), to keep our implementations free of machine-dependent
software features.
1.2 Scalability of Parallel Algorithms
We analyze the scalability of our algorithms and implementations using several architecture
and algorithm parameters. We study the performance by varying machine and problem size,
by varying the characteristics of the input image, and by varying the processor speed. Several
notions of scalability exist [5]. In our analyses we define scalability as follows: Consider an
algorithm that runs in T (n; p) time on a p processor architecture and the input problem size
is n. The algorithm is considered scalable on the architecture if T (n; p) increases linearly with
an increase in the problem size, or decreases linearly with an increasing number of processors
(machine size) [7, 8]. In our experiments, we use this definition intuitively to study the scalability of the algorithms and implementations.
It is likely that no single algorithm is scalable over the entire range of machine and problem
sizes. One of the important factors limiting the range of scalability is the sequential component
of the parallel algorithm. We identify the regions of scalability for different algorithms presented
in this paper and compare the analytical results with the experimental data.
2 Parallel Algorithms for List Ranking
In our implementations, we use Wyllie's parallel algorithm and Anderson & Miller's randomized
algorithm for list ranking [11]. In this section, we outline these algorithms for the general case of
random lists. Compared with other parallel algorithms for list ranking in the literature [11, 3, 12],
we have chosen these algorithms because of their simplicity, ease of implementation, and small
constant factors. In particular, the deterministic algorithms based on ruling sets and graph
coloring [3] are not easily amenable to implementation on existing parallel machines.
2.1 Wyllie's Pointer Jumping Algorithm
Let L = c_1, c_2, ..., c_n be a linked list of n cells such that succ[c_i] = c_{i+1} and succ[c_n] = nil; then c_n is the head of the list. The first element in the list, which has no predecessor, is referred to as the tail of the list. The list ranking problem deals with finding the rank of each cell c_i with respect to the head of the list.
Wyllie's Algorithm uses pointer jumping or dereferencing to find the rank of a cell. In pointer jumping, the successor of a cell c_i is modified to be its successor's successor. That is, given succ[c_i] = c_{i+1}, one iteration of pointer jumping reassigns the successor of c_i to be c_{i+2}. Each cell c_i maintains a value rank[i], which is the distance of the cell c_i from its current successor succ[i]. Intuitively, after each round a linked list is divided into two linked lists of half the length (see Fig. 1 a).
After log(n) iterations, all the cells point to nil and rank[i] contains the exact rank of cell c_i. On p processors, each processor is assigned n/p cells at random. Although this algorithm is simple and has a running time of O((n log n)/p) on a p-processor machine, it is not work-efficient compared with the serial algorithm, which takes O(n) time. In the next section we describe a work-efficient randomized algorithm.
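The pointer-jumping rounds can be simulated sequentially; the Python sketch below (our own names, not the paper's MPL code; synchronous rounds are emulated by building new arrays each round) mirrors the description above:

```python
def wyllie_rank(succ):
    """Wyllie's pointer jumping, simulated with synchronous rounds.

    succ[i] is the next cell toward the head; the head's successor is None.
    rank[i] is maintained as the distance from i to its current successor,
    so when succ[i] becomes None, rank[i] is i's distance from the head.
    """
    rank = {i: (0 if s is None else 1) for i, s in succ.items()}
    succ = dict(succ)
    while any(s is not None for s in succ.values()):
        new_rank, new_succ = dict(rank), dict(succ)
        for i, s in succ.items():          # every cell jumps "in parallel"
            if s is not None:
                new_rank[i] = rank[i] + rank[s]
                new_succ[i] = succ[s]
        rank, succ = new_rank, new_succ
    return rank
```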
2.2 Anderson and Miller's Randomized Algorithm
Anderson & Miller's Randomized algorithm is a modified version of the work-efficient algorithm
devised by Miller & Reif [9]. Assume that each processor holds n=p cells as a queue. The
algorithm consists of two phases.
In the first phase, called the pointer jumping phase, each processor "splices" out the cell at the
top of its queue with a condition that two consecutive cells of the list assigned to two different
processors are not spliced out simultaneously. In order to decide which cells to splice out, each
processor tosses a coin and assigns H or T to the cell at the top of its queue. Furthermore, all
the cells not at the top of the queue are assigned T. A processor splices out the cell at the top
of its queue only if that cell is marked H and its predecessor is assigned T (see Fig. 1 b). The
first phase ends when all of the cells in each processor are spliced out. The splicing out of cell c_i consists of two atomic assignments, succ[pred[i]] := succ[i] and pred[succ[i]] := pred[i], followed by updating of the rank of its predecessor: rank[pred[i]] := rank[pred[i]] + rank[i].
In the second phase, referred to as the reconstruction phase, the cells are put back into the queue in reverse of the order in which they were spliced out. In reconstructing the queue, the rank of each cell c_i with respect to the head of the list is updated from the already-final rank of its successor at splice time: rank[i] := rank[i] + rank[succ[i]].

Figure 1: (a) Pointer jumping in Wyllie's algorithm. Edge labels show the rank of the edge's predecessor cell in the current iteration; (b) splicing condition in Anderson & Miller's algorithm.
The expected running time of this algorithm is O(n/p), assuming n/p >= log n. Additional
details of the algorithm and the analysis can be found in [11]. In the rest of this paper, we refer
to this algorithm as the Randomized Algorithm.
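The two phases can likewise be simulated sequentially. The sketch below (Python; all names are ours, and this is an illustrative stand-in for the paper's MPL implementation, not a reproduction of it) follows the coin-tossing splice rule and the reverse-order reconstruction:

```python
import random

def randomized_rank(succ, pred, owner, p):
    """Sequential simulation of Anderson & Miller's randomized algorithm.

    succ[i] points toward the head (the head's successor is None), pred is
    the reverse link (the tail's predecessor is None), and owner[i] in
    [0, p) names the processor whose queue holds cell i.
    """
    succ, pred = dict(succ), dict(pred)
    head = next(i for i, s in succ.items() if s is None)
    rank = {i: (0 if i == head else 1) for i in succ}  # distance to succ[i]
    queues = {q: [i for i in succ if owner[i] == q] for q in range(p)}
    spliced = []                  # stack of (cell, successor, rank) at splice

    # Phase 1: repeatedly splice out cells at the tops of the queues.
    while any(queues.values()):
        tops = {queues[q][0]: q for q in queues if queues[q]}
        coin = {q: random.choice('HT') for q in tops.values()}
        for i, q in tops.items():
            if i == head:         # the head is popped but never spliced
                queues[q].pop(0)
                continue
            pi = pred[i]
            # a predecessor counts as T unless it is a queue top that drew H
            pred_is_T = pi not in tops or coin[tops[pi]] == 'T'
            if coin[q] == 'H' and pred_is_T:
                spliced.append((i, succ[i], rank[i]))
                if pi is not None:        # two atomic pointer assignments
                    succ[pi] = succ[i]
                    rank[pi] += rank[i]
                pred[succ[i]] = pi
                queues[q].pop(0)

    # Phase 2: reconstruct in reverse splice order, finalizing ranks.
    final = {head: 0}
    for i, s, d in reversed(spliced):
        final[i] = d + final[s]
    return final
```

At splice time a cell's recorded successor is still in the list, so it is spliced later (or is the head) and its final rank is known first during the reversed reconstruction.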
3 Implementation Results using Random Lists
The algorithms presented in the previous section have been implemented on the MasPar ma-
chines. In this section, we study the impact of various machine and problem parameters on the
performance of these algorithms on random lists. To generate a random linked list of size n, we
traverse an array of n cells in serial order, and for each cell we assign a pointer to a random
location in the array. It is ensured that a cell in the array is pointed to by no more than one other
cell. For the input to the Randomized Algorithm, we also assign reverse pointers in the target
location to generate a doubly linked list, as required by the algorithm. The resulting random
linked list is then read into the parallel processor array so that every processor holds n/p cells of the array. Since the n/p cells in the processor may point to any of the n array locations, the
sublist in each processor does not generally contain successive cells in the linked list.
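The construction just described amounts to choosing a random permutation as the list order, which guarantees that every cell is pointed to by at most one other cell. A small Python sketch of such a generator (names are ours):

```python
import random

def make_random_list(n, rng=random):
    """Generate a random singly linked list over cells 0..n-1.

    A random permutation gives the list order: order[0] is the tail and
    order[-1] the head.  Returns (succ, head), where succ[i] is the next
    cell toward the head and succ[head] is None.  Reverse (pred) pointers
    for the doubly linked input of the Randomized Algorithm can be derived
    by inverting succ.
    """
    order = list(range(n))
    rng.shuffle(order)
    succ = {order[k]: order[k + 1] for k in range(n - 1)}
    succ[order[-1]] = None
    return succ, order[-1]
```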
3.1 Scalability Analysis
Fig. 2 shows the performance of the Wyllie's and Randomized algorithms on a MasPar MP-1
using various machine and problem sizes. The experimental results presented in Fig. 2 (a) and (b)
are consistent with the theoretical analysis. The asymptotic complexity of Wyllie's Algorithm for
a random list of size n on p processors is O((n/p) log n), while the complexity of the Randomized Algorithm is O(n/p). However, the Randomized Algorithm has larger constant factors due to
the increased overhead of coin-tossing and the record keeping involved in the reconstruction
stage of the algorithm. Wyllie's Algorithm is more suited for smaller linked lists due to its small
constant factors. The Randomized Algorithm outperforms Wyllie's Algorithm as the problem
size increases. This is because of the probabilistic splicing of the elements which generates lower
congestion in the communication network. For the example shown in Fig. 2 (a), the crossover
point occurs for a random list of size 94K elements on a 16K processor MP-1.
Figure 2: (a) Performance of Wyllie's and Randomized algorithms on the MP-1 for a random list of varying size; and (b) performance of the algorithms on a random list of size 16K, varying the number of processors of the MP-1.
An interesting feature of Fig. 2 (b) is that the execution time of the Randomized Algorithm
starts increasing when the number of processors exceeds 4K for a random list of size 16K. This is
because, for a fixed problem size, as the number of processors increases beyond a certain point,
the size of the queue (n/p) in each processor becomes very small. Hence it is likely that the cell
at the top of the queue in a processor is pointed to by a cell at the top of the queue in another
processor, thus reducing the chance that the cell is spliced out in a particular iteration. This
increases the total number of iterations and thus increases the execution time of the algorithm.
By varying the problem size and machine size, we are able to determine the regions of problem
and machine size for which each algorithm is fastest. These experiments also demonstrate that
scalability is best achieved by changing the algorithm approach as the problem size increases.
This, in fact, agrees with the approaches used by algorithm designers to develop processor-time
optimal solutions for the list ranking problem [11].
We performed the experiments on the MasPar MP-2 as well. The MP-1 and MP-2 use the
same router communication network. The only difference between the architectures is the faster
processors on the MP-2. Thus, to study the impact of processor speed, we compare the performance
of the algorithms on 4096 processors of the MP-1 and MP-2. In both cases, for problem
sizes greater than 16K, the Randomized Algorithm outperforms Wyllie's Algorithm. However,
Wyllie's Algorithm is 35% faster and the Randomized Algorithm is 40% faster on the MP-2.
In computer vision, list ranking is an intermediate step in various edge-based matching and recognition algorithms, which extract edge lists from images and represent them in a compact data structure for efficient processing in subsequent steps [2]. We
assume input in the form of binary images with each pixel marked as an edge or non-edge pixel.
Furthermore, as a result of an operation such as edge linking, each edge pixel points to the
successor edge pixel in its 8-connected neighborhood, where the successor function defines the
direction of the edge. An n × n image is divided into p subimages of size n/√p × n/√p, and each
processor is assigned one subimage. This straightforward division and distribution of edge pixels
to processors might cause load imbalance. Section 5.5 describes the effect of load imbalance,
and presents an alternative data-distribution scheme that improves the load balance across the
processors.
4.1 Image Edge Lists
Fig. 5 shows edge maps derived from various real and synthetic images. In particular, Fig. 5
(a) is an image of a picnic scene, and Fig. 5 (b) is the edge map obtained by performing edge
detection on the picnic image. This is followed by an edge linking operation that creates linked
lists of contiguous edge pixels. The image edge lists used in our experiments were derived by
performing sequential edge linking [4] operations on the edge maps of various images.
The edge lists resulting from images typically have several properties that may affect the
efficiency of the standard list ranking algorithms. For example, on the average, the number of
lists is fairly large, the lists are short in length, and these edge lists exhibit spatial continuity.
This implies that list cells assigned to a single processor are contiguous and they form sublists.
Furthermore, one processor may contain pieces of several edge lists. On one hand, these properties
can adversely affect the performance of the standard list ranking algorithms. For example, if we
run Wyllie's Algorithm disregarding the connectivity property, several edge points belonging to
the same sublist in a processor may compete for the communication links to access the successor
information from other processors. Similarly, in the Randomized Algorithm, a processor may be
tossing the coin for an edge point while its successor is already stored in the same processor. These
overheads make the standard Wyllie's or Randomized algorithms unattractive for computing
ranks for edge pixels in images. On the other hand, some of the image edge properties can be
used to modify the standard algorithms to achieve better performance on edge lists in images,
as described in the next section.
4.2 Modified Algorithms
Fig. 3 compares the performance of the Wyllie's and Randomized algorithms on random lists
versus their performance on edge maps of equivalent sizes obtained from the picnic image. This
is achieved by counting the number of edge pixels in the picnic image, and creating a random
list with the same number of elements. From Fig. 3, it is clear that the locality properties of the
image edge lists cause performance degradation, especially for large image sizes.
Figure 3: Comparison of the performance of the algorithms on random lists versus equivalent sizes of the picnic image on a 16K processor MP-1.
In the following, we outline a modified approach that takes advantage of the connectivity
property inside a processor by reducing each sublist to a single edge, called a by-pass edge. The
approach then applies Wyllie's or the Randomized algorithm over the by-pass edges in the image.
In addition to eliminating redundant work within the processors, this approach also reduces the
amount of communication across processors. The approach consists of three steps:
Step 1. Convert each sublist of contiguous edge pixels in a subimage into a by-pass edge by
performing a serial list ranking operation on all sublists within a processor (for example,
see Fig. 4). Associate with each such edge the length of the sublist it is representing. For
lists that span processors, their corresponding by-pass edges are connected.
Figure 4: By-pass edges.
Step 2. Run Wyllie's Algorithm or the Randomized Algorithm on the lists of by-pass edges.
Step 3. Serially update the rank of each edge pixel within a subimage using the final rank of
the by-pass edge that represents the pixel.
The modified algorithms can thus be thought of as a combination of serial and parallel list
ranking algorithms.
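The three steps can be sketched as a sequential simulation; in the Python sketch below (all names are ours, and this is an illustration of the approach rather than the paper's MPL implementation), maximal runs of same-processor pixels are contracted to weighted by-pass edges, weighted pointer jumping runs on those edges only, and local ranks are then patched up serially:

```python
def bypass_rank(succ, owner):
    """Three-step "by-pass edge" list ranking, simulated sequentially.

    succ[i] is the next edge pixel toward the list head (None at the head);
    owner[i] identifies the processor holding pixel i.
    """
    pred = {s: i for i, s in succ.items() if s is not None}
    jump, weight = {}, {}          # one weighted by-pass edge per run entry
    to_exit, exit_of = {}, {}      # per-pixel hop count to its run's exit
    # Step 1: serial contraction of each same-processor run.
    for e in succ:
        if e in pred and owner[pred[e]] == owner[e]:
            continue               # e is not the entry pixel of a run
        chain, c = [e], e
        while succ[c] is not None and owner[succ[c]] == owner[c]:
            c = succ[c]
            chain.append(c)
        exit_cell = succ[c]        # entry of the next run, or None at head
        L = len(chain)
        for j, cell in enumerate(chain):
            exit_of[cell] = exit_cell
            to_exit[cell] = L - j if exit_cell is not None else L - 1 - j
        jump[e], weight[e] = exit_cell, to_exit[e]
    # Step 2: weighted pointer jumping over run entries only.
    while any(t is not None for t in jump.values()):
        nj, nw = dict(jump), dict(weight)
        for e, t in jump.items():  # all by-pass edges jump "in parallel"
            if t is not None:
                nw[e] = weight[e] + weight[t]
                nj[e] = jump[t]
        jump, weight = nj, nw      # weight[e] is now e's distance to head
    # Step 3: serial update of every pixel from its run's by-pass rank.
    return {c: to_exit[c] + (weight[exit_of[c]] if exit_of[c] is not None else 0)
            for c in succ}
```

Note that the exit pixel of one run is always the entry pixel of another run (its predecessor lies in a different processor), so the jump pointers stay within the contracted by-pass graph.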
To analyze the scalability of the list ranking algorithms on image edge lists, we assume that
the image has n edge pixels uniformly distributed across p processors such that each processor
holds O(n/p) edge pixels. This simplifying assumption is unlikely to be strictly true for actual images. However, since the number of edge pixels per processor is relatively low in fine-grained implementations, the extent to which the number of edge pixels per processor deviates from O(n/p) will be limited.
In the modified approach, the first step takes O(n/p) computation time to form the by-pass edges. The last step takes O(n/p) time to update the rank of each edge pixel. Thus the total time taken by the serial component of the modified algorithms is O(n/p).
To analyze the parallel execution time of the second step, we assume that the image has
multiple edge lists of varying lengths, and the length of the longest edge is l. Note that l is
a property of the underlying image from which the edge map was derived. Since the edges are
assumed to be uniformly distributed across the processors, the length of the longest list consisting
of by-pass edges is O(l/p). We further assume that the n/p pixels in each processor can be divided
into k sets such that each set contains successive pixels of an edge list assigned to the processor
(i.e. each processor contains k by-pass edges after the first step of the algorithm). Again, this
is a simplifying assumption. Under these assumptions, the execution time of the second step is O(k log l) for Wyllie's Algorithm, and O(k) expected time for the Randomized Algorithm (assuming k >= log(kp)). This is the time taken by the parallel component of the
modified algorithms.
Therefore, the total execution time is O(n/p + k log l) for the Modified Wyllie's Algorithm, and O(n/p + k) for the Modified Randomized Algorithm. Since the constant factors for the Modified
Randomized Algorithm are higher, it will outperform the Modified Wyllie's Algorithm only if l
is large (i.e. the image has very long edges).
The algorithms presented in this section are inherently fine-grained due to the high commu-
nication/computation ratio and irregular patterns of interprocessor communication. Efficient
algorithms for coarse-grained machines and their implementation are described in [6].
5 Implementation Results Using Image Edge Lists
We have used the edge maps from a number of real and synthetic images to study the performance
of list ranking algorithms. Fig. 5 shows the edgemaps used in our experiments. Fig. 5 (b), (c),
and (d) are derived by performing edge detection and edge linking operations on gray-scale
images. The edge characteristics of these images differ significantly. For example, the edges of
the written text image are more local compared to the other images. Typically, the edge density
(percent of edge pixels compared to the total number of pixels) of these real images is in the
range of 3 to 8 percent. As a contrast to these edge maps, we have also generated synthetic
edge maps of varying edge density and length. Fig. 5 (e) and (f) show synthetically generated
edge maps of straight lines and a spiral, respectively. We have generated these images with edge
densities ranging from 5 to 50 percent. The edge characteristics of the real and synthetic images
help in gaining insight into the performance of the algorithms on images of varying edge density
and edge length.
5.1 Comparison with Standard Algorithms
The performance of the modified algorithms is compared to the performance of the standard
algorithms in Fig. 6. Fig. 6 (a) shows the execution times for varying sizes of the picnic image on
a 16K processor MP-1. The modified algorithms are significantly faster than the straightforward
Wyllie's or Randomized algorithms. This is because these algorithms efficiently exploit the
locality of edges in images. We have verified the results on different machine sizes of the MP-1
and MP-2.
Fig. 6 (b) shows execution times of the algorithms for the dense synthetic spiral edge maps.
The spiral edge map is very different from the picnic edge map because it has much higher edge
density, and much longer image edges. Despite these vastly different image characteristics, the
modified algorithms are significantly faster than the standard algorithms. We have also tested
the algorithms on the different edge maps shown in Fig. 5 with similar results. Thus we conclude
that our modifications result in significant performance improvement over the standard Wyllie's
and Randomized algorithms for image edge lists. We point out that in Fig. 6, the relative
performance of the modified algorithms varies depending on the image characteristics. For the
spiral image, the Modified Randomized Algorithm is faster than Modified Wyllie's for images
of size greater than 350 \Theta 350 pixels. On the other hand, for the picnic image, the Modified
Randomized Algorithm is always slower than Modified Wyllie's. We explain this behavior in
detail in Section 5.3.
5.2 Scalability Analysis
The scalability of the modified algorithms has been studied using different images, and varying
the image size and number of processors. Results obtained using different edge maps shown in
Fig. 5 are similar. Hence we restrict our discussion to the performance of the modified algorithms
Figure 5: Picnic image and detected edge contours of different images used in our experiments: (a) real picnic scene, (b) edgemap of the picnic scene, (c) edgemap of written text, (d) edgemap of street scene, (e) synthetic edgemap with straight lines, and (f) synthetic edgemap of a spiral.
Figure 6: Performance of the algorithms for various sizes of (a) the picnic image, and (b) the spiral image, on 16,384 processors of the MP-1.
on the edge map derived from the picnic scene.
Fig. 7 examines the scalability of the Modified Wyllie and Modified Randomized algorithms
with respect to increasing problem size. The plot displays the overall execution time, as well as
the serial and parallel components of the modified algorithms. For small image sizes, the parallel
component dominates the overall execution time. This is because of the relatively large constant
factors of the parallel component compared to the sequential component. For large image sizes,
as the subimage size per processor grows, the execution time of the sequential component grows
much faster than the corresponding parallel component. This is consistent with the analytical
results since the execution time for the sequential list ranking component grows linearly with the
size of the subimage (O(n=p)), while the parallel component grows in proportion to the number of
by-pass edges in a subimage. In the case of the Modified Wyllie's Algorithm, the crossover point
between the execution times of sequential and parallel components occurs when the subimage
size assigned to each processor is 64 pixels. In the case of the Modified Randomized Algorithm,
the crossover point occurs when the subimage size is 128 pixels (see Fig. 8 (b)).
Fig. 8 shows the scalability behavior of the modified algorithms with respect to changes in the
machine size. We notice that the sequential component dominates the execution time for a high
problem/machine size ratio, and the parallel component dominates for a low problem/machine
size ratio. The results shown are for the MP-1. For the MP-2, the performance curves have
Figure 7: Performance of the sequential and parallel components of (a) the Modified Wyllie, and (b) the Modified Randomized algorithms. (Execution time for the picnic image on the 16K processor MP-1.)
similar shape. However, for the MP-2 the crossover point at which the parallel component
begins to dominate the execution time occurs for a larger problem/machine size ratio due to the
faster processors (higher computation/communication ratio) than the MP-1.
Figure 8: Scalability with respect to number of processors for (a) the Modified Wyllie's Algorithm, and (b) the Modified Randomized Algorithm. (Execution time for the picnic image of size 512 × 512 on the MP-1.)
Fig. 9 plots the performance of the modified algorithms when the number of processors increases
linearly with the image size. Thus the problem-size/machine-size ratio is constant, and
the size of the subimage in a processor is constant. We observe that the sequential time, which is
proportional to the size of the subimage in a processor, remains approximately the same, while
there is a small increase in the parallel component.

Figure 9: Scalability for constant problem-size/machine-size ratio (256 elements per processor) on the MP-1: (a) the Modified Wyllie's Algorithm, and (b) the Modified Randomized Algorithm.

This is due to the fact that in MasPar machines, with the increase in number of processors, the number of links in the router network does
not change proportionally. However, since the overall increase in the execution time is small,
we conclude that the modified algorithms exhibit speedup proportional to the image size, if the
ratio of image-size to number of processors is constant.
5.3 Effect of Image Characteristics
We have studied the performance of our algorithms on images with varying characteristics in
terms of edge density and edge lengths. As discussed in Section 4.2, the total execution time is
O(n/p + k log l) for the Modified Wyllie's Algorithm, and O(n/p + k) for the Modified Randomized
Algorithm. Fig. 10 indicates that the increase in execution time is proportional to the increase in image
edge density (O(n/p)), and this is true for both the Modified Wyllie's and Modified Randomized
algorithms.
Fig. 11 shows the behavior of modified algorithms on edge maps with varying edge lengths.
In Fig. 11 (a), the Modified Randomized Algorithm is always slower than the Modified Wyllie's
Algorithm. This is also the case for the picnic scene image (see Fig. 6 (a)). This is due to the
large constant factors for the Modified Randomized Algorithm.
However, in Fig. 11 (b), the Modified Randomized Algorithm is significantly faster than the
Modified Wyllie's Algorithm beyond a certain image-size to machine-size ratio. This is because
Figure 10: Performance on the synthetic line image of varying density on a 16K processor MP-1: (a) the Modified Wyllie's Algorithm, and (b) the Modified Randomized Algorithm.
Figure 11: Performance of the modified algorithms on synthetic images with the same density but different edge lengths on a 16K processor MP-1: (a) line image, and (b) spiral image.
the parallel execution time of the Modified Randomized Algorithm is O(k), compared to O(k log l)
for the Modified Wyllie's Algorithm. Since the serial component for an equal-density line image
and spiral image is the same (O(n/p)), we expect the Modified Randomized Algorithm to run
faster when the length of the longest edge (l) is large as in the case of the spiral image.
5.4 MP-1 vs MP-2
The effect of the processor speed on the performance of the algorithms is studied by executing
the algorithms on 4K processor MasPar MP-1 and MP-2 machines. As shown in Fig. 12, Wyllie's
algorithm is faster on the MP-2 than on the MP-1, primarily due to the increased processor
speed. Similar behavior is exhibited by the Randomized algorithm. It is worth noting that
although the MP-2 has lower execution times, the scalability behavior of the algorithms on the
two machines is very similar. The crossover point when the sequential component dominates the
overall execution time occurs for a larger image-size/machine-size ratio on the MP-2.
Figure 12: Performance of the Modified Wyllie's Algorithm on varying sizes of the picnic image on (a) 4K processors of the MP-1, and (b) 4K processors of the MP-2.
5.5 Load Balancing
In an input derived from a real (as opposed to synthetic) image, it is very likely that edge contours
are concentrated in a particular portion of the image. In this case, simple partitioning of the image
into p subimages and assigning each subimage to a processor may yield an unbalanced load across
processors. In order to study the effect of load imbalance on performance, we have experimented
with various load-balancing techniques. In general, techniques based on first computing the load
variance across processors then redistributing the load to the processors with light loads have
failed to yield high performance. This is because the computation and data redistribution for
the load-balancing step become a significant part of the total execution time. Further, regardless
of the load redistribution, communication overhead remains the same.
In the following, we outline a simple heuristic to address the load balancing problem and
present performance results. The heuristic is based on dividing the input image into more than
p subimages and assigning more than one subimage to each processor.
Partition the input image into kp identically sized subimages, and number the subimages in a row-shuffled
order as follows. Arrange the subimages into k sets such that each set contains a
grid of contiguous subimages, and number the subimages in each set in row-major order. From each
set, assign subimage j to processor j, where 0 ≤ j ≤ p − 1. An example row-shuffled
ordering of an image on a 16 processor machine is shown in Figure 13.
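A minimal sketch of such a row-shuffled assignment is given below. Since the layout description above is only partially recoverable, the assumption that each set forms a √p × √p grid of subimages is ours, not a detail taken from the paper:

```python
import math

def row_shuffled_assignment(k, p):
    """Map k*p subimages to p processors: the image is cut into k sets,
    each assumed to be a sqrt(p) x sqrt(p) grid of contiguous subimages
    numbered in row-major order; subimage j of every set goes to
    processor j, so each processor receives one subimage from each set."""
    side = math.isqrt(p)
    assert side * side == p, "p is assumed to be a perfect square"
    return {(s, r, c): r * side + c
            for s in range(k)          # the k sets of subimages
            for r in range(side)       # row-major order within a set
            for c in range(side)}

# 16 processors, k = 4: 64 subimages, 4 per processor, one from each set
a = row_shuffled_assignment(4, 16)
```

Because a processor's subimages come from k widely separated regions of the image, a local concentration of edge contours is spread over many processors, which is the intent of the heuristic.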
Figure 13: A heuristic partitioning scheme of an input image on a 16 processor machine.
Figure 14 compares the distribution of edge pixels of the 1K × 1K picnic image on a 16K
processor MP-1 using simple partitioning and using the partitioning based on the above described
heuristic. In the load-balanced partitioning scheme the number of processors having zero edge
pixels is reduced by half. At the same time, the variance of the load (edge pixels per processor)
across the entire machine is reduced from 21.4 to 8.9.
Figures 15 and 16 compare the performance of the Modified Wyllie's Algorithm with and
without load-balancing, while varying the image and machine sizes. We observe that load-balancing
pays off only for very large image sizes. In the case of the simple partitioning scheme
used in the earlier sections, the sequential execution time dominates the parallel execution time
for large image sizes. In the load-balanced partitioning scheme, the sequential execution time
has been reduced at the expense of an increase in the parallel component. The increase in the
parallel component is due to the fact that more edge pixels in a processor now have successors
residing in other processors. This increases contention over the communication links during the
pointer jumping phase and thus increases the parallel time. The sequential time decreases only
for large ratios of N to p, because the extent of load imbalance possible for small ratios will
always be low, since the size of the subimage assigned to a processor is very small. This claim is
well supported by Figures 15 (b) and 16 (b).
Figure 14: Histogram of edge-pixel distribution of the 1K × 1K picnic image on 16K processors of the MP-1: (a) the original distribution, and (b) the distribution after load-balancing.
In conclusion, a load-balancing scheme performs well for large image-size to machine-size ratios.
In terms of scalability with respect to machine size as well as problem size, the behavior using
the load balancing scheme is not much different than with the simple partitioning scheme.
Figure 15: Comparison of the execution time of the unbalanced and load-balanced Modified Wyllie's Algorithm: (a) overall execution time, and (b) sequential and parallel components. (Execution time for the picnic image on the 16K processor MP-1.)
Figure 16: Comparison of the scalability with respect to number of processors for the unbalanced and load-balanced Modified Wyllie's Algorithm: (a) overall execution time, and (b) sequential and parallel components. (Execution time for the picnic image of size 512 × 512 on the MP-1.)
6 Conclusions
In this paper, we have studied the scalability of list ranking algorithms on fine grained machines
and have presented efficient algorithms for list ranking of image edge lists. Wyllie's and
Anderson & Miller's algorithms for list ranking are chosen as representative of deterministic and
randomized algorithms, respectively. The performance of these algorithms is studied on random
lists and image edge lists. It is shown that these algorithms perform poorly for image edge
lists. Also, no single algorithm covers the entire range of scalability. We show that a poly-
algorithmic approach, in which the algorithmic approach changes as the data size is reduced,
is required for scalability across all machine and problem sizes. For image edge lists, we have
presented modified algorithms that exploit the locality property of the edge lists. Performance of
our modified algorithms on actual images demonstrates the gains that can be achieved by using
application characteristics in the algorithm design. On a 16K processor MasPar MP-1, the
modified algorithms run about two times faster on small images and about ten times faster on
large images than the standard Wyllie's and Randomized algorithms. The modified algorithms
are robust across a wide variety of images. Finally, the results of our extensive experimentation
have shown that while the standard algorithms were not scalable for list ranking on image edge
lists, the tailored algorithms exhibited good scalability with respect to increases both in image
size and number of processors, and also with respect to changes in image characteristics. We
have also shown that load balancing on fine grained machines does not pay off unless the problem
to machine size ratio is very large.
In summary, this study provides insight into the performance of fine-grained machines for
applications that employ light computations but have intense data-dependent communications.
In contrast to our results for list ranking of edge lists on coarse-grained machines, the results
presented here demonstrate that implementations of communication-intensive problems on
fine-grained machines are very sensitive to the characteristics of the input data and machine parameters.
--R
"A simple parallel tree contraction algorithm,"
"Efficient parallel processing of image contours,"
"Faster optimal parallel prefix sums and list ranking,"
"Sequential edge linking,"
Measuring the scalability of parallel algorithms and architectures.
"Contour ranking on coarse grained machines: A case study for low-level vision computations,"
An Introduction to Parallel Algorithms
"Scalable data parallel algorithms and implementations for object recog- nition,"
"Parallel tree contraction and its applications,"
"List ranking and list scan on the Cray C-90,"
"List Ranking and Parallel Tree Contraction,"
"Efficient algorithms for list ranking and for solving graph problems on the hypercube,"
--TR
--CTR
Isabelle Guérin Lassous, Jens Gustedt, Portable list ranking: an experimental study, Journal of Experimental Algorithmics (JEA), 7, p.7, 2002 | parallel algorithms;fine-grained parallel processing;image processing;scalable algorithms;list ranking;computer vision
270935 | A Spectral Technique for Coloring Random 3-Colorable Graphs. | Let G3n,p,3 be a random 3-colorable graph on a set of 3n vertices generated as follows. First, split the vertices arbitrarily into three equal color classes, and then choose every pair of vertices of distinct color classes, randomly and independently, to be edges with probability p. We describe a polynomial-time algorithm that finds a proper 3-coloring of G3n,p,3 with high probability, whenever p $\geq$ c/n, where c is a sufficiently large absolute constant. This settles a problem of Blum and Spencer, who asked if an algorithm can be designed that works almost surely for p $\geq$ polylog(n)/n [J. Algorithms, 19 (1995), pp. 204--234]. The algorithm can be extended to produce optimal k-colorings of random k-colorable graphs in a similar model as well as in various related models. Implementation results show that the algorithm performs very well in practice even for moderate values of c. | Introduction
A vertex coloring of a graph G is proper if no adjacent vertices receive the same color. The chromatic
number -(G) of G is the minimum number of colors in a proper vertex coloring of it. The problem
of determining or estimating this parameter has received a considerable amount of attention in
Combinatorics and in Theoretical Computer Science, as several scheduling problems are naturally
formulated as graph coloring problems. It is well known (see [13, 12]) that the problem of properly
coloring a graph of chromatic number k with k colors is NP-hard, even for any fixed k ≥ 3, and it is
therefore unlikely that there are efficient algorithms for optimally coloring an arbitrary 3-chromatic
input graph.
On the other hand, various researchers noticed that random k-colorable graphs are usually easy
to color optimally. Polynomial time algorithms that optimally color random k-colorable graphs for
every fixed k with high probability, have been developed by Kucera [15], by Turner [18] and by
Dyer and Frieze [8], where the last paper provides an algorithm whose average running time over
all k-colorable graphs on n vertices is polynomial. Note, however, that most k-colorable graphs
are quite dense, and hence easy to color. In fact, in a typical k-colorable graph, the number of
common neighbors of any pair of vertices with the same color exceeds considerably that of any
pair of vertices of distinct colors, and hence a simple coloring algorithm based on this fact already
works with high probability. It is more difficult to color sparser random k-colorable graphs. A
A preliminary version of this paper appeared in the Proc. of the 26th ACM STOC, ACM Press (1994), 346-355.
Institute for Advanced Study, Princeton, NJ 08540, USA and Department of Mathematics, Tel Aviv University,
Tel Aviv, Israel. Email: noga@math.tau.ac.il. Research supported in part by the Sloan Foundation, Grant No. 93-6-6
and by a USA-Israel BSF grant.
† AT&T Bell Laboratories, Murray Hill, NJ 07974. Email: kahale@research.att.com. This work was done while the
author was at DIMACS.
precise model for generating sparse random k-colorable graphs is described in the next subsection,
where the sparsity is governed by a parameter p that specifies the edge probability. Petford and
Welsh [16] suggested a randomized heuristic for 3-coloring random 3-colorable graphs and supplied
experimental evidence that it works for most edge probabilities. Blum and Spencer [6] (see also
[3] for some related results) designed a polynomial algorithm and proved that it colors optimally,
with high probability, random 3-colorable graphs on n vertices with edge probability p provided
p ≥ n^ε/n, for an arbitrarily small but fixed ε > 0. Their algorithm is based on a path counting
technique, and can be viewed as a natural generalization of the simple algorithm based on counting
common neighbors (that counts paths of length 2), mentioned above.
Our main result here is a polynomial time algorithm that works for sparser random 3-colorable
graphs. If the edge probability p satisfies p ≥ c/n, where c is a sufficiently large absolute constant,
the algorithm colors optimally the corresponding random 3-colorable graph with high probability.
This settles a problem of Blum and Spencer [6], who asked if one can design an algorithm that works
almost surely for p ≥ polylog(n)/n. (Here, and in what follows, almost surely always means: with
probability that approaches 1 as n tends to infinity). The algorithm uses the spectral properties
of the graph and is based on the fact that almost surely a rather accurate approximation of the
color classes can be read from the eigenvectors corresponding to the smallest two eigenvalues of the
adjacency matrix of a large subgraph. This approximation can then be improved to yield a proper
coloring.
The algorithm can be easily extended to the case of k-colorable graphs, for any fixed k, and to
various models of random regular 3-colorable graphs.
We implemented our algorithm and tested it for hundreds of graphs drawn at random from
the distribution of G 3n;p;3 . Experiments show that our algorithm performs very well in practice.
The running time is a few minutes on graphs with up to 100000 nodes, and the range of edge
probabilities on which the algorithm is successful is in fact even larger than what our analysis
predicts.
1.1 The model
There are several possible models for random k-colorable graphs. See [8] for some of these models
and the relation between them. Our results hold for most of these models, but it is convenient to
focus on one, which will simplify the presentation. Let V be a fixed set of kn labelled vertices. For a
real p, 0 < p < 1, let G_{kn,p,k} be the random graph on the set of vertices V obtained as follows; first, split
the vertices of V arbitrarily into k color classes W_1, ..., W_k, each of cardinality n. Next, for each u
and v that lie in distinct color classes, choose uv to be an edge, randomly and independently, with
probability p. The input to our algorithm is a graph G_{kn,p,k} obtained as above, and the algorithm
succeeds to color it if it finds a proper k-coloring. Here we are interested in fixed k ≥ 3 and large
n. We say that an algorithm colors G_{kn,p,k} almost surely if the probability that a randomly chosen
graph as above is properly colored by the algorithm tends to one as n tends to infinity. Note
that we consider here deterministic algorithms, and the above statement means that the algorithm
succeeds to color almost all random graphs generated as above.
A closely related model to the one given above is the model in which we do not insist that the
color classes have equal sizes. In this model one first splits the set of vertices into k disjoint color
classes by letting each vertex choose its color randomly, independently and uniformly among the
k possibilities. Next, one chooses every pair of vertices of distinct color classes to be an edge with
probability p. All our results hold for both models, and we focus on the first one as it is more
convenient. To simplify the presentation, we restrict our attention to the case k = 3, i.e., to 3-colorable
graphs, since the results for this case easily extend to every fixed k. In addition, we make no attempt
to optimize the constants and assume, whenever this is needed, that c is a sufficiently large constant,
and the number of vertices 3n is sufficiently large.
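The model above can be sketched as a toy generator (this is an illustration of the definition, not code from the paper; the function name and representation are ours):

```python
import random

def random_3_colorable_graph(n, p, seed=0):
    """Sample G_{3n,p,3}: vertices 0..3n-1 are split into three color
    classes of size n (here simply by index), and every pair of vertices
    from distinct classes becomes an edge independently with probability
    p.  Returns (color, edges) with color[v] in {0, 1, 2}."""
    rng = random.Random(seed)
    color = [v // n for v in range(3 * n)]
    edges = []
    for u in range(3 * n):
        for v in range(u + 1, 3 * n):
            if color[u] != color[v] and rng.random() < p:
                edges.append((u, v))
    return color, edges

# the planted coloring is proper by construction: no edge is monochromatic
color, edges = random_3_colorable_graph(50, 0.2)
```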
1.2 The algorithm
Here is a description of the algorithm, which consists of three phases. Given a graph
G = (V, E), define d = np, and let G' be the graph obtained from G by deleting all edges incident
to a vertex of degree greater than 5d. Denote by A the adjacency matrix of G', i.e., the 3n by
3n matrix (a_uv), u, v ∈ V, defined by a_uv = 1 if uv is an edge of G' and a_uv = 0 otherwise. It is well known
that since A is symmetric it has real eigenvalues λ_1 ≥ λ_2 ≥ ... ≥ λ_{3n} and an orthonormal basis
of eigenvectors e_1, ..., e_{3n}. The crucial point is that almost surely one can
deduce a good approximation of the coloring of G from e_{3n−1} and e_{3n}. Note that there are several
efficient algorithms to compute the eigenvalues and the eigenvectors of symmetric matrices (cf.,
e.g., [17]) and hence e_{3n−1} and e_{3n} can certainly be calculated in polynomial time. For the rest of
the algorithm, we will deal with G rather than G'.
Let t be a non-zero linear combination of e_{3n−1} and e_{3n} whose median is zero, that is, the
number of positive components of t as well as the number of its negative components are both at
most 3n/2. (It is easy to see that such a combination always exists and can be found efficiently.)
Suppose also that t is normalized so that its l2-norm is √(2n).
Define V_1^0 = {v : t_v > 1/2}, V_2^0 = {v : t_v < −1/2} and V_3^0 = {v : |t_v| ≤ 1/2}. This is an approximation for the
coloring, which will be improved in the second phase by iterations, and then in the third phase to
obtain a proper 3-coloring.
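The first phase can be sketched with dense numpy linear algebra (our illustration, suitable only for toy sizes; the grid search below is one simple way to find a near-zero-median combination, a step the text leaves unspecified, and d is estimated from the average degree since p is not given to the algorithm):

```python
import numpy as np

def spectral_phase1(adj):
    """Phase 1 sketch: prune edges at high-degree vertices, take the
    eigenvectors of the two smallest eigenvalues of the pruned adjacency
    matrix, pick a combination t with near-zero median and l2-norm
    sqrt(2n), and threshold it at +-1/2 into three tentative classes."""
    N = adj.shape[0]
    n = N // 3
    deg = adj.sum(axis=1)
    d = deg.mean() / 2                       # estimate of d = np
    keep = deg <= 5 * d                      # G': drop edges at vertices of degree > 5d
    A = adj * np.outer(keep, keep)
    vecs = np.linalg.eigh(A)[1]              # eigenvalues come back in ascending order
    v0, v1 = vecs[:, 0], vecs[:, 1]          # eigenvectors of the two smallest
    thetas = np.linspace(0.0, np.pi, 2001)
    combos = np.outer(np.cos(thetas), v0) + np.outer(np.sin(thetas), v1)
    t = combos[np.argmin(np.abs(np.median(combos, axis=1)))] * np.sqrt(2 * n)
    return np.where(t > 0.5, 0, np.where(t < -0.5, 2, 1))

# sanity check on the complete tripartite graph K_{6,6,6} (so d = 6):
n = 6
adj = np.ones((3 * n, 3 * n))
for b in range(3):
    adj[b * n:(b + 1) * n, b * n:(b + 1) * n] = 0
cls = spectral_phase1(adj)
```

On this regular toy instance the thresholding recovers the planted classes exactly, up to relabeling; on a sparse random instance it only approximates them, which is why the later phases are needed.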
In iteration i of the second phase, 1 ≤ i ≤ ⌈log_2 n⌉, construct the color classes V_1^i, V_2^i,
V_3^i as follows. For every vertex v of G, let N(v) denote the set of all its neighbors in G. In the
i-th iteration, color v by the least popular color of its neighbors in the previous iteration. That is,
put v in V_j^i if |N(v) ∩ V_j^{i−1}| is the minimum among the three quantities
|N(v) ∩ V_l^{i−1}|, l = 1, 2, 3,
where equalities are broken arbitrarily. We will show that the three sets V_j^q, where q = ⌈log_2 n⌉,
correctly color all but at most 2^{−Ω(d)} n of the
vertices.
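The recoloring rule can be sketched as follows (ties are broken by lowest color index, an arbitrary choice the text permits):

```python
import math

def refine_coloring(neighbors, coloring, rounds=None):
    """Phase 2 sketch: in each round, recolor every vertex with the color
    least popular among its neighbors' colors from the previous round.
    neighbors[v] lists v's neighbors in G; coloring[v] is in {0, 1, 2}."""
    N = len(coloring)
    if rounds is None:
        rounds = max(1, math.ceil(math.log2(N)))
    cur = list(coloring)
    for _ in range(rounds):
        nxt = []
        for v in range(N):
            counts = [0, 0, 0]
            for u in neighbors[v]:
                counts[cur[u]] += 1
            nxt.append(counts.index(min(counts)))   # least popular color wins
        cur = nxt
    return cur

# toy check on K_{4,4,4}: one wrongly colored vertex is repaired in one round
nb = [[u for u in range(12) if u // 4 != v // 4] for v in range(12)]
planted = [v // 4 for v in range(12)]
corrupted = list(planted)
corrupted[0] = 1
```

The rule makes sense because a vertex's own color class contributes no neighbors in a proper coloring, so the correct color should be the least represented among the neighbors.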
The third phase consists of two stages. First, repeatedly uncolor every vertex colored j that has
less than d/2 neighbors (in G) colored l, for some l ∈ {1, 2, 3} − {j}. Then, if the graph induced on
the set of uncolored vertices has a connected component of size larger than log_3 n, the algorithm
fails. Otherwise, find a coloring of every component consistent with the rest of the graph using
brute force exhaustive search. If the algorithm cannot find such a coloring, it fails.
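A sketch of this third phase (our illustration; colors are {0, 1, 2}, and the component-size failure cutoff is omitted for brevity — the brute force is exponential only in the component size, which the analysis bounds):

```python
from itertools import product

def finish_coloring(neighbors, coloring, d):
    """Phase 3 sketch: repeatedly uncolor any vertex that, for some color
    other than its own, has fewer than d/2 colored neighbors of that
    color; then properly 3-color each connected component of the
    uncolored vertices by brute force, consistently with the rest."""
    col = {v: c for v, c in enumerate(coloring)}
    changed = True
    while changed:
        changed = False
        for v in list(col):
            counts = [0, 0, 0]
            for u in neighbors[v]:
                if u in col:
                    counts[col[u]] += 1
            if any(c != col[v] and counts[c] < d / 2 for c in range(3)):
                del col[v]
                changed = True
    uncolored = set(range(len(neighbors))) - col.keys()
    while uncolored:
        start = next(iter(uncolored))
        comp, stack = [], [start]            # collect one connected component
        uncolored.discard(start)
        while stack:
            v = stack.pop()
            comp.append(v)
            for u in neighbors[v]:
                if u in uncolored:
                    uncolored.discard(u)
                    stack.append(u)
        for assign in product(range(3), repeat=len(comp)):
            trial = dict(zip(comp, assign))
            if all((trial[u] if u in trial else col[u]) != trial[v]
                   for v in comp for u in neighbors[v]):
                col.update(trial)
                break
        else:
            return None                      # no consistent coloring: fail
    return [col[v] for v in range(len(neighbors))]

# toy check on K_{4,4,4} (d = 4): a wrong vertex gets uncolored, then repaired
nb = [[u for u in range(12) if u // 4 != v // 4] for v in range(12)]
planted = [v // 4 for v in range(12)]
```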
Our main result is the following.
Theorem 1.1 If p > c/n, where c is a sufficiently large constant, the algorithm produces a proper
3-coloring of G with probability 1 − o(1).
The intuition behind the algorithm is as follows. Suppose every vertex in G had exactly d neighbors
in every color class other than its own. Then G' = G. Let F be the 2-dimensional subspace of all
vectors in R^{3n} which are constant on every color class, and whose sum of coordinates is 0. A simple
calculation (as observed in [1]) shows that any non-zero element of F is an eigenvector of A with
eigenvalue −d. Moreover, if E is the union of random matchings, one can show that −d is almost
surely the smallest eigenvalue of A and that F is precisely the eigenspace corresponding to −d.
Thus, any linear combination t of e_{3n−1} and e_{3n} is constant on every color class. If the median of
t is 0 and its l2-norm is √(2n), then t takes the values 0, 1 or −1 depending on the color class, and
the coloring obtained after phase 1 of the algorithm is a proper coloring. In the model specified in
Subsection 1.1 these regularity assumptions do not hold, but every vertex has the same expected
number of neighbors in every color class other than its own. This is why phase 1 gives only an
approximation of the coloring and phases 2 and 3 are needed to get a proper coloring.
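The eigenvector observation above can be checked numerically on the complete tripartite graph K_{n,n,n}, where every vertex has exactly d = n neighbors in each other color class (a toy stand-in for the regular case):

```python
import numpy as np

# adjacency matrix of K_{n,n,n}: all cross-class pairs are edges
n = 5
A = np.ones((3 * n, 3 * n))
for b in range(3):
    A[b * n:(b + 1) * n, b * n:(b + 1) * n] = 0

# two independent vectors of F: constant on classes, coordinates summing to 0
x = np.repeat([1.0, -1.0, 0.0], n)
y = np.repeat([1.0, 1.0, -2.0], n)
assert np.allclose(A @ x, -n * x)            # eigenvector, eigenvalue -d = -n
assert np.allclose(A @ y, -n * y)

# and -n is indeed the smallest eigenvalue, with multiplicity two
vals = np.linalg.eigvalsh(A)
assert np.allclose(vals[:2], [-n, -n])
```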
We prove Theorem 1.1 in the next two sections. We use the fact that almost surely the largest
eigenvalue of G' is at least (1 − 2^{−Ω(d)})2d, that its two smallest eigenvalues are at most
−(1 − 2^{−Ω(d)})d, and that all other eigenvalues are O(√d) in absolute value. The proof of this result is based
on a proper modification of techniques developed by Friedman, Kahn and Szemerédi in [11], and
is deferred to Section 3. We show in Section 2 that it implies that each of the two eigenvectors
corresponding to the two smallest eigenvalues is close to a vector which is constant on every color
class, where the sum of these three constants is zero. This suffices to show that the sets V_1^0, V_2^0 and V_3^0 are a
reasonably good approximation to the coloring of G, with high probability.
Theorem 1.1 can then be proved by applying the expansion properties of the graph G (that
hold almost surely) to show that the iteration process above converges quickly to a proper coloring
of a predefined large subgraph H of G. The uncoloring procedure will uncolor all vertices which
are wrongly colored, but will not affect the subgraph H. We then conclude by showing that the
largest connected component of the induced subgraph of G on V − H is of logarithmic size almost
surely, thereby showing that the brute-force search on the set of uncolored vertices terminates in
polynomial time. We present our implementation results in Section 4. Section 5 contains some
concluding remarks together with possible extensions and results for related models of random
graphs.
2 The proof of the main result
Let G = (V, E) be a random 3-colorable graph generated according to the model described
above. Denote by W_1, W_2, W_3 the three color classes of vertices of G. Let G' be the graph
obtained from G by deleting all edges adjacent to vertices of degree greater than 5d, and let A
be the adjacency matrix of G'. Denote by λ_1 ≥ λ_2 ≥ ... ≥ λ_{3n} the eigenvalues of A, and by
e_1, ..., e_{3n} the corresponding eigenvectors, chosen so that they form an orthonormal basis of
R^{3n}.
In this section we first show that the approximate coloring produced by the algorithm using the
eigenvectors e_{3n−1} and e_{3n} is rather accurate almost surely. Then we exhibit a large subgraph H
and show that, almost surely, the iterative procedure for improving the coloring colors H correctly.
We then show that the third phase finds a proper coloring of G in polynomial time, almost surely.
We use the following statement, whose proof is relegated to Section 3.

Proposition 2.1 In the above notation, almost surely,
(i) λ_1 ≥ (1 − 2^{−Ω(d)})2d,
(ii) λ_{3n−1}, λ_{3n} ≤ −(1 − 2^{−Ω(d)})d, and
(iii) |λ_i| = O(√d) for all 2 ≤ i ≤ 3n − 2.

Remark. One can show that, when d = O(1), Proposition 2.1 would not hold if we were
dealing with the spectrum of G rather than that of G', since the graph G is likely to contain many
vertices of degree much larger than d, and in this case the assertion of (iii) does not hold for the eigenvalues of
G.
2.1 The properties of the last two eigenvectors
We show in this subsection that the eigenvectors e_{3n−1} and e_{3n} are almost constant on every color
class. For this, we exhibit two orthogonal vectors constant on every color class which, roughly
speaking, are close to being eigenvectors corresponding to −d. Let x = (x_v), v ∈ V, be the vector
defined by x_v = 1 for v ∈ W_1, x_v = −1 for v ∈ W_2 and x_v = 0 for v ∈ W_3, and let y = (y_v), v ∈ V, be the vector defined
by y_v = 1 for v ∈ W_1, y_v = 1 for v ∈ W_2 and y_v = −2 for v ∈ W_3. We denote by ||f|| the l2-norm of a
vector f.

Lemma 2.2 Almost surely there are two vectors ε and δ with ||ε||^2, ||δ||^2 = O(n/d) such that x − ε and y − δ
are both linear combinations of e_{3n−1}
and e_{3n}.
Proof. We use the following lemma, whose proof is given below.
Lemma 2.3 Almost surely, ||(A + dI)y||^2 = O(nd) and ||(A + dI)x||^2 = O(nd).
We prove the existence of δ as above. The proof of the existence of ε is analogous. Write
y = Σ_{i=1}^{3n} c_i e_i, where c_i = <y, e_i>. We show that the coefficients c_1, ..., c_{3n−2} are small compared to ||y||. Indeed,

  ||(A + dI)y||^2 = Σ_{i=1}^{3n} c_i^2 (λ_i + d)^2 ≥ Σ_{i=1}^{3n−2} c_i^2 (λ_i + d)^2 ≥ Ω(d^2) Σ_{i=1}^{3n−2} c_i^2,   (1)

where the last inequality follows from parts (i) and (iii) of Proposition 2.1. Define δ = Σ_{i=1}^{3n−2} c_i e_i.
By (1) and Lemma 2.3 it follows that ||δ||^2 = Σ_{i=1}^{3n−2} c_i^2 = O(n/d). On the other hand, y − δ is a
linear combination of e_{3n−1} and e_{3n}. □
Note that it was crucial to the proof of Lemma 2.2 that, almost surely, ||(A + dI)y||^2 = O(nd) rather
than the weaker bound Θ(nd^2) that holds
for some vectors in {−1, 0, 1}^{3n}.
Proof of Lemma 2.3 To prove the first bound, observe that it suffices to show that the sum
of squares of the coordinates of (A + dI)y on W_1 is O(nd) almost surely, as the sums on W_2 and
W_3 can be bounded similarly. The expectation of the vector (A + dI)y is the null vector, and the
expectation of the square of each coordinate of (A + dI)y is O(d), by a standard calculation. This
is because each coordinate of (A + dI)y is the sum of n independent random variables, each with
mean 0 and variance O(d/n). This implies that the expected value of the sum of squares of the
coordinates of (A + dI)y on W_1 is O(nd). Similarly, the expectation of the fourth power of each
coordinate of (A + dI)y is O(d^2). Hence, the variance of the square of each coordinate is O(d^2).
However, the coordinates of (A + dI)y on W_1 are independent random variables, and hence the
variance of the sum of the squares of the W_1 coordinates is equal to the sum of the variances, which
is O(nd^2). The first bound can now be deduced from Chebyshev's Inequality. The second bound
can be shown in a similar manner. We omit the details. □
The vectors x − ε and y − δ are linearly independent, since they are nearly orthogonal. Indeed, as <x, y> = 0,

  |<x − ε, y − δ>| ≤ ||x|| ||δ|| + ||y|| ||ε|| + ||ε|| ||δ|| = O(n/√d).

Therefore, by the above lemma, the two vectors √(3n) e_{3n−1} and √(3n) e_{3n} can be written as linear
combinations of x − ε and y − δ. Moreover, the coefficients in these linear combinations are all
O(1) in absolute value. This is because x − ε and y − δ are nearly orthogonal, and the l2-norm
of each of the four vectors √(3n) e_{3n−1}, √(3n) e_{3n}, x − ε and y − δ is Θ(√n). More precisely, if one of
the vectors √(3n) e_{3n−1}, √(3n) e_{3n} is written as α(x − ε) + β(y − δ) then, by the triangle inequality,
||α(x − ε) + β(y − δ)|| = Θ(√n) which, by a calculation similar to the one above, implies
that α^2 + β^2 = O(1), and thus α and β are O(1). On the other hand, the
coefficients of the vector t defined in subsection 1.2 along the vectors e_{3n−1} and e_{3n} are at most
√(2n) in absolute value. It follows that the vector t defined in the algorithm is also a linear combination of the
vectors x − ε and y − δ with coefficients whose absolute values are both O(1). Since both x and y
belong to the vector space F defined in the proof of Proposition 2.1, this implies that t = f + g, where
f ∈ F and ||g||^2 = O(n/d). Let α_i be the value of f on W_i, for 1 ≤ i ≤ 3. Assume without
loss of generality that α_1 ≥ α_2 ≥ α_3. Since ||g||^2 = O(n/d), at most O(n/d) of the coordinates of
g are greater than 0.01 in absolute value. This implies that |α_2| ≤ 1/4, because otherwise at least
2n − O(n/d) coordinates of t would have the same sign, contradicting the fact that 0 is a median
of t. As α_1 + α_2 + α_3 = 0 and ||t|| = √(2n), this
implies that α_1 > 3/4
and α_3 < −3/4. Therefore, the coloring defined by the sets V_1^0, V_2^0, V_3^0
agrees with the original coloring of
G on all but at most O(n/d) < 0.001n coordinates.
2.2 The iterative procedure
Denote by H the subset of V obtained as follows. First, set H to be the set of vertices having at
most 1.01d neighbors in G in each color class. Then, repeatedly, delete any vertex in H having less
than 0.99d neighbors in H in some color class (other than its own). Thus, each vertex in H has
roughly d neighbors in H in each color class other than its own.
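The construction of H can be sketched as a straightforward, unoptimized peeling loop (our illustration, assuming the planted classes are known, as they are in the analysis):

```python
def core_subgraph(neighbors, classes, d):
    """Sketch of H: start from the vertices with at most 1.01d neighbors
    in each color class, then repeatedly delete vertices with fewer than
    0.99d surviving neighbors in some color class other than their own."""
    def cnt(v, c, inside=None):
        # number of v's neighbors of class c (optionally restricted to a set)
        return sum(1 for u in neighbors[v]
                   if classes[u] == c and (inside is None or u in inside))
    H = {v for v in range(len(neighbors))
         if all(cnt(v, c) <= 1.01 * d for c in range(3))}
    changed = True
    while changed:
        changed = False
        for v in list(H):
            if any(c != classes[v] and cnt(v, c, H) < 0.99 * d
                   for c in range(3)):
                H.discard(v)
                changed = True
    return H

# on K_{4,4,4} with d = 4, every vertex satisfies both conditions
nb = [[u for u in range(12) if u // 4 != v // 4] for v in range(12)]
planted = [v // 4 for v in range(12)]
```

The final set does not depend on the deletion order: peeling always converges to the unique maximal subset in which every vertex meets the degree condition.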
Proposition 2.4 Almost surely, by the end of the second phase of the algorithm, all vertices in H
are properly colored.
To prove Proposition 2.4, we need the following lemma.
Lemma 2.5 Almost surely, there are no two subsets of vertices U and W of V such that |U| ≤
0.001n, |W| ≥ |U|/2, and every vertex of W has at least d/4 neighbors in U.
Proof. Note that if there are two such (not necessarily disjoint) subsets U and W, then the number
of edges joining vertices of U and W is at least d|W|/8. Therefore, by a standard calculation (a
union bound over all choices of U and W, together with a Chernoff-type estimate for the probability
that all of the required edges are present), the probability that there exist two such subsets is 2^{−Ω(d)}. □

If a vertex in H is colored incorrectly at the end of iteration i of the algorithm in phase 2 (i.e.,
if it is colored j and does not belong to W j ), it must have more than d=4 neighbors in H colored
incorrectly at the end of iteration To see this, observe that any vertex of H has at most
neighbors outside H , and hence if it has at most d=4 wrongly colored
neighbors in H , it must have at least 0:99d \Gamma d=4 ? d=2 neighbors of each color other than its
correct color and at most d=4 0:04d neighbors of its correct color. By repeatedly applying the
property asserted by the above lemma, with U being the set of vertices of H whose colors at the
end of the (i − 1)-st iteration are incorrect, we deduce that the number of incorrectly colored vertices
decreases by a factor of two (at least) in each iteration, implying that all vertices of H will be
correctly colored after ⌈log₂ n⌉ iterations. This completes the proof of Proposition 2.4. We note
that by being more careful one can show that O(log_d n) iterations suffice here, but since this only
slightly decreases the running time we do not prove the stronger statement here. 2
A standard probabilistic argument based on the Chernoff bound (see, for example, [2, Appendix
A]) shows that almost surely H = V if p ≥ β log n/n, where β is a suitably large constant. Thus,
it follows from Proposition 2.4 that the algorithm almost surely properly colors the graph by the
end of Phase 2 if p ≥ β log n/n.
For two sets of vertices X and Z, let e(X, Z) denote the number of edges (u, v) ∈ E with u ∈ X
and v ∈ Z.
Lemma 2.6 There exists a constant fl ? 0 such that almost surely the following holds.
(i) For any two distinct color classes V 1 and V 2 , and any subset X of V 1 and any subset Y of V 2 ,
(ii) If J is the set of vertices having more than 1.01d neighbors in G in some color class, then |J| ≤ 2^{−γd}n.
Proof. For any subsets X and Y, e(X, Y) is the sum of independent Bernoulli variables. By
standard Chernoff bounds, the probability that there exist two color classes V₁ and V₂, a subset X
of V₁ and a subset Y of V₂ violating (i) is o(1).
Therefore, (i) holds almost surely if γ is a sufficiently small constant. A similar reasoning applies
to (ii). Therefore, both (i) and (ii) hold if γ is a sufficiently small constant. 2
Lemma 2.7 Almost surely, H has at least (1 − 2^{−Ω(d)})n vertices in every color class.
Proof. It suffices to show that there are at most 7 · 2^{−γd}n vertices outside H. Assume for
contradiction that this is not true. Recall that H is obtained by first deleting all the vertices in
J, and then by a deletion process in which vertices with less than 0.99d neighbors in the other
color classes of H are deleted repeatedly. By Lemma 2.6, |J| ≤ 2^{−γd}n almost surely, and so at
least 6 · 2^{−γd}n vertices have been deleted because they had less than 0.99d neighbors in H in some
color class (other than their own). Consider the first time during the deletion process where there
exists a subset X of a color class V_i of cardinality 2^{−γd}n, and a j ∈ {1, 2, 3} − {i} such that every
vertex of X has been deleted because it had less than 0.99d neighbors in the remaining subset of
V_j. Let Y be the set of vertices of V_j deleted so far. Note that every
vertex in X has less than 0.99d neighbors in V_j − Y. We therefore get a contradiction by applying
Lemma 2.6 to (X, Y). 2
2.3 The third phase
We need the following lemma, which is an immediate consequence of Lemma 2.5.
Lemma 2.8 Almost surely, there exists no subset U of V of size at most 0:001n such that the
graph induced on U has minimum degree at least d=2.
Lemma 2.9 Almost surely, by the end of the uncoloring procedure in Phase 3 of the algorithm, all
vertices of H remain colored, and all colored vertices are properly colored, i.e. any vertex colored i
belongs to W i . (We assume, of course, that the numbering of the colors is chosen appropriately).
Proof. By Proposition 2.4 almost surely all vertices of H are properly colored by the end of
Phase 2. Since every vertex of H has at least 0:99d neighbors (in H) in each color class other
than its own, all vertices of H remain colored. Moreover, if a vertex is wrongly colored at the
end of the uncoloring procedure, then it has at least d=2 wrongly colored neighbors. Assume for
contradiction that there exists a wrongly colored vertex at the end of the uncoloring procedure.
Then the subgraph induced on the set of wrongly colored vertices has minimum degree at least
d/2, and hence it must have at least 0.001n vertices by Lemma 2.8. But, since it does not intersect
H, it has at most 2^{−Ω(d)}n vertices by Lemma 2.7, leading to a contradiction. 2
In order to complete the proof of correctness of the algorithm, it remains to show that almost
surely every connected component of the graph induced on the set of uncolored vertices is of size
at most log³ n. We prove this fact in the rest of this section. We note that it is easy to replace
the term log³ n by O(log³ n / d), but for our purposes the above estimate suffices. Note also that
some of these components may actually be components of the original graph G, as for
small values of p the graph G is almost surely disconnected (and has many isolated vertices).
Lemma 2.10 Let K be a graph, (V₁, V₂, V₃) a partition of the vertices of K into three disjoint
subsets, i an integer, and L the set of vertices of K that remain after repeatedly deleting the
vertices having less than i remaining neighbors in V₁, V₂ or V₃. Then the set L does not depend on the order
in which vertices are deleted.
Proof. Let L be the set of vertices that remain after a deletion process according to a given order.
Consider a deletion process according to a different order. Since every vertex in L has at least
i neighbors in L ∩ V_j for each j ∈ {1, 2, 3}, no vertex in L will be deleted in the second deletion
process (otherwise, we get a contradiction by considering the first vertex in L deleted). Therefore,
the set of vertices that remain after the second deletion process contains L, and thus equals L by
symmetry. 2
Lemma 2.10 implies that H does not depend on the order in which vertices are deleted.
Proposition 2.11 Almost surely the largest connected component of the graph induced on V − H
has at most log³ n vertices.
Proof. Let T be a fixed tree on log³ n vertices of V all of whose edges have their two endpoints
in distinct color classes W_i. Our objective is to estimate the probability that G
contains T as a subgraph that does not intersect H, and show that this probability is sufficiently
small to ensure that almost surely the above will not occur for any T. This property would certainly
hold were V − H a random subset of V of cardinality 2^{−Ω(d)}n. Indeed, if this were the case, the
probability that G contains T as a subgraph that does not intersect H would be upper bounded
by the probability 2^{−Ω(d|T|)} that V(T) is a subset of V − H times the probability (d/n)^{|T|−1} that T is
a subgraph of G. This bound is sufficiently small for our needs. Although H is not a random
subset of V, we will be able to show a similar bound on the probability that G contains T as a
subgraph that does not intersect H. To simplify the notation, we let T denote the set of edges of
the tree. Let V(T) be the set of vertices of T, and let I be the subset of all vertices of V(T) whose
degree in T is at most 4. Since T contains |V(T)| − 1 edges, |I| ≥ |V(T)|/2. Let H' be the subset
of V obtained by the following procedure, which resembles that of producing H (but depends on
T as well). First, set H' to be the set of vertices having at most 1.01d − 4 neighbors in G in each
color class. Then delete from H' all vertices of V(T) − I; finally, repeatedly, delete any vertex in
H' having less than 0.99d neighbors in H' in some color class (other than its own).
Lemma 2.12 Let F be a set of edges, each having endpoints in distinct color classes W_i, W_j. Let
H(F ∪ T) be the set obtained by replacing E by F ∪ T in our definition of H, and H'(F) be the set
obtained by replacing E by F in our definition of H'. Then H'(F) ⊆ H(F ∪ T).
Proof. First, we show that the initial value of H'(F), obtained after deleting the vertices
with more than 1.01d − 4 neighbors in a color class of (V, F) and after deleting the vertices in V(T) − I,
is a subset of the initial value of H(F ∪ T). Indeed, let v be any vertex that does not belong to the
initial value of H(F ∪ T), i.e., v has more than 1.01d neighbors in some color class of (V, F ∪ T).
We distinguish two cases:
1. v ∈ V(T) − I. In this case, v does not belong to the initial value of H'(F).
2. v ∉ V(T) − I. Then v is incident with at most 4 edges of T, and so it has more than 1.01d − 4
neighbors in some color class in (V, F).
In both cases, v does not belong to the initial value of H'(F). This implies the assertion of the
lemma, since the initial value of H'(F) is a subgraph of the initial value of H(F ∪ T) and hence,
by Lemma 2.10, any vertex which will be deleted in the deletion process for constructing H will be
deleted in the corresponding deletion process for producing H' as well. 2
Lemma 2.13
Pr[T is a subgraph of G and V(T) ∩ H = ∅] ≤ Pr[I ∩ H' = ∅] Pr[T is a subgraph of G].
Proof. It suffices to show that
Pr[I ∩ H = ∅ and T is a subgraph of G] ≤ Pr[I ∩ H' = ∅] Pr[T is a subgraph of G].
But, by Lemma 2.12,
Pr[I ∩ H = ∅ and T is a subgraph of G]
= \sum_{F : I ∩ H(F∪T) = ∅} Pr[E = F ∪ T]
≤ \sum_{F : I ∩ H'(F−T) = ∅} Pr[E = F ∪ T]
= \sum_{F' : I ∩ H'(F') = ∅} Pr[E − T = F'] Pr[T is a subgraph of G]
= Pr[I ∩ H' = ∅] Pr[T is a subgraph of G],
where F ranges over the sets of edges with endpoints in different color classes, and F' ranges
over those sets that do not intersect T. The third equation follows by regrouping the edge-sets
F according to F' = F − T, noting (the obvious fact) that, for a given set F' that does
not intersect T, the probability that E = F ∪ T for some F such that F − T = F' is equal to
Pr[E − T = F'] Pr[T is a subgraph of G]. The fourth equation follows from the independence of the events
E − T = F' and "T is a subgraph of G". 2
Returning to the proof of Proposition 2.11, we first note that we can assume without loss of
generality that d ≤ β log n, for some constant β > 0 (otherwise H = V almost surely and there is
nothing to prove). If this inequality holds, then
by modifying the arguments in the proof of Lemma 2.7, one can show that each of the graphs
H' (corresponding to the various choices of V(T)) misses at most 2^{−Ω(d)}n vertices in each color
class, with probability at least 1 − 2^{−n^{Θ(1)}}. Since the distribution of H' depends only on V(T)
(assuming the W_i's are fixed), it is not difficult to show that this implies that Pr[I ∩ H' = ∅] is
at most 2^{−Ω(d|I|)}. Since |I| ≥ |V(T)|/2 and since the probability that T is a subgraph of G is
precisely (d/n)^{|V(T)|−1}, we conclude, by Lemma 2.13, that the probability that there exists some T
of size log³ n which is a connected component of the induced subgraph of G on V − H is at most
2^{−Ω(d log³ n)} (d/n)^{log³ n − 1}, multiplied by the number of possible trees of this size, which is at most
\binom{3n}{\log^3 n} (\log^3 n)^{\log^3 n - 2}.
Therefore, the required probability is bounded by
\binom{3n}{\log^3 n} (\log^3 n)^{\log^3 n - 2} \, 2^{-\Omega(d \log^3 n)} (d/n)^{\log^3 n - 1} = o(1),
completing the proof. 2
3 Bounding the eigenvalues
In this section, we prove Proposition 2.1. Let G', W₁, W₂ and W₃ be as in Section
2. We start with the following lemma.
Lemma 3.1 There exists a constant β > 0 such that, almost surely, for any subset X of at most 2^{−βd}n
vertices, e(X, V) < (5/2)d|X|.
Proof. As in the proof of Lemma 2.6, the probability that there exists a subset X of cardinality
εn with e(X, V) ≥ (5/2)d|X| is exponentially small
provided log(1/ε) ≤ d/b, where b is a sufficiently large constant. Therefore, if β is a sufficiently small
constant, this probability goes to 0 as n goes to infinity. 2
Proof of Proposition 2.1.
Parts (i) and (ii) are simple. By the variational definition of eigenvalues (see [19, p. 99]), λ₁
is simply the maximum of x^t Ax/(x^t x), where the maximum is taken over all nonzero vectors x.
Therefore, by taking x to be the all-1 vector we obtain the well known result that λ₁ is at least
the average degree of G'. By the known estimates for Binomial distributions, the average degree
of G is (1 + o(1))2d. On the other hand, Lemma 3.1 can be used to show that the deletions defining
G' decrease the average degree only slightly, as it easily implies that the number of vertices of degree
greater than 5d in each color class of G is almost surely less than 2^{−βd}n. Hence, the average degree
of G' is at least (1 − 2^{−Ω(d)})2d. This proves (i).
The proof of (ii) is similar. It is known [19, p. 101] that λ_{3n−1} = min_F max_{0≠x∈F} x^t Ax/(x^t x),
where the minimum is taken over all two-dimensional subspaces F of R^{3n}. Let F denote the 2-
dimensional subspace of all vectors that are constant on each color class W_i and whose coordinates
sum to zero, and let e'_{ij} denote the number of edges of G' between W_i and W_j. Almost surely e'_{ij}
≥ (1 − 2^{−Ω(d)})nd for all 1 ≤ i < j ≤ 3. It
follows that x^t Ax/(x^t x) ≤ −(1 − 2^{−Ω(d)})d almost surely for all x ∈ F, implying that λ_{3n−1} ≤
−(1 − 2^{−Ω(d)})d, and establishing (ii).
The proof of (iii) is more complicated. Its assertion for somewhat bigger p
can be deduced from the arguments of [10]. To prove it for the graph G' and p ≥ c/n
we use the basic approach of Kahn and Szemer'edi in [11], where the authors show that the second
largest eigenvalue in absolute value of a random d-regular graph is almost surely O(√d). (See also
[9] for a different proof.) Since in our case the graph is not regular a few modifications are needed.
Our starting point is again the variational definition of the eigenvalues, from which we will deduce
that it suffices to show that almost surely the following holds.
Lemma 3.2 Let S be the set of all unit vectors orthogonal to F whose sum of coordinates is zero.
Then almost surely x^t Ax = O(√d) for all x ∈ S.
The matrix A consists of nine blocks arising from the partition of its rows and columns according
to the classes W_j. It is clearly sufficient to show that the contribution of each block to the sum
x^t Ax is bounded, in absolute value, by O(√d). This, together with a simple argument based on
ε-nets (see [11], Proposition 2.1), can be used to show that Lemma 3.2 follows from the following
statement.
Let T denote the set of all vectors x of length n every coordinate of
which is an integral multiple of ε/√n, where the sum of coordinates is zero and the l₂-norm is at
most 1. Let B be a random n by n matrix with 0, 1 entries, where each entry of B, randomly and
independently, is 1 with probability d/n.
Lemma 3.3 If d exceeds a sufficiently large absolute constant then almost surely, |x^t By| = O(√d)
for every x, y ∈ T for which x_u = 0 if the corresponding row of B has more than 5d nonzero entries
and y_v = 0 if the corresponding column of B has more than 5d nonzero entries.
The last lemma is proved, as in [11], by separately bounding the contribution of terms x_u y_v with
small absolute values and the contribution of similar terms with large absolute values. Here is a
description of the details that differ from those that appear in [11]. Let C denote the set of all
pairs (u, v) with |x_u y_v| ≤ √d/n and let X = \sum_{(u,v) ∈ C} x_u y_v B_{uv}. As in [11] one can show that
the absolute value of the expectation of X is at most √d. Next one has to show that with high
probability X does not deviate from its expectation by more than c√d. This is different (and
in fact, somewhat easier) than the corresponding result in [11], since here we are dealing with
independent random choices.
independent random choices. It is convenient to use the following variant of the Chernoff bound.
Lemma 3.4 Let a₁, …, a_m be (not necessarily positive) reals, and let Z be the random variable
\sum_i a_i t_i, where the t_i are chosen, randomly and independently, to be 1 with probability p
and 0 with probability 1 − p. Put D = \sum_i a_i² and S = max_i |a_i|, and suppose that tS ≤ ce^c pD for some positive
constant c. Then Pr[|Z − E(Z)| > t] ≤ 2e^{−t²/(2e^c pD)}.
For the proof, one first proves the following.
Lemma 3.5 Let c be a positive real. Then for every x ≤ c, e^x ≤ 1 + x + (e^c/2)x².
Proof. Define f(x) = 1 + x + (e^c/2)x² − e^x.
Then f''(x) = e^c − e^x ≥ 0 for all x ≤ c, and as f'(0) = 0 this shows that f'(x) ≤ 0 for x ≤ 0 and f'(x) ≥ 0 for 0 ≤ x ≤ c,
implying that f(x) is nonincreasing for x ≤ 0 and nondecreasing for 0 ≤ x ≤ c. Since f(0) = 0, the assertion follows. 2
Proof of Lemma 3.4.
Set λ = t/(e^c pD); then, by assumption, λa_i ≤ λS ≤ c for all i. Therefore, by the
above lemma,
E(e^{λZ}) =
\prod_i [pe^{λa_i} + (1 − p)] ≤
\prod_i [1 + p(λa_i + (e^c/2)λ²a_i²)] ≤
e^{λp\sum_i a_i + (e^c/2)λ²pD} =
e^{λE(Z) + (e^c/2)λ²pD}.
Therefore,
Pr[Z − E(Z) > t] ≤ e^{−λ(E(Z)+t)} E(e^{λZ}) ≤ e^{−λt + (e^c/2)λ²pD} = e^{−t²/(2e^c pD)}.
Applying the same argument to the random variable defined with respect to the reals −a_i, the
assertion of the lemma follows. 2
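Lemma 3.5 is elementary and easy to sanity-check numerically; the snippet below (an illustration, not part of the paper) verifies the inequality e^x ≤ 1 + x + (e^c/2)x² on a grid of points x ≤ c for several values of c.

```python
import math

def lemma_3_5_holds(c, x):
    """Check e^x <= 1 + x + (e^c / 2) * x**2 at a single point x <= c,
    with a tiny slack for floating-point rounding."""
    assert x <= c
    return math.exp(x) <= 1.0 + x + (math.exp(c) / 2.0) * x * x + 1e-12

# scan a grid of points in [-50, 0], plus the boundary point x = c itself,
# where the inequality is tightest for small c
for c in (0.1, 1.0, 3.0):
    xs = [-50 + 0.05 * i for i in range(1001)] + [c]
    assert all(lemma_3_5_holds(c, x) for x in xs)
```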
Using Lemma 3.4 it is not difficult to deduce that almost surely the contribution of the pairs in
C to |x^t By| is O(√d). This is because we can simply apply the lemma, with the a_i's
being all the terms x_u y_v where (u, v) ∈ C, and with t = ce^c√d for some c > 0.
Since here |a_i| ≤ √d/n and p = d/n, we conclude that for every fixed vectors x and y in T, the
probability that X deviates from its expectation (which is O(√d)) by more than ce^c√d is smaller
than 2e^{−c²e^c n/2}, and since the cardinality of T is only b^n for some absolute constant b, we
can choose c so that X would almost surely not deviate from its expectation by more than ce^c√d.
The contribution of the terms x_u y_v whose absolute values exceed √d/n can be bounded by
following the arguments of [11], with a minor modification arising from the fact that the maximum
number of ones in a row (or column) of B can exceed d (but can never exceed 5d in a row or a
column in which the corresponding coordinates x u or y v are nonzero). We sketch the argument
below. We start with the following lemma.
Lemma 3.6 There exists a constant C such that, with high probability, for any distinct color classes
V₁ and V₂, any subset U of V₁ and any subset W of V₂ such that |U| ≤ |W|, at least one of the
following two conditions holds:
1. e'(U, W) ≤ C μ(U, W);
2. e'(U, W) log(e'(U, W)/μ(U, W)) ≤ C |W| log(n/|W|);
where e'(U, W) is the number of edges in G' between U and W, and μ(U, W) = |U||W|d/n is the
expected number of edges in G between U and W.
Proof. Condition 1 is clearly satisfied if |W| ≥ n/2, since the maximum degree in G' is at most
5d. So we can assume without loss of generality that |W| ≤ n/2. Given two subsets U and
W satisfying the requirements of the lemma, define β = β(U, W) to be the unique positive real
number such that β log β · μ(U, W) = C |W| log(n/|W|) (the constant C will be determined later.)
Condition 2 is equivalent to e'(U, W) ≤ β μ(U, W). Thus U, W violate Condition 1 as well as
Condition 2 only if e'(U, W) > max(C, β) μ(U, W). Hence, by standard Chernoff
bounds, the probability of this event is at most e^{−γ β log β μ(U,W)} = e^{−γ C |W| log(n/|W|)} for some absolute
constant γ > 0. Denoting |W|/n by b, the probability that there exist two subsets U and W that
do not satisfy either condition is at most
\sum_{b: bn integer ≤ n/2} \binom{n}{bn}² e^{−γ C bn log(1/b)} = o(1)
if C is a sufficiently large constant. 2
Kahn and Szemer'edi [11] show that for any d-regular graph satisfying the conditions of Lemma 3.6
(without restriction on the ranges of U and W), the contribution of the terms x_u y_v whose absolute
values exceed √d/n is O(√d). Up to replacing some occurrences of d by 5d, the same proof shows
that, for any 3-colorable graph of maximum degree 5d satisfying the conditions of Lemma 3.6, the
contribution of the terms x_u y_v whose absolute values exceed √d/n is O(√d). This implies the
assertion of Lemma 3.3, which implies Lemma 3.2.
To deduce (iii), we need the following lemma.
Lemma 3.7 Let F denote, as before, the 2-dimensional subspace of all vectors that are constant
on each color class W_i and whose coordinates sum to zero. Then, almost surely, for all f ∈ F we
have ||Af + df|| = O(√d ||f||).
Proof. Let x, y be as in the proof of Lemma 2.3. Note that x^t y = 0 and that both ||x||² and ||y||²
are Θ(n). Thus every vector f ∈ F can be expressed as the sum of two orthogonal vectors x' and
y', proportional to x and y respectively. Lemma 2.3 shows that ||Ax + dx|| = O(√d ||x||), which
implies that ||Ax' + dx'|| = O(√d ||x'||). Similarly, it can be shown
that ||Ay' + dy'|| = O(√d ||y'||). We conclude the proof of the lemma using the triangle inequality. 2
We now show that λ₂ = O(√d) by using the formula λ₂ = min_H max_{0≠x∈H} x^t Ax/(x^t x), where
H ranges over the linear subspaces of R^{3n} of codimension 1. Indeed, let H be the set of vectors
whose sum of coordinates is 0. Any x ∈ H is of the form f + s, where f ∈ F and s is a multiple of
a vector in S. Writing s^t Af = s^t(Af + df) (as s is orthogonal to f) and using Lemmas 3.2 and 3.7,
x^t Ax = f^t Af + 2 s^t Af + s^t As ≤ f^t Af + O(√d ||s|| ||f||) + O(√d ||s||²) = O(√d ||x||²),
since f^t Af ≤ 0 almost surely.
This implies the desired upper bound on λ₂.
The bound |λ_{3n−2}| = O(√d) can be deduced from similar arguments, namely by showing that
|x^t Ax| = O(√d ||x||²) for any x ∈ F^⊥. This completes the proof of Proposition 2.1. 2

Figure 1: Implementation results.
  Number of vertices     d
                1000    12
              100000     8
4 Implementation and Experimental Results.
We have implemented the following tuned version of our algorithm. The first two phases are as
described in Section 1. In the third phase, we find the minimum i such that, after repeatedly uncoloring every vertex colored j that has less than i neighbors colored l, for some l ∈ {1, 2, 3} − {j},
the algorithm can find a proper coloring using brute force exhaustive search on every component
of uncolored vertices. If the brute force search takes more steps than the first phase (up to a multiplicative
constant), the algorithm fails. Otherwise, it outputs a legal coloring. The eigenvectors
e_{3n} and e_{3n−1} are calculated approximately using an iterative procedure. The coordinates of the
initial vectors are independent random variables uniformly chosen in [0, 1].
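The iterative procedure is not spelled out above; one standard realization, sketched here as an assumption rather than the paper's actual implementation, is orthogonal (block power) iteration on a shifted matrix σI − A, chosen so that the two most negative eigenvalues of A become the two dominant eigenvalues of the shifted matrix, starting from random vectors with coordinates in [0, 1].

```python
import random

def smallest_two_eigenvectors(A, shift, iters=200, seed=0):
    """Approximate the eigenvectors of the two smallest eigenvalues of the
    symmetric matrix A (a list of rows) by power iteration on M = shift*I - A,
    re-orthonormalizing the second vector against the first at every step."""
    rng = random.Random(seed)
    n = len(A)
    def matvec(x):
        return [shift * x[i] - sum(A[i][j] * x[j] for j in range(n))
                for i in range(n)]
    def dot(x, y):
        return sum(a * b for a, b in zip(x, y))
    def normalize(x):
        s = dot(x, x) ** 0.5
        return [a / s for a in x]
    u = normalize([rng.random() for _ in range(n)])
    v = normalize([rng.random() for _ in range(n)])
    for _ in range(iters):
        u = normalize(matvec(u))
        w = matvec(v)
        p = dot(w, u)                      # Gram-Schmidt step against u
        v = normalize([a - p * b for a, b in zip(w, u)])
    return u, v
```

On a diagonal test matrix this recovers the two smallest eigendirections; in the algorithm the role of A is played by the adjacency matrix of G', with matvec implemented sparsely.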
The range of values of p where the algorithm succeeded was in fact considerably larger than
what our analysis predicts. Figure 1 shows some values of the parameters for which we tested our
algorithm. For each of these parameters, the algorithm was run on more than a hundred graphs
drawn from the corresponding distribution, and found successfully a proper coloring for all these
tests. The running time was a few minutes on a Sun SPARCstation 2 for the largest graphs. The
algorithm failed for some graphs drawn from distributions with smaller integral values of d than
the one in the corresponding row. Note that the number of vertices is not a multiple of 3; the size
of one color class exceeds the others by one.
Concluding remarks
1. There are many heuristic graph algorithms based on spectral techniques, but very few rigorous
proofs of correctness for any of those in a reasonable model of random graphs. Our main
result here provides such an example. Another example is the algorithm of Boppana [7],
who designed an algorithm for graph bisection based on eigenvalues, and showed that it finds
the best bisection almost surely in an appropriately defined model of random graphs with a
relatively small bisection width. Aspvall and Gilbert [1] gave a heuristic for graph coloring
based on eigenvectors of the adjacency matrix, and showed that their heuristic optimally colors
complete 3-partite graphs as well as certain other classes of graphs with regular structure.
2. By modifying some of the arguments of Section 2 we can show that if p is somewhat bigger
(p ≥ log³ n/n suffices) then almost surely the initial coloring V⁰_i that is computed from
the eigenvectors e_{3n−1} and e_{3n} in the first phase of our algorithm is completely correct. In
this case the last two phases of the algorithm are not needed. By refining the argument in
Subsection 2.2, it can also be shown that if p ≥ β log n/n the third phase of the algorithm is
not needed, and the coloring obtained by the end of the second phase will almost surely be
the correct one.
3. We can show that a variant of our algorithm finds, almost surely, a proper coloring in the
model of random regular 3-colorable graphs in which one chooses randomly d perfect matchings
between each pair of distinct color classes, when d is a sufficiently large absolute constant.
Here, in fact, the proof is simpler, as the smallest two eigenvalues (and their corresponding
eigenvectors) are known precisely, as noted in Subsection 1.2.
4. The results easily extend to the model in which each vertex first picks a color randomly,
independently and uniformly, among the three possibilities, and next every pair of vertices of
distinct colors becomes an edge with probability p (> c/n).
5. If p ≤ c/n for a sufficiently small positive constant c, it is not difficult to show
that almost surely G does not have any subgraph with minimum degree at least 3, and hence
it is easy to 3-color it by a greedy-type (linear time) algorithm. For values of p which are
bigger than this c/n but satisfy p = o(log n/n), the graph G is almost surely disconnected,
and has a unique component of Ω(n) vertices, which is called the giant component in the
study of random graphs (see, e.g., [2], [4]). All other components are almost surely sparse,
i.e., contain no subgraph with minimum degree at least 3, and can thus be easily colored in
total linear time. Our approach here suffices to find, almost surely, a proper 3-coloring of the
giant component (and hence of the whole graph) for all p ≥ c/n, where c is a sufficiently large
absolute constant, and there are possible modifications of it that may even work for all values
of p. At the moment, however, we are unable to obtain an algorithm that provably works
for all values of p almost surely. Note that, for any constant c, if p < c/n then the greedy
algorithm will almost surely color G 3n;p;3 with a constant number of colors. Thus, our result
implies that G 3n;p;3 can be almost surely colored in polynomial time with a constant number
of colors for all values of p.
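The observation in the remark above, that a graph with no subgraph of minimum degree at least 3 is easy to 3-color greedily in linear time, can be realized by repeatedly stripping vertices of degree at most 2 and coloring them in reverse removal order; each vertex then has at most two colored neighbors when its color is chosen. The sketch below is an illustration of this standard technique, not the paper's code.

```python
def greedy_3_color(adj):
    """3-color a graph with no subgraph of minimum degree >= 3.
    Returns None if the stripping gets stuck (a min-degree-3 core exists)."""
    n = len(adj)
    deg = [len(a) for a in adj]
    removed = [False] * n
    stack = [v for v in range(n) if deg[v] <= 2]
    order = []
    while stack:
        v = stack.pop()
        if removed[v]:
            continue
        removed[v] = True
        order.append(v)
        for u in adj[v]:
            if not removed[u]:
                deg[u] -= 1
                if deg[u] <= 2:
                    stack.append(u)
    if len(order) < n:
        return None      # remaining vertices induce a subgraph of min degree >= 3
    color = [None] * n
    for v in reversed(order):        # at most 2 neighbors are already colored
        used = {color[u] for u in adj[v] if color[u] is not None}
        color[v] = min(c for c in (0, 1, 2) if c not in used)
    return color
```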
6. Our basic approach easily extends to k-colorable graphs, for every fixed k, as follows. Phase
2 and Phase 3 of the algorithm are essentially the same as in the case k = 3; Phase 1 needs
to be modified to extract an approximation of the coloring. Let e_i, i ≥ 1, be an eigenvector of
G' corresponding to its ith largest eigenvalue (replace 5d by 5kd in the definition of G'). Find
vectors x₁, …, x_{k−1} in the subspace spanned by e_{kn−k+2}, …, e_{kn}. For a vector x_i
and a real z, let W_ε be the set of
vertices whose coordinates in x_i are in (z − ε, z + ε). If, for some i and z, both |W_{ε_k}| and
|W_{2ε_k}| deviate from n by at most β_k n/d, where ε_k and β_k are constants depending on k, color
the elements in W_{ε_k}
with a new color and delete them from the graph. Repeat this process
until the number of vertices left is O(n/d), and color the remaining vertices in an arbitrary
manner.
7. The existence of an approximation algorithm based on the spectral method for coloring arbitrary
graphs is a question that deserves further investigation (which we do not address here.)
Recently, improved approximation algorithms for graph coloring have been obtained using
semidefinite programming [14], [5].
Acknowledgement
We thank two anonymous referees for several suggestions that improved the
presentation of the paper.
References
Graph coloring using eigenvalue decomposition
The Probabilistic Method
Some tools for approximate 3-coloring
Journal of Algorithms 19
Eigenvalues and graph bisection: An average case analysis
The solution of some random NP-Hard problems in polynomial expected time
On the second eigenvalue and random walks in random d-regular graphs
On the second eigenvalue in random regular graphs
Computers and intractability: a guide to the theory of NP-completeness
Reducibility among combinatorial problems
Approximate graph coloring by semidefinite programming
Expected behavior of graph colouring algorithms
A randomised 3-colouring algorithm
A First Course in Numerical Analysis
Almost all k-colorable graphs are easy to color
The Algebraic Eigenvalue Problem
During the past few years, significant efforts have been undertaken by academia, government
laboratories and industry to define high-level extensions of standard programming languages, in
particular Fortran, to facilitate data parallel programming on a wide range of parallel architectures
without sacrificing performance. Important results of this work are Vienna Fortran [10, 28],
Fortran D [15] and High Performance Fortran (HPF) [18], which is intended to become a de-facto
standard. These languages extend Fortran 77 and Fortran 90 with directives for specifying alignment
and distribution of a program's data among the processors, thus enabling the programmer
to influence the locality of computation whilst retaining a single thread of control and a global
name space. The low-level task of mapping the computation to the target processors in the
framework of the Single-Program-Multiple-Data (SPMD) model, and of inserting communication
for non-local accesses is left to the compiler.
HPF-1, the original version of High Performance Fortran, focussed its attention on regular com-
putations, and on providing a set of basic distributions (block, cyclic and replication). Although
the approved extensions of HPF-2 include facilities for expressing irregular distributions using
INDIRECT , no special support for sparse data structures has been proposed.
In this paper, we consider the specific requirements for sparse computations as they arise in a
variety of problem areas such as molecular dynamics, matrix decompositions, solution of linear
systems, image reconstruction and many others.
In order to parallelize sequential sparse codes effectively, three fundamental issues must be
addressed:
1. We must distribute the data structures typically used in such codes.
2. It is necessary to generalize the representation of sparse matrices on a single processor to
distributed-memory machines in such a way that the savings in memory and computation
are also achieved in the parallel code.
3. The compiler must be able to adapt the global computation to the local computation on
each processor, resolving the additional complexity that sparse methods introduce.
This paper presents an approach to solve these three problems. First, a new data type has
been introduced in the Vienna Fortran language for representing sparse matrices. Then, data
distributions have been explicitly designed to map this data type onto the processors in such a
way that we can exploit the locality of sparse computations and preserve a compact representation
of matrices and vectors, thereby obtaining an efficient workload balance and minimizing
communication. Some experiments in parallelizing sparse codes by hand [2], not only confirmed
the suitability of these distributions, but also the excessive amount of time spent during the
development and debugging stages of manual parallelization.
This encouraged us to build a compiler to specify these algorithms in a high-level data-parallel
language. In this way, new elements were introduced to Vienna Fortran to extend its functionality
and expressivity for irregular problems. Subsequently, compiler and runtime techniques were
developed to enable specific optimizations to handle typical features of sparse code, including
indirect array accesses and the appearance of array elements in loop bounds.
The result is a powerful mechanism for storing and manipulating sparse matrices, which can
be used in a data-parallel compiler to generate efficient SPMD programs for irregular codes of
this kind. In this paper, we assume the representation and distribution of sparse data to be
invariant. However, the fact that the representation for sparse data is computed at runtime simplifies
the additional support for handling more complex features such as dynamic redistribution or the
matrix fill-in (i.e., the runtime insertion of additional non-zero elements into the sparse matrix).
The rest of the paper is organized as follows. Section 2 introduces some basic formalism and
background for handling sparse matrices. Section 3 presents several data distributions for sparse
problems; Section 4 describes new directives for the specification of these distributions in the Vienna
Fortran language. Sections 5 and 6 respectively outline the runtime support and compilation
technology required for the implementation of these features. Sections 7 and 8 present experimental
results; we finish in Sections 9 and 10 with a discussion of related work and conclusions.
2 Representing Sparse Matrices on Distributed-Memory Machines
A matrix is called sparse if only a small number of its elements are non-zero. A range of
methods have been developed which enable sparse computations to be performed with considerable
savings in terms of both memory and computation [16]. Solution schemes are often optimized to
take advantage of the structure within the matrix.
This has consequences for parallelization. Firstly, we want to retain as much of these savings
as possible in the parallel code. Secondly, in order to achieve a good load balance at runtime, it
is necessary to understand how this can be achieved in terms of the data structures which occur
in sparse problem formulations.
In this section, we discuss methods for representing sparse matrices on distributed-memory
machines. We assume here that the reader is familiar with the basic distribution functions of
Vienna Fortran and HPF [10, 28, 18], namely BLOCK and CYCLIC(K).
Throughout this paper, we denote the set of target processors by PROCS and assume that the
data is being distributed to a two-dimensional mesh PROCS of X × Y processors, numbered from
0 in each dimension. Specifically, we assume that the Vienna Fortran or HPF code will include
the following declaration:
Note that this abstract processor declaration does not imply any specific topology of the actual
processor interconnection network.
2.1 Basic Notation and Terminology
Each array A is associated with an index domain which we denote by I^A. A (replication-free)
distribution of A is a total function δ^A : I^A → PROCS that maps each array element to a
processor which becomes the owner of the element and, in this capacity, stores the element in its
local memory. Further, for any processor p ∈ PROCS we denote by λ^A(p) the set of all elements
of A which are local to p; this is called the local segment of A in p.
In the following, we will assume that A is a two-dimensional real array representing a sparse
matrix, declared with index domain I = [1 : n] × [1 : m]. Most of the notation introduced
below will be related to A, without explicitly reflecting this dependence. We begin by defining a
set of auxiliary functions.
Definition 1
1. The symbolic matrix associated with A is a total predicate σ : I → {true, false} such
that, for all i ∈ I, σ(i) = true iff A(i) ≠ 0.
2. α := | {i | i ∈ I ∧ σ(i)} | specifies the number of matrix elements with a non-zero value.
3. π : I → {1, …, n · m} is a bijective enumeration of A, which numbers all elements of A
consecutively in some order, starting with 1.
4. Assume an enumeration π to be selected. ν : {1, …, α} → I is a total function such that
ν(t) is the t-th index under π which is associated with a non-zero element of A.
By default, we will use an enumeration in the following which numbers the elements of A
row-wise, i.e., we will assume π(i, j) = (i - 1) · m + j.
When a sparse matrix is mapped to a distributed-memory machine, our approach will require
two kinds of information to be specified by the user. These are:
1. The representation of the sparse matrix on a single processor. This is called a sparse
format.
2. The distribution of the matrix across the processors of the machine.
In this context, the concept of a distribution is used as if the matrix were dense.
The combination of a sparse format with a distribution will be called a distributed sparse
representation of the matrix.
2.2 Sparse Formats
Before we discuss data distribution strategies for sparse data, we must understand how such
data is usually represented on a single processor. Numerous storage formats have been proposed
in the sparse-matrix literature; for our work we have adopted the widely used CRS (Compressed
Row Storage) format; the same approach can be extended to CCS (Compressed Column Storage)
by simply swapping rows and columns throughout the text.
In the following, we will for simplicity only consider sparse matrices with real elements; this can
be immediately generalized to include other element types such as logical, integer, and complex.
Definition 2 The Compressed Row Storage (CRS) sparse format is determined by a triple
of functions, (DA, CO, RO):
1. DA : {1, …, α} → R (total), the data function, is defined by DA(t) := A(ν(t)) for all t,
where R denotes the set of real numbers.
2. CO : {1, …, α} → {1, …, m} (total), the column function, is defined by CO(t) := ν(t).2
for all t.
3. RO : {1, …, n + 1} → {1, …, α + 1} (total), the row function, is defined as follows:
(a) Let i denote an arbitrary row number. Then RO(i) := min{t | ν(t).1 = i} if at
least one t with the specified property exists; otherwise RO(i) := RO(i + 1).
(b) RO(n + 1) := α + 1.
These three functions can be represented in an obvious way as vectors of α real numbers (the
data vector), α column numbers (the column vector), and n + 1 numbers in the range 1, …, α + 1
(the row vector), respectively (see Figure 1.b). The data vector stores the non-zero
values of the matrix, as they are traversed in a row-wise fashion. The column vector stores the
column indices of the elements in the data vector. Finally, the row vector stores the indices in the
data vector that correspond to the first non-zero element of each row (if such an element exists).
The storage savings achieved by this approach are usually significant. Instead of storing n × m
elements, we need only 2α + n + 1 locations.
Sparse matrix algorithms designed for the CRS format typically use a nested loop, with the
outer loop iterating over the rows of the matrix and an inner loop iterating over the non-zeros
in that row (see examples in Section 4). Matrix elements are identified using a two-dimensional
index set, say (i,jj), where i denotes the i-th row of the matrix and jj denotes the jj-th non-zero
in that row. The matrix element referred to by (i,jj) is the one at row number i, column
number CO(RO(i)+jj), and has the non-zero value stored in DA(RO(i)+jj).
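As a concrete illustration, the triple (DA, CO, RO) and the (i,jj) addressing scheme described above can be sketched in a few lines of Python; the dense input matrix and the helper names are ours, not part of the paper's code:

```python
def to_crs(A):
    """Build the CRS triple (DA, CO, RO) of a dense matrix A (list of rows).

    DA holds the non-zero values in row-wise order, CO their (1-based)
    column numbers, and RO the 1-based index in DA at which each row
    starts; RO has n+1 entries so that row i occupies DA[RO[i]..RO[i+1]-1].
    """
    DA, CO, RO = [], [], []
    for row in A:
        RO.append(len(DA) + 1)            # start of this row (empty rows reuse the next start)
        for j, v in enumerate(row, start=1):
            if v != 0:
                DA.append(v)
                CO.append(j)
    RO.append(len(DA) + 1)                # sentinel entry: alpha + 1
    return DA, CO, RO

def element(DA, CO, RO, i, jj):
    """Return (column, value) of the jj-th non-zero (0-based) of row i (1-based)."""
    t = RO[i - 1] + jj                    # 1-based position in DA/CO
    return CO[t - 1], DA[t - 1]

DA, CO, RO = to_crs([[0, 5, 0, 0],
                     [1, 0, 0, 2],
                     [0, 0, 0, 0],
                     [3, 0, 4, 0]])
# DA = [5, 1, 2, 3, 4], CO = [2, 1, 4, 1, 3], RO = [1, 2, 4, 4, 6]
```

For this 4 × 4 example, the CRS vectors occupy 2α + n + 1 = 15 locations in place of 16 dense entries; the savings grow quickly with sparsity.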
The heavy use of indirect accesses that sparse representations require introduces a major source
of complexity and inefficiency when parallelizing these codes on distributed-memory machines. A
number of optimizations will be presented later on to overcome this.
3 Distributed Sparse Representations
Let A denote a sparse matrix as discussed above, and δ be an associated distribution. A
distributed sparse representation for A results from combining δ with a sparse format. This
is to be understood as follows: the distribution δ is interpreted in the conventional sense, i.e., as
2 For a pair z = (x, y) of numbers, z.1 = x and z.2 = y.
if A were a dense Fortran array: δ determines a locality function, λ, which, for each p ∈ PROCS,
specifies the local segment λ(p). Each λ(p) is again a sparse matrix. The distributed sparse
representation of A is then obtained by constructing a representation of the elements in λ(p),
based on the given sparse format. That is, DA, CO, and RO are automatically converted to the
sets of vectors DA_p, CO_p, and RO_p, p ∈ PROCS. Hence the parallel code will save computation
and storage using the very same mechanisms that were applied in the original program.
For the sparse format, we use CRS to illustrate our ideas. For the data distributions, we introduce
two different schemes in subsequent sections, both decomposing the sparse global domain
into as many sparse local domains as required.
3.1 Multiple Recursive Decomposition (MRD)
Common approaches for partitioning unstructured meshes while keeping neighborhood properties
are based upon coordinate bisection, graph bisection and spectral bisection [8, 19]. Spectral
bisection minimizes communication, but requires huge tables to store the boundaries of each local
region and an expensive algorithm to compute it. Graph bisection is algorithmically less expensive,
but also requires large data structures. Coordinate bisection tends to significantly reduce
the time to compute the partition at the expense of a slight increase in communication time.
Binary Recursive Decomposition (BRD), as proposed by Berger and Bokhari [4], belongs to
the last of these categories. BRD specifies a distribution algorithm where the sparse matrix A is
recursively bisected, alternating vertical and horizontal partitioning steps until there are as many
submatrices as processors. Each submatrix is mapped to a unique processor. A more flexible
variant of this algorithm produces partitions in which the shapes of the individual rectangles are
optimized with respect to a user-determined function [7].
In this section, we define Multiple Recursive Decomposition (MRD), a generalization of the
BRD method, which also improves the communication structure of the code.
We again assume the processor array to be declared as PROCS(X, Y). Let X · Y = P_1 · P_2 · … · P_k
be the prime factor decomposition of X · Y, ordered in such a way that
the prime factors of X, sorted in descending order, come first and are followed by the factors of
Y, sorted in the same fashion.
The MRD distribution method produces an X × Y partition of matrix A in k steps, recursively
performing horizontal divisions of the matrix for the prime factors of X, and vertical ones for the
prime factors of Y :
Step 1: Matrix A is partitioned into P_1 submatrices in such a way that the non-zero elements
are spread across the submatrices as evenly as possible. When a submatrix is partitioned
horizontally, any rows with no non-zero entries which are not uniquely assigned to either
partition are included in the lower one; in a vertical step, such columns are assigned to the
right partition.
Step i (2 ≤ i ≤ k): Each submatrix resulting from step i-1 is partitioned into P_i submatrices
using the same criteria as before.
When this process terminates, we have created P_1 · P_2 · … · P_k = X · Y submatrices. We
enumerate these consecutively from 0 to X-1 and from 0 to Y-1 in the two dimensions, using a
horizontal ordering scheme; the submatrix with coordinates (r, s) is then mapped to processor
PROCS(r,s).
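The decomposition just described can be sketched as follows. This is an illustrative Python rendering, not the paper's implementation: it balances non-zero counts when splitting a coordinate list, without enforcing the cuts to fall exactly on row or column boundaries:

```python
def prime_factors(x):
    """Prime factors of x in descending order (e.g. 12 -> [3, 2, 2])."""
    fs, d = [], 2
    while d * d <= x:
        while x % d == 0:
            fs.append(d)
            x //= d
        d += 1
    if x > 1:
        fs.append(x)
    return sorted(fs, reverse=True)

def split(coords, p, axis):
    """Split a list of non-zero coordinates into p parts along `axis`
    (0 = horizontal/rows, 1 = vertical/columns), keeping the parts as
    balanced as possible in their number of non-zeros."""
    coords = sorted(coords, key=lambda c: c[axis])
    k, r = divmod(len(coords), p)
    parts, start = [], 0
    for i in range(p):
        size = k + (1 if i < r else 0)
        parts.append(coords[start:start + size])
        start += size
    return parts

def mrd(coords, X, Y):
    """MRD sketch: horizontal cuts for the prime factors of X, followed
    by vertical cuts for the prime factors of Y."""
    parts = [coords]
    for axis, n in ((0, X), (1, Y)):
        for p in prime_factors(n):
            parts = [piece for part in parts for piece in split(part, p, axis)]
    return parts

coords = [(i, j) for i in range(6) for j in range(2) if (i + j) % 3]  # toy pattern
parts = mrd(coords, 2, 2)
```

For a 2 × 2 mesh, `prime_factors` yields exactly one horizontal and one vertical bisection, i.e., the BRD special case.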
This distribution defines the local segment of each processor as a rectangular matrix which
preserves neighborhood properties and achieves a good load balance (see Figure 2). The fact that
we perform all horizontal partitioning steps before the vertical ones reduces the number of possible
neighbors that a submatrix may have, and hence simplifies further analysis to be performed by
the compiler and runtime system. When combined with the CRS representation for the local
segments, the MRD distribution produces the MRD-CRS distributed sparse representation. This
can be immediately generalized to other storage formats; however, since we only use CRS here to
illustrate our ideas, we refer to MRD-CRS simply as MRD.
3.2 BRS Distributed Sparse Representation
The second strategy is based on a cyclic distribution (see Figure 4.a). This does not retain
locality of access; as in the regular case, it is suitable when the workload is spread unevenly across
the matrix or shows no periodicity, or when the density of the matrix varies over time. Many
common algorithms are of this nature, including sparse matrix decompositions (LU, Cholesky,
QR, WZ) and some image reconstruction algorithms.
In this section, we assume both dimensions of A to be distributed cyclically with block length
1 (see Figure 4.b). Several variants for the representation of the distribution segment in this
context are described in the literature, including the MM, ESS and BBS methods [1]. Here we
consider a CRS sparse format, which results in the BRS (Block Row Scatter) distributed sparse
representation. A very similar distributed representation is that of BCS (Block Column Scatter)
[26], where the sparse format is compressed by columns; it is obtained by simply interchanging
the roles of rows and columns.
The mapping which is established by the BRS choice requires complex auxiliary structures
and translation schemes within the compiler. However, if such data are used together with
cyclically-distributed dense arrays, then the structures are properly aligned, leading to savings in
communication.
4 Extensions for the Support of Sparse Matrix Computation
4.1 Language Considerations
This section proposes new language features for the specification of sparse data in a data parallel
language. Clearly, block and cyclic distributions as offered in HPF-1 are not adequate for this
purpose; on the other hand, INDIRECT distributions [15, 28], which have been included in the approved
extensions of HPF-2, do not allow the specification of the structure inherent in distributed
sparse representations, and thus introduce unnecessary complexity in memory consumption and
execution time. Our proposal makes this structure explicit by appropriate new language elements,
which can be seen as providing a special syntax for an important special case of a user-defined
distribution function as defined in Vienna Fortran or HPF+ [11, 12].
The new language features provide the following information to the compiler and the runtime
system:
• The name, index domain, and element type of the sparse matrix are declared. This is done
using regular Fortran declaration syntax. This array will not actually appear in the original
code, since it is represented by a set of arrays, but the name introduced here is referred to
when specifying the distribution.
• An annotation is specified which declares the array as being SPARSE and provides information
on the representation of the array. This includes the names of the auxiliary vectors in the
order data, column and row, which are not declared explicitly in the program. Their sizes
are determined implicitly from the matrix index domain.
• The DYNAMIC attribute is used in a manner analogous to its meaning in Vienna Fortran
and HPF: if it is specified, then the distributed sparse representation will be determined
dynamically, as a result of executing a DISTRIBUTE statement. Otherwise, all components
of the distributed sparse representation can be constructed at the time the declaration is
processed. Often, this information will be contained in a file whose name will be indicated
in this annotation.
In addition, when the input sparse matrix is not available at compile-time, it must be read
from a file in some standard format and distributed at runtime. The name of this file may be
provided to the compiler in an additional directive.
Concrete examples for typical sparse codes illustrating details of the syntax (as well as its HPF
counterpart) are given in Figures 5 and 6.
4.2 Solution of a Sparse Linear System
There is a wide range of techniques to solve linear systems. Among them, iterative methods
use successive approximations to obtain more accurate solutions at each step. The Conjugate
Gradient (CG) [3] is the oldest, best known, and most effective of the nonstationary iterative
methods for symmetric positive definite systems. The convergence process can be sped up by
applying a preconditioner before computing the CG itself.
We include in Figure 5 the data-parallel code for the unpreconditioned CG algorithm, which
involves one matrix-vector product, three vector updates, and two inner products per iteration.
The input is the coefficient matrix, A, and the vector of scalars B; also, an initial estimation must
be computed for Xvec, the solution vector. With all these elements, the initial residuals, R, are
defined. Then, in every iteration, two inner products are performed in order to update scalars
that are defined to make the sequences fulfill certain orthogonality conditions; at the end of each
iteration, both solution and residual vectors are updated.
4.3 Lanczos Algorithm
Figure 6 illustrates an algorithm in extended HPF for the tridiagonalization of a matrix with
the Lanczos method [24]. We use a new directive, indicated by !NSD$, to specify the required
declarative information. The execution of the DISTRIBUTE directive results in the computation of
the distributed sparse representation. After that point, the matrix can be legally accessed in the
program, where several matrix-vector and vector-vector operations are performed to compute the
diagonals of the output matrix.
5 Runtime Analysis
Based on the language extensions introduced above, this section shows how access to sparse
data can be efficiently translated from Vienna Fortran or HPF to explicitly parallel message
passing code in the context of the data parallel SPMD paradigm.
In the rest of the paper, we assume that the input matrix is not available at compile-time.
Under such an assumption, the matrix distribution has to be postponed until runtime and this
obviously enforces the global to local index translation to be also performed at runtime.
To parallelize codes that use indirect addressing, compilers typically use an inspector-executor
strategy [22], where each loop accessing distributed variables is transformed by inserting an
additional preprocessing loop, called an inspector. The inspector translates the global addresses
accessed by the indirection into a (processor, offset) tuple describing the location of the element,
and computes a communication schedule. The executor stage then uses the preprocessed information
to fetch the non-local elements and to access distributed data using the translated addresses.
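A schematic and deliberately simplified rendering of the inspector stage is given below; the `owner` and `offset` functions model a hypothetical block distribution and stand in for the real ownership computation:

```python
def inspector(global_indices, owner, offset, me):
    """Inspector: translate each global index into a (processor, offset)
    pair and build a communication schedule listing, per remote
    processor, the offsets that must be fetched from it."""
    translated, schedule = [], {}
    for g in global_indices:
        p, o = owner(g), offset(g)
        translated.append((p, o))
        if p != me:
            schedule.setdefault(p, []).append(o)
    return translated, schedule

# Toy block distribution of 12 elements over 3 processors (4 each).
owner = lambda g: g // 4
offset = lambda g: g % 4
translated, schedule = inspector([0, 5, 6, 11, 2], owner, offset, me=0)
# elements 5 and 6 must be fetched from processor 1, element 11 from processor 2
```

The executor would then use `schedule` to gather the non-local elements once, and `translated` to address data without further index arithmetic.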
An obvious penalty of using the inspector-executor paradigm is the runtime overhead introduced
by each inspector stage, which can become significant when multiple levels of indirection are
used to access distributed arrays. As we have seen, this is frequently the case for sparse-matrix
algorithms using compact storage formats such as CRS. For example, the Xvec(DA(RO(i)+jj))
reference encountered in Figure 5 requires three preprocessing steps - one to access the distributed
array RO , a second to access DA , and yet a third to access Xvec . We pay special attention to
this issue in this section and outline an efficient solution for its parallelization.
5.1 The SAR approach
Though it is based on the inspector-executor paradigm, our solution for translating CRS-like
sparse indices at runtime within data-parallel compilers significantly reduces both time and
memory overhead compared to the standard and general-purpose CHAOS library [23].
This technique, which we have called "Sparse Array Rolling" (SAR), encapsulates in a small
descriptor the information about how the input matrix is distributed across the processors. This allows
us to determine the (processor, offset) location of a sparse matrix element without having to
plod through the distributed auxiliary array data-structures, thus saving the preprocessing time
required by all the intermediate arrays.
Figure 7 provides an overview of the SAR solution approach. The distribution of the matrix
represented in CRS format is carried out by a partitioner, the routine responsible for computing
the domain decomposition, giving as output the distributed representation as well as its associated
descriptor. This descriptor can be indexed through the translation process using the row number
and the non-zero index (X) to locate the processor and offset at which the matrix element is
stored. When the element is found to be non-local, the dereference process assigns an address in
local memory where the element is placed once fetched. The executor stage uses the preprocessed
information inside a couple of gather/scatter routines which fetch the marked non-local elements
and place them in their assigned locations. Finally, the loop computation accesses the distributed
data using the translated addresses.
The efficiency of the translation function and the memory overheads of the descriptor are largely
dependent on how the matrix is distributed. The following sections provide these details for each
of the distributions studied in this paper.
5.2 MRD descriptor and translation
The MRD distribution maps a rectangular portion of the dense index space (n × m) onto
a virtual processor space (X × Y). Its corresponding descriptor is replicated on each of the
processors and consists of two parts: a vector partH stores the row numbers at which the X
horizontal partitions are made, and a two-dimensional array partV , of size n × Y , which keeps
track of the number of non-zero elements in each vertical partition for each row.
Example 1 For the MRD distributed matrix in Figure 3, the corresponding descriptor replicated
among the processors is the following:
partH(1)=8 denotes that the horizontal partition is made at row 8. Each row has two vertical
partitions. The values of partV(9,1:2)=2,3 say that the first section of row 9 has two non-zero
elements while the second section has one (3 - 2 = 1).
We assume partH(0)=1, partH(X)=N+1, and partV(k,0)=0
for all 1 ≤ k ≤ N.
Given any non-zero element identified by (i,jj), we can perform a translation by means of the
descriptor to determine the processor that owns it. Let X denote the non-zero
index of the element within row i. Assuming that processors are identified by their position
(myR, myC) in the X × Y virtual processor mesh, the values myR and myC of the processor that
owns the element satisfy the following inequalities:

partH(myR) ≤ i < partH(myR+1)  and  partV(i,myC) < X ≤ partV(i,myC+1)

Searching for the right myR and myC that satisfy these inequalities can require a search
space of size X × Y. The search is optimized by first checking whether the element is local by
plugging in the local processor's values for myR and myC. Assuming a high degree of locality, this
check frequently succeeds immediately. When it fails, a binary search mechanism is employed.
The offset at which the element is located is X - partV(i,myC). Thus the column number of
the element (i,jj) can be found at CO(X - partV(i,myC)) on processor (myR, myC), and the
non-zero value can be accessed from DA(X - partV(i,myC)) on the same processor, without
requiring any communication or additional preprocessing steps.
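The search over the MRD descriptor can be rendered in Python as follows. The `partH`/`partV` values are small illustrative descriptors of our own, and the plain scan stands in for the binary search mentioned in the text:

```python
def mrd_owner(i, x, partH, partV, myR, myC):
    """Locate the processor (r, c) owning the x-th non-zero of row i.

    partH[r] is the first row of horizontal strip r (with sentinels
    partH[0]=1 and partH[X]=n+1); partV[i][c] is the cumulative number
    of non-zeros of row i up to vertical partition c.  The local
    processor's coordinates are tried first, as in the text."""
    X = len(partH) - 1
    Y = len(partV[i]) - 1
    def owns(r, c):
        return (partH[r] <= i < partH[r + 1]
                and partV[i][c] < x <= partV[i][c + 1])
    if owns(myR, myC):                  # cheap locality check first
        return myR, myC
    for r in range(X):                  # fallback scan (paper: binary search)
        for c in range(Y):
            if owns(r, c):
                return r, c
    raise ValueError("inconsistent descriptor")

partH = [1, 3, 5]                       # two strips: rows 1-2 and rows 3-4
partV = {1: [0, 1, 2], 2: [0, 2, 2],    # cumulative non-zero counts per row
         3: [0, 2, 3], 4: [0, 0, 1]}
```

For example, the third non-zero of row 3 lives on processor (1, 1), at offset 3 - partV(3,1) = 1 in that processor's local vectors.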
5.3 BRS descriptor and translation
Unlike MRD, the BRS descriptor is different on each processor. Each processor (myR,myC)
has elements from n/X rows mapped onto it. The BRS descriptor stores, for each local row of the
matrix, an entry for every non-zero element on that row, regardless of whether that element is
mapped locally or not. For those elements that are local, the entry stores the local index into DA .
For non-local elements, the entry stores the global column number of that element in the original
matrix. To distinguish between local and non-local entries, we negate the local
indices so that they become negative. The actual data-structure used is a CRS-like two-vector
representation - a vector called CS stores the entries of all the elements that are mapped to local
rows, while another vector, RA , stores the indices at which each row starts in CS .
Example 2 For the sparse matrix A and its partitioning shown in Figure 4, the values of CS
and RA on processor (0,0) are the following:
CS(1)=2 says that the element with value 53 is stored in global column 2 and is non-local. CS(2)=-1
signifies that the element with value 19 is mapped locally and is stored at local index 1. The remaining
entries have similar interpretations.
The processor owning the element (i,jj) is identified as follows. First, the local row is identified
using the simple formula r = i div X (taking row numbers as zero-based). The entry M for the
element is obtained as M = CS(RA(r)+jj) . If M is negative, then it implies that the element is
local and can be accessed at DA(-M) . If it is positive, then we have the global row i and column
number M of the element. This implies that the processor owning the element is Q =
PROCS(i mod X, M mod Y). We save the [i,jj]
indices in a list of indices that are marked for later retrieval from processor Q. During the executor,
a Gather routine will send these [i,jj] indices to Q, where a similar translation process is repeated;
this time, however, the element will be locally found and sent to the requesting processor.
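A Python sketch of this BRS translation follows, with a toy descriptor of our own for processor (0,0) of a 2 × 2 mesh; zero-based row and column indices are assumed throughout, unlike the 1-based Fortran arrays in the text:

```python
def brs_lookup(i, jj, RA, CS, DA, X, Y):
    """Resolve element (i, jj): the jj-th non-zero of global row i.

    A negative CS entry -t means 'local, stored at DA[t-1]'; a positive
    entry is the element's global column number in the original matrix,
    from which the owning processor follows under the cyclic layout."""
    r = i // X                          # local row number under CYCLIC rows
    m = CS[RA[r] + jj]
    if m < 0:
        return ("local", DA[-m - 1])
    return ("remote", (i % X, m % Y))   # owner: (row proc, column proc)

# Descriptor of processor (0, 0), which holds entries for global rows 0, 2, ...
RA = [0, 2]              # row 0's entries start at CS[0], row 2's at CS[2]
CS = [-1, 1, 3]          # row 0: local index 1, remote column 1; row 2: remote column 3
DA = [9.0]               # the single locally stored non-zero value
```

The remote tuples are exactly the `[i,jj]` requests that the Gather routine of the executor ships to the owning processor.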
6 Compilation
This section describes the compiler implementation within the Vienna Fortran Compilation
System (VFCS). The input to the compiler is a Vienna-Fortran code extended with the sparse
annotations described in Section 4. The compilation process results in a Fortran 77 code enhanced
with message-passing routines as well as the runtime support already discussed in the previous
section.
The tool is structured as a set of modules, as shown in Figure 8. We now describe each
module separately.
6.1 Front-End
The first module is the only part of the tool which interacts with the declaration part of the
program. It is responsible for:
1. The scanning and parsing of the new language elements presented in Section 4. These
operations generate the abstract syntax tree for such annotations and a table summarizing
all the compile-time information extracted from them. Once this table is built, the sparse
directives are not needed any more and the compiler proceeds to remove them from the
code.
2. The insertion of declarations for the local vectors and the auxiliary variables that the target
code and runtime support utilize.
6.2 Parallelizer
At this stage, the compiler first scans the code searching for sparse references and extracting
all the information available at compile-time (i.e., indirections, the syntax of the indices, the
loops and conditionals enclosing each reference, etc.). All this information is then organized in
a database for later lookup during the parallelization process.
Once this is done, the loop decomposition starts. The goal here consists of distributing the
workload of the source code as evenly as possible among the processors. This task turns out to be
particularly complex for a compiler when handling sparse codes, mainly because of the frequent
use of indirections when accessing the sparse data and the appearance of sparse references in
loop bounds.
In such cases, multiple queries to distributed sparse data are required by all processors in
order to determine their own iteration space, leading to a large number of communications. To
overcome this, we address the problem differently: rather than trying to access
the actual sparse values requested from the loop headers, we apply loop transformations that
not only determine the local iteration space but also map such values into semantically equivalent
information in the local distribution descriptor. This approach has the double advantage of reusing
the compiler auxiliary structures while ensuring the locality of all the accesses performed in the
loop boundaries. The result is a much faster mechanism for accessing data at no extra memory
overhead.
For the MRD case, for example, arrays partH and partV determine the local region for the
data in a sparse matrix based on global coordinates. In this way, the loop partitioning can be
driven with very similar strategies to those of BLOCK, with the only difference of the regions
having a different size (but similar workload) which is determined at runtime when the descriptor
is generated from the runtime support.
For the BRS case the solution is not that straightforward. Let us take as example the Conjugate
Gradient (CG) algorithm in Figure 5, where the dense vectors are distributed by dense CYCLIC
and the sparse matrix follows a BRS scheme. Note that most of the CG loops only refer to dense
structures. Its decomposition can be performed just enforcing the stride of the loops to be the
number of processors on which the data dimension traversed by the loop is distributed. This is
because consecutive local data in CYCLIC are always separated by a constant distance in terms of
the global coordinates. However, when references to sparse vectors are included in the loops, this
fact is only true for the first matrix dimension; for second one, the actual sparsity degree of the
matrix determines the distance of consecutive data in terms of their global columns. Since this
becomes unpredictable at compile-time (recall our assumption of not having the sparse matrix
pattern available until runtime), a runtime check defined as a function of the BRS distribution
descriptor needs to be inserted for loops traversing the second matrix dimension to be successfully
parallelized. This check can eventually be moved to the inspector phase when the executor
is computed over a number of iterations, thus decreasing the overall runtime overhead (see the
transformation in the final code generation, Figure 10).
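The CYCLIC stride argument for the dense loops can be illustrated with a tiny sketch; the function name is ours, not part of the VFCS output:

```python
def local_iterations(n, myR, X):
    """Global iterations executed by processor row myR for a loop over
    0..n-1 whose dimension is distributed CYCLIC over X processors:
    consecutive local elements are a constant stride X apart in
    global coordinates, so no runtime check is needed."""
    return list(range(myR, n, X))

# a loop over 10 rows on a 4-processor dimension
owned = [local_iterations(10, p, 4) for p in range(4)]
# processor 1 executes global iterations 1, 5, 9
```

For the second matrix dimension, no such closed form exists for the non-zeros, which is why the descriptor-based check described above must be inserted instead.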
Figure 9 provides a code excerpt that outlines the loop decomposition performed within the
VFCS for the two sparse loops in Figure 5. RA and CS are the vectors for the BRS descriptor on
the processor with coordinates (myR, myC). RA stores indices in the very same way as the local
RO does, but considering all the elements placed in global rows i × X + myR for any given local
row i. A CYCLIC-like approach is followed to extract the local iterations from the first loop and
then RA traverses all the elements in the second loop and CS delimits its local iterations in a
subsequent IF.
Note the different criteria followed for parallelizing the two loops. In the first loop, the well-known
owner-computes rule is applied. In the second loop, though, the underlying idea is to avoid
replicating the computation by first calculating a local partial sum from the local elements
and then accumulating all the values in a single reduction phase. In this way, computations are
distributed based on the owner of every single DA and P value for a given index K , which makes
them always match on the same processor. This achieves complete locality.
6.3 Back-End
Once the workload has been assigned to each processor, the compiler enters its last stage,
whose output is the target SPMD code. To reach this goal, the code has to be transformed into
inspector and executor phases for each of its loops.
Figure 10 shows the final SPMD code for the sparse loops parallelized in Figure 9. Overall, the
following sequence of steps is carried out in this compiler module:
1. An inspector loop is inserted prior to each loop computation. The header for this loop is
obtained through the syntax tree after the parallelization and statements inside the loop are
generated to collect the indices to distributed arrays into auxiliary vectors. These vectors
are then taken as input to the translation process.
2. Calls to the translate , dereference and scatter/gather routines are placed between the
inspector and executor loops to complete the runtime job.
3. References to distributed variables in the executor loop are syntactically changed to be
indexed by the translation functions produced as output by the inspector (see functions f
and g in Figure 10).
4. Some additional I/O routines must be inserted at the beginning of the execution part to
merge on each processor the local data and descriptors. In our SAR scheme, this is done by
the partitioner routine.
7 Evaluation of Distribution Methods
The choice of distribution strategy for the matrix is crucial in determining performance. It
controls the data locality and load balance of the executor, the preprocessing costs of the inspector,
and the memory overhead of the runtime support. In this section we discuss how BRS and MRD
distributions affect each of these aspects for the particular case of the sparse loops in the Conjugate
Gradient algorithm. To account for the effects of different sparsity structures we chose two very
different matrices coming from the Harwell-Boeing collection [14], where they are identified as
PSMIGR1 and BCSSTK29. The former contains population migration data and is relatively
dense, whereas the latter is a very sparse matrix used in large eigenvalue problems. Matrix
characteristics are summarized in Table 1.
7.1 Communication Volume in Executor
Table 2 shows the communication volume in the executor for 16 processors in a 4 × 4 processor
mesh when computing the sparse loops of the CG algorithm. This communication is necessary for
accumulating the local partial products in the array Q. Such an operation has been implemented
as a typical reduction operation for all the local matrix rows over each of the processor rows.
We note two things: first, the relation between communication volume and the processor mesh
configuration, and second, the balance in the communication pattern (note that comparisons of
communication volumes across the two matrices should be relative to their number of rows).
In general, for an X × Y processor mesh and an n × m sparse matrix, the communication volume
is roughly proportional to (n/X) × log(Y). Thus an 8 × 2 processor mesh will have 4 times less
total communication volume than a 4 × 4 mesh. For BRS, each processor accumulates exactly
the same amount of data, while for MRD there are minor imbalances stemming from the slightly
different sizes of the horizontal partitions (see Figure 11). Communication time in the executor is
shown in black in Figure 13.
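The rough communication-volume model quoted above can be checked numerically (a hedged sketch; constants and the exact reduction algorithm are abstracted away):

```python
import math

def comm_volume(n, X, Y):
    """Rough model from the text: volume ~ (n / X) * log2(Y) for the
    reduction over processor rows.  Proportionality constants omitted."""
    return (n / X) * math.log2(Y) if Y > 1 else 0.0
```

For n = 3200, an 8 × 2 mesh gives (3200/8)·1 = 400 units against (3200/4)·2 = 1600 for a 4 × 4 mesh, reproducing the factor-of-4 claim.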
7.2 Loop Partitioning and Workload Balance
As explained in section 6.2, each iteration of the sparse loops in the Conjugate Gradient algorithm
is mapped to the owner of the DA element accessed in that iteration. This results in perfect
workload balance for the MRD case, since each processor owns an equal number of non-zeros.
BRS workload balance relies on the random positioning of the elements, and except for pathological
cases, it too results in very good load balance. Table 3 shows the Load Balance Index for
BRS (the maximum variation from the average, divided by the average).
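The Load Balance Index as defined here can be computed as follows (illustrative helper, not the paper's measurement code):

```python
def load_balance_index(loads):
    """Maximum variation from the average, divided by the average,
    as defined for Table 3.  loads: per-processor non-zero counts."""
    avg = sum(loads) / len(loads)
    return max(abs(l - avg) for l in loads) / avg
```

A perfectly balanced distribution yields 0; per-processor loads [90, 110, 100, 100] yield 0.1 (a 10% worst-case deviation).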
7.3 Memory Overhead
Vectors for storing the local submatrix on each processor require similar amounts of memory in
both distributions. However, the distribution descriptor used by the runtime support can require
substantially different amounts of memory. Table 4 summarizes these requirements. The first row
indicates the expected memory overhead and the next two rows show the actual overhead in terms
of the number of integers required. The "overhead" column represents the memory overhead as
a percentage of the amount of memory required to store the local submatrix.
Vectors partV and CS are responsible for most of the overhead of their respective distributions, since they keep
track of the positions of the non-zero elements in MRD and BRS, respectively. This overhead
is much higher for BRS because the CS vector stores the column numbers even for some of the
off-processor non-zeros. The length of this vector can be reduced by using processor meshes with
8 Runtime Evaluation
This section describes our performance evaluation of the sparse loops of the Conjugate Gradient
algorithm when parallelized using the VFCS compiler under the BRS and MRD specifications.
Our intent was to study the effect of the distribution choice on inspector and executor performance
within a data-parallel compiler.
Finally, a manual version of the application was used as a baseline to determine the overhead
of a semi-automatic parallelization.
Our platform was an Intel Paragon using the NXLIB communication library. In our experiments,
we do not account for the I/O time to read in the matrix and perform its distribution.
8.1 Inspector Cost
Figure 12 shows the preprocessing costs for the sparse loops of the MRD and BRS versions of
the CG algorithm on the two matrices. The preprocessing overheads do decrease with increasing
parallelism, though the efficiencies drop at the high end. We also note that while BRS incurs
higher preprocessing overheads than MRD, it also scales better.
To understand the costs of BRS relative to MRD, recall that the BRS translation mechanism
involves preprocessing all non-zeros in local rows, while MRD dereferencing requires a
binary search through the distribution descriptor only for the local non-zeros. Though it processes
fewer elements, the size of the MRD search space is proportional to the size of the processor mesh,
so as processors are added, each translation requires a search over a larger space. Though it is
not shown in the table, our measurements indicate that the BRS inspector is actually faster than
MRD for more than 64 processors.
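The MRD dereference via binary search over the distribution descriptor might look like this (a sketch under the assumption that the descriptor stores partition boundaries; names are hypothetical):

```python
import bisect

def mrd_dereference(global_index, boundaries):
    """Map a global index to its owning partition via binary search through
    the distribution descriptor.  boundaries[k] is the first global index of
    partition k+1, so the search space grows with the processor mesh, as
    noted in the text."""
    return bisect.bisect_right(boundaries, global_index)
```

With boundaries [10, 25, 40] (four partitions), index 5 maps to partition 0, index 10 to partition 1, and so on; each lookup costs O(log P), which is why per-element translation slows as processors are added.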
8.2 Executor Time
Since both schemes distribute the non-zeros equally across processors, we found that the computational
section of the executor scaled very well for both distributions up to 32 processors, after
which the communication overheads start to reduce efficiency. Figure 13, which shows the executor
time for the sparse loops of the two CG versions, indicates good load balance. In fact, we find
some cases of super-linear speedup, attributable to cache effects.
The executor communication time is shown in black in Figure 13. The BRS communication
overhead remains essentially invariant across all processor counts. This suggests that the overhead
of the extra communication startups is offset by the reduced communication volume, maintaining
the same total overhead. For MRD, the communication is much more unbalanced, and this leads
to much poorer scaling of the communication costs. Indeed, this effect is particularly apparent for
BCSSTK29, where the redistribution is extremely unbalanced and becomes a severe bottleneck
as the number of processors is increased.
8.3 Comparison to Manual Parallelization
The efficiency of a sparse code parallelized within the VFCS compiler depends largely on three
primary factors:
- The distribution scheme selected for the parallelization, either MRD or BRS.
- The sparsity rate of the input matrix.
- The cost of the inspector phase to figure out the access pattern.
On the other hand, we have seen that the parallelization of the sparse loops of the CG algorithm
within the VFCS leads to a target code in which the executor does not perform any communication
in the gather/scatter routines, as a consequence of the full locality achieved by the data
distribution, its local representation and the loop partitioning strategy. Apart from the actual
computation, the executor only contains the communication for accumulating the local partial
products, which is implemented in a reduction routine exactly as a programmer would do. Thus,
the executor time becomes an accurate estimate of the efficiency that a skilled programmer can
attain, and the additional cost of using automatic compilation lies entirely in the preprocessing
time (inspector loops plus subsequent runtime calls in Figure 10).
Figure 14 illustrates the impact of the major factors that influence the parallel efficiency
while providing a comparison between the manual and the compiler-driven parallelization. Execution
times for the compiler include the cost of a single inspector plus an executor per iteration,
whereas for the manual version no inspector is required.
As far as the distribution itself is concerned, Figure 14 shows that BRS introduces a higher
overhead. This is a direct consequence of its more expensive inspector, caused by the slower
global-to-local translation process. However, even in the BRS case, our overall results are quite
efficient over a number of iterations: in practice, the convergence of the CG algorithm starts
to exhibit stationary behaviour after no less than one hundred iterations. By that time, the
inspector cost has already been largely amortized, and the total compiler overhead is always kept
under 10% regardless of the input matrix, the distribution chosen and the number of processors
in the parallel machine.
With respect to matrix sparsity, we can conclude that the sparser the matrix, the better the
compiler-generated result compares to the manual version. The overall comparison against a
manual parallelization also shows good scalability; the gain of the manual version is significant
only for a small number of iterations.
Summarizing, we can say that the cost to be paid for an automatic parallelization is worthwhile
as long as the algorithm can amortize the inspector costs through a minimum number of iterations.
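The amortization argument can be made concrete with a small model (illustrative numbers only, not the measured Paragon data):

```python
def compiler_overhead(inspector_cost, executor_cost, iterations):
    """Model the relative overhead of automatic parallelization: one
    inspector paid up front plus an executor per iteration, versus a
    manual version that needs no inspector."""
    compiler_total = inspector_cost + iterations * executor_cost
    manual_total = iterations * executor_cost
    return (compiler_total - manual_total) / manual_total
```

With a hypothetical inspector ten times the cost of one executor iteration, the overhead is exactly 10% at 100 iterations and keeps shrinking as the iteration count grows, which is the amortization behaviour described in the text.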
The remaining cost of the Conjugate Gradient algorithm lies in the multiple loops dealing with
dense arrays distributed by CYCLIC. However, the computational weight of this part never goes
over 10% of the total execution time. Even though the compiler efficiency is expected to
improve for such cases, its influence is minimal and does not lead to a significant variation in
the full algorithm.
Additional experiments demonstrating the efficiency of our schemes have been carried out by
Trenas [24], who implemented a manual version of the Lanczos algorithm (see Figure 6) using
PVM routines and the BRS scheme.
9 Related Work
Programs designed to carry out a range of sparse algorithms in matrix algebra are outlined in
[3]. All these codes require the optimizations described in this paper if efficient target code is to
be generated for a parallel system.
There are a variety of languages and compilers targeted at distributed memory multiprocessors
([28, 9, 15, 18]). Some of them do not attempt to deal with loops that arise in sparse or irregular
computation. One approach, originating from Fortran D and Vienna Fortran, is based on INDIRECT
data distributions and cannot express the structure of sparse data, resulting in memory and
runtime overhead. The scheme proposed in this paper provides special syntax for a special class
of user-defined data distributions, as proposed in Vienna Fortran and HPF+ [12].
On the other hand, in the area of automatic parallelization, the most outstanding tools we
know of (Parafrase [20], Polaris [6]) are not intended to be a framework for the parallelization of
sparse algorithms such as those addressed in the present work.
The methods proposed by Saltz et al. for handling irregular problems consist of endowing the
compiler with a runtime library [23] to facilitate the search and capture of data located in the
distributed memory. The major drawback of this approach is the large number of messages
generated as a consequence of accessing a distributed data-addressing table, and the associated
memory overhead [17].
In order to enable the compiler to apply more optimizations and simplify the task of the
programmer, Bik and Wijshoff [5] have implemented a restructuring compiler which automatically
converts programs operating on dense matrices into sparse code. This method postpones the
selection of a data structure until the compilation phase. Though more friendly to the end user,
this approach has the risk of inefficiencies that can result from not allowing the programmer to
choose the most appropriate sparse structures.
Our way of dealing with this problem is very different: We define heuristics that perform an
efficient mapping of the data and a language extension to describe the mapping in data parallel
languages [18, 28]. We have produced and benchmarked a prototype compiler, integrated into the
VFCS, that is able to generate efficient code for irregular kernels. Compiler transformations insert
procedures to perform the runtime optimizations. The implementation is qualitatively different
from the efforts cited above in a number of important respects, in particular with respect to its use
of a new data type (sparse format) and data distributions (our distributed sparse representations)
for irregular computation. The basic ideas in these distributions take into account the way in
which sparse data are accessed and map the data in a pseudoregular way so that the compiler may
perform a number of optimizations for sparse codes. More specifically, the pseudoregularity of our
distributions allows us to describe the domain decomposition using a small descriptor which can,
in addition, be accessed locally. This saves most of the memory overhead of distributed tables as
well as the communication cost needed for its lookup.
In general, application codes for irregular problems normally have code segments and loops with
more complex access functions. The most advanced analysis technique, known as slicing analysis
[13], deals with multiple levels of indirection by transforming code that contains such references
into code that contains only a single level of indirection. However, the multiple communication
phases still remain. The SAR technique implemented inside the sparse compiler is novel because
it is able to handle multiple levels of indirection at the cost of a single translation. The key to
attaining this goal is to take advantage of compile-time information about the semantic relations
between the elements involved in the indirect accesses.
Conclusions
In this paper, sparse data distributions and specific language extensions have been proposed
for data-parallel languages such as Vienna Fortran or HPF to improve their handling of sparse
irregular computation. These features enable the translation of codes which use typical sparse
coding techniques, without any necessity for rewriting. We show in some detail how such code
may be translated so that the resulting code retains significant features of sequential sparse
applications. In particular, the savings in memory and computation which are typical for these
techniques are retained and can lead to high efficiency at run time. The data distributions have
been designed to retain data locality when appropriate, support a good load balance, and avoid
memory wastage.
The compile time and run time support translates these into structures which permit a sparse
representation of data on the processors of a parallel system.
The language extensions required are minimal, yet sufficient to provide the compiler with the
additional information needed for translation and optimization. A number of typical code kernels
have been shown in this paper and in [26] to demonstrate the limited amount of effort required
to port a sequential code of this kind into an extended HPF or Vienna Fortran.
Our results demonstrate that the data distributions and language features proposed here supply
enough information to store and access the data in distributed memory, as well as to perform the
compiler optimizations which bring great savings in terms of memory and communication
overhead.
Low-level support for sparse problems has been described, proposing the implementation of an
optimizing compiler that performs these translations. This compiler improves the functionality
of data-parallel languages in irregular computations, overcoming a major weakness in this field.
Runtime techniques are used in the context of the inspector-executor paradigm. However, our set
of low-level primitives differs from those used in several existing implementations in order to take
advantage of the additional semantic information available in our approach. In particular, our
runtime analysis is able to translate multiple indirect array accesses in a single phase and does
not make use of expensive translation tables.
The final result is an optimizing compiler able to generate efficient parallel code for these
computations, very close to what can be expected from a manual parallelization and much faster
than existing tools in this area.
References
The Scheduling of Sparse Matrix-Vector Multiplication on a Massively Parallel DAP Computer
A Partitioning Strategy for Nonuniform Problems on Multiprocessors
Automatic Data Structure Selection and Transformation for Sparse Matrix Computations
Massively Parallel Methods for Engineering and Science Problems
Vienna Fortran Compilation System.
Programming in Vienna Fortran.
User Defined Mappings in Vienna For- tran
Extending HPF For Advanced Data Parallel Applications.
Index Array Flattening Through Program Transformations.
Users' Guide for the Harwell-Boeing Sparse Matrix Collection
Fortran D language specification
Computer Solution of Large Sparse Positive Definite Sys- tems
High Performance Language Specification.
Numerical experiences with partitioning of unstructured meshes
The Structure of Parafrase-2: an Advanced Parallelizing Compiler for C and Fortran
Data distributions for sparse matrix vector multiplication solvers
Parallel Algorithms for Eigenvalues Computation with Sparse Matrices
Efficient Resolution of Sparse Indirections in Data-Parallel Compilers
Evaluation of parallelization techniques for sparse applications
Vienna Fortran - A language Specification Version 1.1
We evaluate the implementation to demonstrate the efficacy with which the architecture maintains QoS guarantees on outgoing traffic while adhering to the stated design goals. The evaluation also demonstrates the need for specific features and policies provided in the architecture. In subsequent work, we have refined this architecture and used it to realize a full-fledged guaranteed-QoS communication service that performs QoS-sensitive resource management for outgoing as well as incoming traffic. | Introduction
Distributed multimedia applications (e.g., video conferencing, video-on-demand, digital libraries)
and distributed real-time command/control systems require certain quality-of-service (QoS)
guarantees from the underlying network. QoS
guarantees may be specified in terms of parameters such
as the end-to-end delay, delay jitter, and bandwidth delivered
on each connection; additional requirements regarding
packet loss and in-order delivery can also be specified.
To support these applications, the communication subsystem in end hosts and the network must
be designed to provide per-connection QoS guarantees. Assuming that the network provides
appropriate support to establish and maintain guaranteed-QoS connections, we focus on the
design of the host communication subsystem to maintain QoS guarantees.
(The work reported in this paper was supported in part by the National Science Foundation
under grant MIP-9203895 and the Office of Naval Research under grant N00014-94-1-0229. Any
opinions, findings, and conclusions or recommendations expressed in this paper are those of the
authors and do not necessarily reflect the views of NSF or ONR.)
Protocol processing for large data transfers, common
in multimedia applications, can be quite expensive. Resource
management policies geared towards statistical fairness
and/or time-sharing can introduce excessive interference
between different connections, thus degrading the delivered
QoS on individual connections. Since the local delay
bound at a node may be fairly tight, the unpredictability
and excessive delays due to interference between different
connections may even result in QoS violations. This performance
degradation can be eliminated by designing the
communication subsystem to provide: (i) maintenance of
QoS guarantees, (ii) overload protection via per-connection
traffic enforcement, and (iii) fairness to best-effort traf-
fic. These requirements together ensure that per-connection
QoS guarantees are maintained as the number of connections
or per-connection traffic load increases.
In this paper, we propose and evaluate a QoS-sensitive
communication subsystem architecture for guaranteed-QoS
connections. Our focus is on the architectural mechanisms
used within the communication subsystem to satisfy the
QoS requirements of all connections, without undue degradation
in performance of best-effort traffic (with no QoS
guarantees). While the proposed architecture is applicable
to other proposals for guaranteed-QoS connections [3], we
focus on real-time channels, a paradigm for guaranteed-QoS
communication services in packet-switched networks [16].
The architecture features a process-per-channel model
for protocol processing, coordinated by a unique channel
handler created on successful channel establishment.
While the service within a channel is FIFO, QoS guarantees
on multiple channels are provided via appropriate
CPU scheduling of channel handlers and link scheduling of
packet transmissions. Traffic isolation between channels is
facilitated via per-channel traffic enforcement and interaction
between the CPU and link schedulers.
[Figure 1. Desired software architecture: (a) overall architecture; (b) protocol processing.]
We have implemented this architecture using a modified
x-kernel 3.1 [14] communication executive exercising complete
control over a Motorola 68040 CPU. This configuration
avoids any interference from computation or other operating
system activities on the host, allowing us to focus on
the communication subsystem. We evaluate the implementation
under different traffic loads, and demonstrate the efficacy
with which it maintains QoS guarantees on real-time
channels and provides fair performance for best-effort traffic, even in the presence of ill-behaved
real-time channels.
For end-to-end guarantees, resource management within
the communication subsystem must be integrated with that
for applications. The proposed architecture is directly applicable
if a portion of the host processing capacity can be
reserved for communication-related activities [21, 17]. The
proposed architectural extensions can be realized as a server
with appropriate capacity reserves and/or execution priority.
Our implementation is indeed such a server executing in a
standalone configuration. More importantly, our approach
decouples protocol processing priority from that of the application. We believe that the protocol
processing priority of
a connection must be derived from the QoS requirements,
traffic characteristics, and run-time communication behavior
of the application on that connection. Integration of the
proposed architecture with resource management for applications
will be addressed in a forthcoming paper.
Section 2 discusses architectural requirements for
guaranteed-QoS communication and provides a brief
description of real-time channels. Section 3 presents a
QoS-sensitive communication subsystem architecture
realizing these requirements, and Section 4 describes its
implementation. Section 5 experimentally evaluates the
efficacy of the proposed architecture. Section 6 discusses
related work and Section 7 concludes the paper.
2. Architectural requirements for guaranteed-QoS communication
For guaranteed-QoS communication [3], we consider unidirectional
data transfer, from source to sink via intermediate
nodes, with data being delivered at the sink in the order in
which it is generated at the source. Corrupted, delayed, or
lost data is of little value; with a continuous flow of time-sensitive
data, there is insufficient time for error recovery.
Thus, we consider data transfer with unreliable-datagram semantics
with no acknowledgements and retransmissions. To
provide per-connection QoS guarantees, host communication
resources must be managed in a QoS-sensitive fashion,
i.e., according to the relative importance of the connections
requesting service. Host communication resources include
CPU bandwidth for protocol processing, link bandwidth for
packet transmissions, and buffer space.
Figure 1(a) illustrates a generic software architecture for guaranteed-QoS communication services
at the host. The
components constituting this architecture are as follows.
Application programming interface (API): The API must
export routines to set up and tear down guaranteed-QoS connections, and perform data transfer
on these connections.
Signalling and admission control: A signalling protocol
is required to establish/tear down guaranteed-QoS connections
across the communicating hosts, possibly via multiple
network nodes. The communication subsystem must keep
track of communication resources, perform admission control
on new connection requests, and establish connection
state to store connection specific information.
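A minimal sketch of such admission control based on reserved link bandwidth is shown below (illustrative only; real-time channel admission tests also check deadline schedulability, which is omitted here, and all names are hypothetical):

```python
# Track reserved link bandwidth and per-connection state; admit a new
# guaranteed-QoS connection only if the link would not be overbooked.

class AdmissionController:
    def __init__(self, link_capacity):
        self.capacity = link_capacity
        self.reserved = 0.0
        self.connections = {}     # connection-specific state

    def request(self, conn_id, rate):
        if self.reserved + rate > self.capacity:
            return False          # reject: would overbook the link
        self.reserved += rate
        self.connections[conn_id] = rate
        return True

    def teardown(self, conn_id):
        self.reserved -= self.connections.pop(conn_id)
```

Rejected connections can fall back to best-effort service; teardown releases the reservation so later requests can be admitted.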
Network transport: Protocols are needed for unidirectional
(reliable and unreliable) data transfers.
Traffic enforcement: This provides overload protection between
established connections by forcing an application to
conform to its traffic specification. This is required at the
session level, and may also be required at the link level.
Link access scheduling and link abstraction: Link bandwidth must be managed such that all
active connections
receive their promised QoS. This necessitates abstracting
the link in terms of transmission delay and bandwidth, and
scheduling all outgoing packets for network access. The
minimum requirement for provision of QoS guarantees is
that packet transmission time be bounded and predictable.
Assuming support for signalling, we focus on the components
involved in data transfer, namely, traffic enforcement,
protocol processing and link transmission. In particular, we
study architectural mechanisms for structuring host communication
software to provide QoS guarantees.
2.1. QoS-sensitive data transport
In Figure 1(b), an application presents the API with data (messages) to be transported on a
guaranteed-QoS connection. The API must allocate buffers for this data and queue it
appropriately. Conformant data (as per the traffic specification) is forwarded for protocol
processing and transmission.
Maintenance of per-connection QoS guarantees: Protocol
processing involves, at the very least, fragmentation of
application messages, including transport and network layer
encapsulation, into packets with length smaller than a certain
maximum (typically the MTU of the attached network).
Additional computationally intensive services (e.g., coding,
compression, or checksums) may also be performed during
protocol processing. QoS-sensitive allocation of processing
bandwidth necessitates multiplexing the CPU amongst active
connections under control of the CPU scheduler, which
must provide deadline-based or priority-based policies for
scheduling protocol processing on individual connections.
Non-preemptive protocol processing on a connection implies
that the CPU can be reallocated to another connection
only after processing an entire message, resulting in
a coarser temporal grain of multiplexing and making admission
control less effective. More importantly, admission
control must consider the largest possible message size
(maximum number of bytes presented by the application
in one request) across all connections, including best-effort
traffic. While maximum message size for guaranteed-QoS
connections can be derived from attributes such as frame
size for multimedia applications, the same for best-effort
traffic may not be known a priori. Thus, mechanisms to suspend
and resume protocol processing on a connection are
needed. Protocol processing on a connection may also need
to be suspended if it has no available packet buffers.
The packets generated via protocol processing cannot be
directly transmitted on the link as that would result in FIFO
(i.e., QoS-insensitive) consumption of link bandwidth. In-
stead, they are forwarded to the link scheduler, which must
provide QoS-sensitive policies for scheduling packet trans-
missions. The link scheduler selects a packet and initiates
packet transmission on the network adapter. Notification of
packet transmission completion is relayed to the link scheduler
so that another packet can be transmitted. The link
scheduler must signal the CPU scheduler to resume protocol
processing on a connection that was suspended earlier
due to shortage of packet buffers.
Overload protection via per-connection traffic enforce-
ment: As mentioned earlier, only conformant data is forwarded
for protocol processing and transmission. This is
necessary since QoS guarantees are based on a connection's
traffic specification; a connection violating its traffic specification
should not be allowed to consume communication
resources over and above those reserved for it. Traffic
specification violations on one connection should not affect
QoS guarantees on other connections and the performance
delivered to best-effort traffic. Accordingly, the communication
subsystem must police per-connection traffic; in
general, each parameter constituting the traffic specification
(e.g., rate, burst length) must be policed individually. An
important issue is the handling of non-conformant traffic,
which could be buffered (shaped) until it is conformant, provided
with degraded QoS, treated as best-effort traffic, or
dropped altogether. Under certain situations, such as buffer
overflows, it may be necessary to block the application until
buffer space becomes available, although this may interfere
with the timing behavior of the application. The most appropriate
policy, therefore, is application-dependent.
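The policing decision described above can be sketched as a small dispatch over the channel's policing state (a Python sketch; the token-credit framing and the policy names are illustrative assumptions, not the paper's mechanism):

```python
def police(tokens, cost, policy):
    """Decide what to do with a message given the channel's policing
    state.  tokens: credits currently available to the connection;
    cost: credits this message consumes; policy: how to handle
    non-conformant traffic on this connection."""
    if cost <= tokens:
        return "forward", tokens - cost   # conformant: process and transmit
    if policy == "shape":
        return "delay", tokens            # buffer until it becomes conformant
    if policy == "best-effort":
        return "demote", tokens           # hand to the best-effort path
    if policy == "drop":
        return "drop", tokens
    raise ValueError(policy)
```

Each connection would carry its own `policy`, reflecting the observation that the most appropriate handling of non-conformant traffic is application-dependent.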
Buffering non-conformant traffic till it becomes conformant
makes protocol processing non-work-conserving since
the CPU idles even when there is work available; the above
discussion corresponds to this option. Alternately, protocol
processing can be work-conserving, with CPU scheduling
mechanisms ensuring QoS-sensitive allocation of CPU
bandwidth to connections. Work-conserving protocol processing
can potentially improve CPU utilization, since the
CPU does not idle when there is work available. While the
unused capacity can be utilized to execute other best-effort
activities (such as background computations), one can also
utilize this CPU bandwidth by processing non-conformant
traffic, if any, assuming there is no pending best-effort traf-
fic. This can free up CPU processing capacity for subsequent
messages. In the absence of best-effort traffic, work-conserving
protocol processing can also improve the average
QoS delivered to individual connections, especially if
link scheduling is work-conserving.
Fairness to best-effort traffic: Best-effort traffic includes
data transported by conventional protocols such as TCP and
UDP, and signalling for guaranteed-QoS connections. It
should not be unduly penalized by non-conformant real-time
traffic, especially under work-conserving processing.
2.2. Real-time channels
Several models have been proposed for guaranteed-QoS
communication in packet-switched networks [3]. While the
architectural mechanisms proposed in this paper are applicable
to most of the proposed models, we focus on real-time
channels [9, 16]. A real-time channel is a simplex, fixed-
route, virtual connection between a source and destination
host, with sequenced messages and associated performance
guarantees on message delivery. It therefore conforms to the
connection semantics mentioned earlier.
Traffic and QoS Specification: Traffic generation on real-time
channels is based on a linear bounded arrival process
[8, 2] characterized by three parameters: maximum
message size (Mmax bytes), maximum message rate (Rmax
messages/second), and maximum burst size (Bmax mes-
sages). The notion of logical arrival time is used to enforce
a minimum separation I
between messages on
a real-time channel. This ensures that a channel does not use
more resources than it reserved at the expense of other chan-
nels. The QoS on a real-time channel is specified as the desired
deterministic, worst-case bound on the end-to-end delay
experienced by a message. See [16] for more details.
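The logical arrival time computation can be sketched directly (a minimal Python sketch; the function name and structure are ours, not the real-time channel implementation's):

```python
def logical_arrival_times(arrivals, i_min):
    """Compute logical arrival times for a sequence of actual message
    arrival times, enforcing a minimum separation i_min between
    consecutive messages: each message's logical arrival time is its
    actual arrival time, pushed back to at least i_min after the
    previous message's logical arrival time."""
    logical = []
    prev = float("-inf")
    for a in arrivals:
        t = max(a, prev + i_min)
        logical.append(t)
        prev = t
    return logical
```

With I_min = 10 ms, a burst arriving at t = 0, 1, 2 is assigned logical arrival times 0, 10, 20: the burst is served at the channel's reserved rate rather than at the expense of other channels.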
Resource Management: Admission control for real-time
channels is provided by Algorithm D order [16], which
uses fixed-priority scheduling for computing the worst-case
delay experienced by a channel at a link. Run-time link
scheduling, on the other hand, is governed by a multi-class
variation of the earliest-deadline-first (EDF) policy.
2.3. Performance related considerations
To provide deterministic QoS guarantees on communica-
tion, all processing costs and overheads involved in managing
and using resources must be accounted for. Processing
costs include the time required to process and transmit
a message, while the overheads include preemption costs
such as context switches and cache misses, costs of accessing
ordered data structures, and handling of network inter-
rupts. It is important to keep the overheads low and predictable
(low variability) so that reasonable worst-case estimates
can be obtained. Further, resource management policies
must maximize the number of connections accepted for
Figure 2. Proposed architecture: (a) at the source host, messages flow from the API through handler run-queue assignment and the CPU to the channel message queue, handler, and link scheduler for packet transmission; (b) at the destination host, received packets flow through channel packet queues and handlers to channel message queues and the API.
service. In addition to processing costs and implementation
overheads, factors that affect admissibility include the relative
bandwidths of the CPU and link and any coupling between
CPU and link bandwidth allocation. In a recent paper
[19], we have studied the extent to which these factors
affect admissibility in the context of real-time channels.
3. A QoS-sensitive communication architecture
In the process-per-message model [23], a process or thread
shepherds a message through the protocol stack. Besides
eliminating extraneous context switches encountered in the
process-per-protocol model [23], it also facilitates protocol
processing to be scheduled according to a variety of policies,
as opposed to the software-interrupt level processing in BSD
Unix. However, the process-per-message model introduces
additional complexity for supporting QoS guarantees.
Creating a distinct thread to handle each message makes
the number of active threads a function of the number of
messages awaiting protocol processing on each channel.
Not only does this consume kernel resources (such as process
control blocks and kernel stacks), but it also increases
scheduling overheads which are typically a function of the
number of runnable threads in dynamic scheduling envi-
ronments. More importantly, with a process-per-message
model, it is relatively harder to maintain channel seman-
tics, provide QoS guarantees, and perform per-channel traffic
policing. For example, bursts on a channel get translated
into "bursts" of processes in the scheduling queues,
making it harder to police ill-behaved channels and ensure
fairness to best-effort traffic. Further, scheduling overhead
becomes unpredictable, making worst-case estimates either
overly conservative or impossible to provide.
Since QoS guarantees are specified on a per-channel ba-
sis, it suffices to have a single thread coordinate access to
resources for all messages on a given channel. We employ
a process-per-channel model, which is a QoS-sensitive
extension of the process-per-connection model [23]. In
the process-per-channel model, protocol processing on each
channel is coordinated by a unique channel handler, a
lightweight thread created on successful establishment of
the channel. With unique per-channel handlers, CPU
scheduling overhead is only a function of the number of
active channels, those with messages waiting to be trans-
ported. Since the number of established channels, and hence
the number of active channels, varies much more slowly
compared to the number of messages outstanding on all
active channels, CPU scheduling overhead is significantly
more predictable. As we discuss later, a process-per-channel
model also facilitates per-channel traffic enforcement. Fur-
ther, since it reduces context switches and scheduling over-
heads, this model is likely to provide good performance to
connection-oriented best-effort traffic.
Figure
2 depicts the key components of the proposed architecture
at the source (transmitting) and destination (re-
ceiving) hosts; only the components involved in data transfer
are shown. Associated with each channel is a message
queue, a FIFO queue of messages to be processed by the
channel handler (at the source) or to be received by the application
(at the destination). Each channel also has associated
with it a packet queue, a FIFO queue of packets waiting to
be transmitted by the link scheduler (at the source) or to be
reassembled by the channel handler (at the destination).
Transmission-side processing: In Figure 2(a), invocation
of message transmission transfers control to the API. After
traffic enforcement (traffic shaping and deadline assign-
ment), the message is enqueued onto the corresponding
channel's message queue for subsequent processing by the
channel handler. Based on channel type, the channel handler
is assigned to one of three CPU run queues for execution
(described in Section 3.1). It executes in an infinite loop, dequeueing
messages from the message queue and performing
protocol processing (including fragmentation). The packets
thus generated are inserted into the channel packet queue
and into one of three (outbound) link packet queues for the
corresponding link, based on channel type and traffic gener-
ation, to be transmitted by the link scheduler.
Reception-side processing: In Figure 2(b), a received
packet is demultiplexed to the corresponding channel's
packet queue, for subsequent processing and reassembly.
As in transmission-side processing, channel handlers are assigned
to one of three CPU run queues for execution, and execute
in an infinite loop, waiting for packets to arrive in the
channel packet queue. Packets in the packet queue are processed
and transferred to the channel's reassembly queue.
Once the last packet of a message arrives, the channel handler
completes message reassembly and inserts the message
into the corresponding message queue, from where the application
retrieves the message via the API's receive routine.
At intermediate nodes, the link scheduler relays arriving
packets to the next node along the route. While we focus on
transmission-side processing at the sending host, the following
discussion also applies to reception-side processing.
Figure 3. Channel state and handler profile. (a) Per-channel state: the message queue and message queue semaphore; the packet queue and packet queue semaphore; the handler's process id, type, deadline/priority, and status; the proxy's process id and deadline/priority; the traffic specification, QoS specification, relative channel priority, and local delay bound. (b) Handler execution profile: dequeue a message from the channel message queue and inherit its deadline; if the message is early, suspend until it becomes current, else initiate message processing; for each packet, if packet buffers are available, enqueue the packet onto the channel packet queue and the link packet queues, else suspend until a buffer is available; when the block count reaches zero (a preemption point), yield the CPU if the yield condition holds, else reset the block count and continue until the message is processed completely.
3.1. Salient features
Figure
3(a) illustrates a portion of the state associated with
a channel at the host upon successful establishment. Each
channel is assigned a priority relative to other channels, as
determined by the admission control procedure. The local
delay bound computed during admission control at the host
is used to compute deadlines of individual messages. Each
handler is associated with a type, an execution deadline or
priority, and an execution status (runnable, blocked, etc.). In
addition, two semaphores are allocated to each channel han-
dler, one to synchronize with message insertions into the
channel's message queue (the message queue semaphore),
and the other to synchronize with availability of buffer space
in the channel's packet queue (the packet queue semaphore).
Channel handlers are broadly classified into two types,
best-effort and real-time. A best-effort handler is one that
processes messages on a best-effort channel. Real-time handlers
are further classified as current real-time and early
real-time. A current real-time handler is one that processes
on-time messages (obeying the channel's rate specification),
while an early real-time handler is one that processes early
messages (violating the channel's rate specification).
Figure
3(b) shows the execution profile of a channel handler
at the source host. The handler executes in an infinite
loop processing messages one at a time. When initialized,
it simply waits for messages to process from the message
queue. Once a message becomes available, the handler dequeues
the message and inherits its deadline. If the message
is early, the handler computes the time until the message
will become current and suspends execution for that dura-
tion. If the message is current, the handler initiates protocol
processing of the message. After creating each packet, the
handler checks for space in the packet queue (via the packet
queue semaphore); it is automatically blocked if space is not
available. The packets created are enqueued onto the chan-
nel's packet queue, and if the queue was previously empty,
the link packet queues are also updated to reflect that this
channel has packets to transmit. Handler execution employs
cooperative preemption, where the currently-executing handler
relinquishes the CPU to a waiting higher-priority handler
after processing a block of packets, as explained below.
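The handler loop and its cooperative preemption can be illustrated with a small simulation (Python; `process_message`, the MTU parameter, and the yield counting are our own illustration, not the x-kernel code):

```python
def process_message(message_len, mtu, P):
    """Fragment a message into packets of at most `mtu` bytes,
    yielding the CPU (cooperative preemption) after every block of
    P packets.  Returns (packet_sizes, cpu_yields)."""
    packets, yields, in_block = [], 0, 0
    offset = 0
    while offset < message_len:
        size = min(mtu, message_len - offset)
        packets.append(size)
        offset += size
        in_block += 1
        if in_block == P and offset < message_len:
            # preemption point: a waiting higher-priority handler may run
            yields += 1
            in_block = 0
    return packets, yields
```

A 40 KB message with a 4 KB MTU and P = 4 is fragmented into ten packets with two preemption points, so a higher-priority handler waits for at most one block of packet processing rather than a whole message.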
While the above suffices for non-work-conserving protocol
processing, a mechanism is needed to continue handler
execution in the case of work-conserving protocol process-
ing. Accordingly, in addition to blocking the handler as be-
fore, a channel proxy is created on behalf of the handler. A
channel proxy is a thread that simply signals the (blocked)
channel handler to resume execution. It competes for CPU
access with other channel proxies in the order of logical arrival
time, and exits immediately if the handler has already
woken up. This ensures that the handler is made runnable if
the proxy obtains access to the CPU before the handler becomes
current. Note that an early handler must still relinquish
the CPU to a waiting handler that is already current.
Maintenance of QoS guarantees: Per-channel QoS guarantees
are provided via appropriate preemptive scheduling
of channel handlers and non-preemptive scheduling
of packet transmissions. While CPU scheduling can be
priority-based (using relative channel priorities), we consider
deadline-based scheduling for channel handlers and
proxies. Execution deadline of a channel handler is inherited
dynamically from the deadline of the message to be pro-
cessed. Execution deadline of a channel proxy is derived
from the logical arrival time of the message to be processed.
Channel handlers are assigned to one of two run queues
based on their type (best-effort or real-time), while channel
proxies (representing early real-time traffic) are assigned to
a separate run queue. The relative priority assignment for
handler run queues is such that on-time real-time traffic gets
the highest protocol processing priority, followed by best-effort
traffic and early real-time traffic in that order.
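The three run queues and their relative priorities can be sketched as follows (a Python sketch; the class and method names are ours, and the queue labels q1/q2/q3 follow the implementation description later in the paper):

```python
import heapq
from collections import deque

class HandlerScheduler:
    """Three run queues: q1 holds current real-time handlers ordered by
    deadline, q2 holds best-effort handlers in FIFO order, and q3 holds
    proxies for early real-time traffic ordered by logical arrival
    time.  Priority order is q1 > q2 > q3."""

    def __init__(self):
        self.q1, self.q2, self.q3 = [], deque(), []

    def add_current(self, deadline, handler):
        heapq.heappush(self.q1, (deadline, handler))

    def add_best_effort(self, handler):
        self.q2.append(handler)

    def add_early(self, logical_arrival, proxy):
        heapq.heappush(self.q3, (logical_arrival, proxy))

    def pick(self):
        """Select the next handler/proxy to run, or None if idle."""
        if self.q1:
            return heapq.heappop(self.q1)[1]
        if self.q2:
            return self.q2.popleft()
        if self.q3:
            return heapq.heappop(self.q3)[1]
        return None
```

On-time real-time handlers are always selected first (earliest deadline among them), then best-effort handlers, and early real-time proxies only when the CPU would otherwise idle.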
Provision of QoS guarantees necessitates bounded delays
in obtaining the CPU for protocol processing. As
shown in [19], immediate preemption of an executing lower-priority
handler results in expensive context switches and
cache misses; channel admissibility is significantly improved
if preemption overheads are amortized over the processing
of several packets. The maximum number of packets
processed in a block is a system parameter determined
via experimentation on a given host architecture. Cooperative
preemption provides a reasonable mechanism to bound
CPU access delays while improving utilization, especially if
all handlers execute within a single (kernel) address space.
Link bandwidth is managed via multi-class non-preemptive
EDF scheduling with link packet queues
organized similar to CPU run queues. Link scheduling
is non-work-conserving to avoid stressing resources at
downstream hosts; in general, the link is allowed to "work
ahead" in a limited fashion, as per the link horizon [16].
Overload protection: Per-channel traffic enforcement is
performed when new messages are inserted into the message
queue, and again when packets are inserted into the link
packet queues. The message queue absorbs message bursts
on a channel, preventing violations of Bmax and Rmax
C_sw    context switch time
C_cm    cache miss penalty                    90 µs
        first-packet processing cost          420 µs
        subsequent-packet processing cost     170 µs
C_l     per-packet link scheduling cost       160 µs
P       packets between preemption points     4 pkts
S       maximum packet size                   4 KB
Table 1. Important system parameters.
on this channel from interfering with other, well-behaved
channels. During deadline assignment, new messages are
checked for violations in Mmax and Rmax . Before inserting
each message into the message queue, the inter-message
spacing is enforced according to I_min. For violations in
Mmax , the (logical) inter-arrival time between messages is
increased in proportion to the extra packets in the message.
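One plausible form of this scaling is sketched below (our own illustration; the text does not give the implementation's exact formula):

```python
import math

def message_spacing(msg_bytes, m_max, s_max, i_min):
    """Logical inter-message spacing for a message of msg_bytes.
    Conformant messages (<= m_max bytes) are spaced i_min apart; an
    oversized message has its spacing inflated in proportion to the
    extra packets it contributes (s_max: maximum packet size)."""
    pkts_allowed = math.ceil(m_max / s_max)
    pkts_actual = math.ceil(msg_bytes / s_max)
    if pkts_actual <= pkts_allowed:
        return i_min
    # extra packets stretch the logical inter-arrival time
    return i_min * pkts_actual / pkts_allowed
```

A message twice the size of Mmax thus "pays" for its extra packets by doubling its logical inter-arrival time, so the channel's reserved processing and link bandwidth is not exceeded.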
The number of packet buffers available to a channel
is determined by the product of the maximum number of
packets constituting a message (derived from Mmax ) and
the maximum allowable burst length Bmax . Under work-conserving
processing, it is possible that the packets generated
by a handler cannot be accommodated in the channel
packet queue because all the packet buffers available
to the channel are exhausted. A similar situation could
arise in non-work-conserving processing with violations of
Mmax . Handlers of such violating channels are prevented
from consuming excess processing and link capacity, either
by blocking their execution or lowering their priority relative
to well-behaved channels. Blocked handlers are subsequently
woken up when the link scheduler indicates availability
of packet buffers. Blocking handlers in this fashion
is also useful in that a slowdown in the service provided to
a channel propagates up to the application via the message
queue. Once the message queue fills up, the application can
be blocked until additional space becomes available. Al-
ternately, messages overflowing the queue can be dropped
and the application informed appropriately. Note that while
scheduling of handlers and packets provides isolation between
traffic on different channels, interaction between the
CPU and link schedulers helps police per-channel traffic.
Fairness: Under non-work-conserving processing, early
real-time traffic does not consume any resources at the expense
of best-effort traffic. With work-conserving process-
ing, best-effort traffic is given processing and transmission
priority over early real-time traffic.
3.2. CPU preemption delays and overheads
The admission control procedure (D order) must account
for CPU preemption overheads, access delays due to cooperative
preemption, and other overheads involved in managing
resources. In addition, it must account for the overlap
between CPU processing and link transmission, and hence
the relative bandwidths of the CPU and link. In a companion
paper [19], we presented extensions to D order to account
for the above-mentioned factors. Table 1 lists the important
system parameters used in the extensions.
3.3. Determination of P, S, and L_x
P and S determine the granularity at which the CPU and
link, respectively, are multiplexed between channels, and
thus determine channel admissibility at the host [19]. Selection
of P is governed by the architectural characteristics of
the host CPU (Table 1). For a given host architecture, P is
selected such that channel admissibility is maximized while
delivering reasonable data transfer throughput. S is selected
either using end-to-end transport protocol performance or
host/adapter design characteristics. In general, the latency
and throughput characteristics of the adapter as a function of
packet size can be used to pick a packet size that minimizes
per-packet overhead while delivering reasonable data transfer throughput.
For a typical network adapter, the transmission time for
a packet of size s, L_x(s), depends primarily on the overhead
of initiating transmission and the time to transfer the
packet to the adapter and on the link. The latter is a function
of packet size and the data transfer bandwidth available
between host and adapter memories. Data transfer bandwidth
itself is determined by host/adapter design features
(pipelining, queueing on the adapter) and the raw link bandwidth.
If C_x is the overhead to initiate transmission on an
adapter feeding a link of bandwidth B_l bytes/second, L_x
can be approximated as L_x(s) = C_x + s/B_x, where B_x
is the data transfer bandwidth available to/from host memory.
B_x is determined by factors such as the mode (direct
memory access (DMA) or programmed IO) and efficiency
of data transfer, and the degree to which the adapter
pipelines packet transmissions. C_x includes the cost of setting
up DMA transfer operations, if any.
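This model can be evaluated directly with the parameters used later in Section 4.3 (C_x ≈ 40 µs and a 50 ns/byte transfer rate); the helpers below are our own illustration:

```python
def l_x(s_bytes, c_x_us, bytes_per_us):
    """Packet transmission time L_x(s) = C_x + s / B_x, with C_x the
    per-packet transmission-initiation overhead (µs) and B_x the
    host-memory/adapter data transfer bandwidth (bytes/µs)."""
    return c_x_us + s_bytes / bytes_per_us

def effective_bandwidth(s_bytes, c_x_us, bytes_per_us):
    """Effective transmission bandwidth in bytes/second for packets
    of size s_bytes, accounting for the per-packet overhead C_x."""
    return s_bytes / (l_x(s_bytes, c_x_us, bytes_per_us) * 1e-6)
```

For 4 KB packets at 50 ns/byte (B_x = 20 bytes/µs), L_x = 40 + 204.8 = 244.8 µs, i.e. roughly 16 MB/s of effective bandwidth, matching the configuration in Section 4.3; sweeping `s_bytes` shows how larger packets amortize C_x.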
4. Implementation
We have implemented the proposed architecture using a
modified x-kernel 3.1 communication executive [14] that
exercises complete control over a 25 MHz Motorola 68040
CPU. CPU bandwidth is consumed only by communication-related
activities, facilitating admission control and resource
management for real-time channels. 1 x-kernel (v3.1) employs
a process-per-message protocol-processing model and
a priority-based non-preemptive scheduler with multiple priority
levels; the CPU is allocated to the highest-priority runnable
thread, while scheduling within a priority level is FIFO.
4.1. Architectural configuration
Real-time communication is accomplished via a
connection-oriented protocol stack in the communication
executive (see Figure 4(a)). The API exports routines
for channel establishment, channel teardown, and data transfer;
it also supports routines for best-effort data
transfer. Network transport for signalling is provided by a
1 Implementation of the reception-side architecture is a slight variation
of the transmission-side architecture.
Figure 4. Implementation environment: (a) the x-kernel protocol stack, comprising the application programming interface, name service, RTC signalling and traffic enforcement, RPC, clock synchronization, network layer, and link access layer; (b) the EDF scheduler (selection of handler to run) layered above the x-kernel scheduler (selection of process to run), with EDF run queues (current, best-effort, early) for active handlers feeding a designated priority level in the x-kernel run queues.
(resource reservation) protocol layered on top of a remote
procedure call (RPC) protocol derived from x-kernel's
CHAN protocol. Network data transport is provided by a
fragmentation protocol (FRAG), which packetizes large
messages so that communication resources can be multiplexed
between channels on a packet-by-packet basis.
The FRAG transport protocol is a modified, unreliable
version of x-kernel's BLAST protocol with timeout and
data retransmission operations disabled. The protocol
stack also provides protocols for clock synchronization
and network layer encapsulation. The network layer protocol
is connection-oriented and provides network-level
encapsulation for data transport across a point-to-point
communication network. The link access layer provides
link scheduling and includes the network device driver.
More details on the protocol stack are provided in [15].
4.2. Realizing a QoS-sensitive architecture
Process-per-channel model: On successful establishment,
a channel is allocated a channel handler, space for its message
and packet queues, and the message and packet queue
semaphores. If work-conserving protocol processing is de-
sired, a channel proxy is also allocated to the channel. A
channel handler is an x-kernel process (which provides its
thread of control) with additional attributes such as the type
of channel (best-effort or real-time), flags encoding the state
of the handler, its execution priority or deadline, and an
event identifier corresponding to the most recent x-kernel
timer event registered by the handler. In order to suspend
execution until a message is current, a handler utilizes x-kernel's
timer event facility and an event semaphore which
is signaled when the timer expires. A channel proxy is also
an x-kernel process with an execution priority or deadline.
The states of all established channels are maintained in a
linked list that is updated during channel signalling.
We extended x-kernel's process management and
semaphore routines to support handler creation, termi-
nation, and synchronization with message insertions and
availability of packet buffers after packet transmissions.
Each packet of a message must inherit the transmission
Category             Available Policies
Protocol processing  process-per-channel; work-conserving, non-work-conserving
CPU scheduling       fixed-priority with multi-class earliest-deadline-first
Handler execution    cooperative preemption (configurable number of packets between preemptions)
Link scheduling      multi-class earliest-deadline-first (options 1, 2 and 3)
Overload protection  block handler, decay handler deadline, enforce I_min, drop overflow messages
Table 2. Implementation policies.
deadline assigned to the message. We modified the BLAST
protocol and message manipulation routines in x-kernel
to associate the message deadline with each packet. Each
outgoing packet carries a global channel identifier, allowing
efficient packet demultiplexing at a receiving node.
CPU scheduling: For multi-class EDF scheduling, three
distinct run queues are maintained for channel handlers, one
for each of the three classes mentioned in Section 3.1, similar
to the link packet queues. Q1 is a priority queue implemented
as a heap ordered by handler deadline while Q2 is
implemented as a FIFO queue. Q3, utilized only when the
protocol processing is work-conserving, is a priority queue
implemented as a heap ordered by the logical arrival time of
the message being processed by the handler. Channel proxies
are also realized as x-kernel threads and are assigned to
Q3. Since Q3 has the lowest priority, proxies do not interfere
with the execution of channel handlers.
The multi-class EDF scheduler is layered above the x-kernel
scheduler (Figure 4(b)). When a channel handler
or proxy is selected from the EDF run queues, the associated
x-kernel process is inserted into a designated x-kernel
priority level for CPU allocation by the x-kernel sched-
uler. To realize this design, we modified x-kernel's context
switch, semaphore, and process management routines ap-
propriately. For example, a context switch between channel
handlers involves enqueuing the currently-active handler in
the EDF run queues, picking another runnable handler, and
invoking the normal x-kernel code to switch process con-
texts. To support cooperative preemption, we added new
routines to check the EDF and x-kernel run queues for waiting
higher-priority handlers or native x-kernel processes, re-
spectively, and yield the CPU accordingly.
Link scheduling: The implementation can be configured
such that link scheduling is performed via a function call in
the currently executing handler's context or in interrupt context
(option 1), or by a dedicated process/thread (option 2),
or by a new thread after each packet transmission (option
3). As demonstrated in [19], option 1 gives the best performance
in terms of throughput and sensitivity of channel admissibility
to P and S; we focus on option 1 below.
The organization of link packet queues is similar to that
of handler run queues, except that Q3 is used for early packets
when protocol processing is work-conserving. After inserting
a packet into the appropriate link packet queue, channel
handlers invoke the scheduler directly as a function call.
If the link is busy, i.e., a packet transmission is in progress,
the function returns immediately and the handler continues
execution. If the link is idle, current packets (if any) are
transferred from Q3 to Q1, and the highest priority packet
is selected for transmission from Q1 or Q2. If Q1 and Q2
are empty, a wakeup event is registered for the time when
the packet at the head of Q3 becomes current. Scheduler
processing is repeated when the network adapter indicates
completion of packet transmission or the wakeup event for
early packets expires.
Traffic enforcement: A channel's message queue
semaphore is initialized to Bmax ; messages overflowing
the message queue are dropped. The packet queue
semaphore is initialized to Bmax · Npkts, the maximum
number of outstanding packets permitted on a channel. On
completion of a packet's transmission, its channel's packet
queue semaphore is signalled to indicate availability of
packet buffers and enable execution of a blocked handler.
If the overflow is due to a violation in Mmax , the handler's
priority/deadline is degraded in proportion to the extra
packets in its payload, so that further consumption of CPU
bandwidth does not affect other well-behaved channels.
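The semaphore protocol can be sketched with counting semaphores (Python's `threading` module stands in for the x-kernel semaphore routines; the class and method names are ours):

```python
import threading

class ChannelBuffers:
    """Per-channel flow control via counting semaphores: the message
    queue semaphore starts at Bmax message slots, and the packet queue
    semaphore at Bmax * Npkts packet buffers, where Npkts is the
    maximum number of packets constituting a message."""

    def __init__(self, b_max, n_pkts):
        self.msg_slots = threading.Semaphore(b_max)
        self.pkt_bufs = threading.Semaphore(b_max * n_pkts)

    def submit_message(self):
        """Try to enqueue a message; bursts beyond Bmax are dropped."""
        return self.msg_slots.acquire(blocking=False)

    def on_message_dequeued(self):
        """Handler dequeued a message, freeing a message-queue slot."""
        self.msg_slots.release()

    def acquire_packet_buffer(self):
        """Blocks the channel handler when packet buffers are exhausted."""
        self.pkt_bufs.acquire()

    def on_packet_transmitted(self):
        """Link scheduler signals buffer availability on transmit
        completion, waking a blocked handler if one is waiting."""
        self.pkt_bufs.release()
```

The blocking `acquire_packet_buffer` / releasing `on_packet_transmitted` pair is the coupling between the CPU and link schedulers that propagates link-level backpressure up to the handler and, via the message queue, to the application.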
Table
2 summarizes the available policies and options.
4.3. System parameterization
Table
1 lists the system parameters for our implementation.
Selection of P and S is based on the tradeoff between available
resources and channel admissibility [19]. The packet
transmission time model presented in Section 3.3 requires
that C_x and B_x be determined for a given network adapter
and host architecture. An evaluation of the available networking
hardware revealed significant performance-related
deficiencies (poor data transfer throughput; high and unpredictable
packet transmission time) [15]. These deficiencies
in the adapter design severely limited our ability to demonstrate
the capabilities of our architecture. Given our focus on
unidirectional data transfer, it suffices to ensure that transmission
of a packet of size s takes L_x(s) time units. This can
be achieved by emulating a network adapter that consumes
L_x(s) time units for each packet being transmitted.
We have implemented such a device emulator, the null
device [19], that can be configured to emulate a desired
packet transmission time. We have used it to study a variety
of tradeoffs, such as the effects of the relationship between
CPU and link processing bandwidth, in the context of QoS-sensitive
protocol processing [19]. We experimentally determined
C_x to be ≈ 40 µs. For the experiments we select B_x
to correspond to a link (and data transfer) speed
of 50 ns per byte, for an effective packet transmission bandwidth
(for 4 KB packets) of 16 MB/s.
5. Evaluation
We evaluate the efficacy of the proposed architecture in isolating
real-time channels from each other and from best-
Ch    Mmax    Bmax    Rmax    I_min    Deadline
Table 3. Workload used for the evaluation (per-channel traffic specification and message deadline).
effort traffic. The evaluation is conducted for a subset of
the policies listed in Table 2, under varying degrees of traffic
load and traffic specification violations. In particular,
we evaluate the process-per-channel model with non-work-
conserving multi-class EDF CPU scheduling and non-work-
conserving multi-class EDF link scheduling using option 1
(Section 4.2). Overload protection for packet queue overflows
is provided via blocking of channel handlers; messages
overflowing the message queues are dropped. The parameter
settings of Table 1 are used for the evaluation.
5.1. Methodology and metrics
We chose a workload that stresses the resources on our plat-
form, and is shown in Table 3. Similar results were obtained
for other workloads, including a large number of channels
with a wide variety of deadlines and traffic specifications.
Three real-time channels are established (channel establishment
here is strictly local) with different traffic specifica-
tions. Channels 0 and 1 are bursty while channel 2 is periodic
in nature. Best-effort traffic is realized as channel 3,
with a variable load depending on the experiment, and has
similar semantics as the real-time traffic, i.e., it is unreliable
with no retransmissions under packet loss.
Messages on each real-time channel are generated by an
x-kernel process, running at the highest priority, as specified
by a linear bounded arrival process with bursts of up
to Bmax messages. Rate violations are realized by generating
messages at rates that are multiples of Rmax . The best-effort
traffic generating process is similar, but runs at a priority
lower than that of the real-time generating processes
and higher than the x-kernel priority assigned to channel
handlers. Each experiment's duration corresponds to 32K
packet transmissions; only steady-state behavior is evaluated
by ignoring the first 2K and last 2K packets.
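The message generation described above can be sketched as a token-bucket realization of a linear bounded arrival process: bursts of up to Bmax messages, long-term rate Rmax. The generator below is an illustration of ours, not the actual traffic generator; names and parameters are assumptions:

```python
# Simplified linear bounded arrival process (LBAP) generator: at most
# b_max messages in a burst, long-term rate r_max messages per second.
# Tokens accrue at rate r_max and are capped at b_max (the burst bound).

def lbap_arrivals(duration_s, r_max, b_max, step_s):
    """Return message arrival times in [0, duration_s)."""
    arrivals = []
    tokens = float(b_max)          # start with a full burst allowance
    t = 0.0
    while t < duration_s:
        burst = int(tokens)        # emit as many whole messages as allowed
        arrivals.extend([t] * burst)
        tokens -= burst
        tokens = min(b_max, tokens + r_max * step_s)  # accrue for next step
        t += step_s
    return arrivals

arr = lbap_arrivals(10.0, 5.0, 3, 0.5)
```

In any window of length w, the number of arrivals stays below b_max + r_max * w, which is the defining property of a linear bounded arrival process.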
All experiments reported here have traffic enforcement
and CPU and link scheduling enabled. The following metrics
measure per-channel performance. Throughput refers to
the service received by each channel and best-effort traffic.
It is calculated by counting the number of packets successfully
transmitted within the experiment duration. Message
laxity is the difference between the transmission deadline
of a real-time message and the actual time that it completes
transmission. Deadline misses measures the number of real-time
packets missing deadlines. Packet drops measures the
number of packets dropped for both real-time and best-effort
traffic. Deadline misses and packet drops account for QoS
violations on individual channels.
Figure 5. Maintenance of QoS guarantees when traffic specifications are honored: (a) throughput (KB/s) and (b) message laxity (ms, mean and min per channel) versus offered best-effort load (KB/s).
5.2. Efficacy of the proposed architecture
Figure
5 depicts the efficacy of the proposed architecture
in maintaining QoS guarantees when all channels honor
their traffic specifications. Figure 5(a) plots the throughput
of each real-time channel and best-effort traffic as a function
of offered best-effort load. Several conclusions can be
drawn from the observed trends. First, all real-time channels
receive their desired bandwidth; since no packets were
dropped (not shown here) or late (Figure 5(b)), the QoS requirements
of all real-time channels are met. Increase in offered
best-effort load has no effect on the service received
by real-time channels. Second, best-effort traffic throughput
increases linearly until system capacity is exceeded; real-time
traffic (early and current) does not deny service to best-effort
traffic. Third, even under extreme overload condi-
tions, best-effort throughput saturates and declines slightly
due to packet drops, without affecting real-time traffic.
Figure
5(b) plots the message laxity for real-time traffic,
also as a function of offered best-effort load. No messages
miss their deadlines, since minimum laxity is non-negative
for all channels. In addition, the mean laxity for real-time
messages is largely unaffected by an increase in best-effort
load, regardless of whether the channel is bursty or not.
Figure
6 demonstrates the same behavior even in the
presence of traffic specification violations by real-time chan-
nels. Channel 0 generates messages at a rate faster than
specified while best-effort traffic is fixed at approximately 1900 KB/s.
In Figure 6(a), not only do well-behaved real-time channels
and best-effort traffic receive their expected service, channel
0 also receives only its expected service. The laxity behavior
is similar to that shown in Figure 5(b). No real-time
packets miss deadlines, including those of channel 0. How-
ever, channel 0 overflows its message queue and drops excess
messages (Figure 6(b)). None of the other real-time
channels or best-effort traffic incur any packet drops.
5.3. Need for cooperative preemption
The preceding results demonstrate that the architectural features
provided are sufficient to maintain QoS guarantees.
The following results demonstrate that these features are
also necessary. In Figure 7(a), protocol processing for best-effort
traffic is non-preemptive. Even though best-effort
traffic is processed at a lower priority than real-time traf-
fic, once the best-effort handler obtains the CPU, it continues
to process messages from the message queue regardless
of any waiting real-time handlers, making CPU scheduling
QoS-insensitive. As can be seen, this introduces a significant
number of deadline misses and packet drops, even
at low best-effort loads. The deadline misses and packet
drops increase with best-effort load until the system capacity
is reached. Subsequently, all excess best-effort traffic is
dropped, while the drops and misses for real-time channels
decline. The behavior is largely unpredictable, in that different
channels are affected differently, and depends on the mix
of channels. This behavior is exacerbated by an increase
in the buffer space allocated to best-effort traffic; the best-effort
handler now runs longer before blocking due to buffer
overflow, thus increasing the window of non-preemptibility.
Figure
7(b) shows the effect of processing real-time messages
with preemption only at message boundaries. Early
handlers are allowed to execute in a work-conserving fashion
but at a priority higher than best-effort traffic. Note that
all real-time traffic is still being shaped since logical arrival
time is enforced. Again, we observe significant deadline
misses and packet drops for all real-time channels. Best-effort
throughput also declines due to early real-time traffic
having higher processing priority. This behavior worsens
when the window of non-preemptibility is increased by
draining the message queue each time a handler executes.
Discussion: The above results demonstrate the need for
cooperative preemption, in addition to traffic enforcement
and CPU scheduling. While CPU and link scheduling were
always enabled, real-time traffic was also shaped via traffic
enforcement. If traffic was not shaped, one would observe
significantly worse real-time and best-effort performance
due to non-conformant traffic. We also note that
a fully-preemptive kernel is likely to have larger, unpredictable
costs for context switches and cache misses. Cooperative
preemption provides greater control over preemption
Figure 6. Maintenance of QoS guarantees under violation of Rmax: (a) throughput (KB/s) and (b) number of packets dropped, versus offered load on channel 0 (KB/s).
points, which in turn improves utilization of resources that
may be used concurrently. For example, a handler can initiate
transmission on the link before yielding to any higher
priority activity; arbitrary preemption may occur before the
handler initiates transmission, thus idling the link.
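The cooperative-preemption idea can be illustrated with a handler loop that offers a preemption point at each packet boundary, only after initiating transmission. This is an illustrative sketch of ours, not the actual x-kernel handler code; all names are assumed:

```python
# Illustrative cooperative preemption: the handler processes one packet
# per iteration and yields at packet boundaries, but only after starting
# the transmission, so the link is never left idle across a preemption.

def run_handler(packet_queue, initiate_tx, higher_priority_ready, yield_cpu):
    """Process all queued packets; yield the CPU at packet boundaries."""
    processed = 0
    while packet_queue:
        packet = packet_queue.pop(0)
        initiate_tx(packet)            # start link transmission first ...
        processed += 1
        if higher_priority_ready():    # ... then offer a preemption point
            yield_cpu()
    return processed
```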
6. Related work
While we have focused on host communication subsystem
design to implement real-time channels, our implementation
methodology is applicable to other proposals for providing
QoS guarantees in packet-switched networks. A detailed
survey of the proposed techniques can be found in [3].
Similar issues are being examined for provision of integrated
services on the Internet [7, 6]. The expected QoS
requirements of applications and issues involved in sharing
link bandwidth across multiple classes of traffic are explored
in [24, 10]. The issues involved in providing QoS support
in IP-over-ATM networks are also being explored [5, 22].
The Tenet protocol suite [4] provides real-time communication
on wide-area networks (WANs), but does not incorporate
protocol processing overheads into their network-level
resource management policies. In particular, it does not provide
QoS-sensitive protocol processing inside end hosts.
The need for scheduling protocol processing at priority
levels consistent with the communicating application
was highlighted in [1] and some implementation strategies
demonstrated in [12]. Processor capacity reserves in Real-Time
Mach [21] have been combined with user-level protocol
processing [18] for predictable protocol processing inside
hosts [17]. Operating system support for multimedia
communication is explored in [25, 13]. However, no explicit
support is provided for traffic enforcement or decoupling of
protocol processing priority from application priority. The
Path abstraction [11] provides a rich framework for development
of real-time communication services.
7. Conclusions and future work
We have proposed and evaluated a QoS-sensitive communication
subsystem architecture for end hosts that supports
guaranteed-QoS connections. Using our implementation of
real-time channels, we demonstrated the efficacy with which
the architecture maintains QoS guarantees and delivers reasonable
performance to best-effort traffic. While we evaluated
the architecture for a relatively lightweight stack, such
support would be necessary if computationally intensive services
such as coding, compression, or checksums are added
to the protocol stack. The usefulness of the features also depends
on the relative bandwidths of the CPU and the link.
The proposed architectural features are independent of our
platform, and are generally applicable.
Our work assumes that the network adapter (i.e., the underlying
network) does not provide any explicit support for
QoS guarantees, other than providing a bounded and predictable
packet transmission time. This assumption is valid
for a large class of networks prevalent today, such as FDDI
and switch-based networks. Thus, link scheduling is realized
in software, requiring lower layers of the protocol stack
to be cognizant of the delay-bandwidth characteristics of the
network. A software-based implementation also enables experimentation
with a variety of link sharing policies, especially
if multiple service classes are supported. The architecture
can also be extended to networks providing explicit
support for QoS guarantees, such as ATM.
We are now extending the null device into a sophisticated
network device emulator providing link bandwidth
management, to explore issues involved when interfacing to
adapters with support for QoS guarantees. For true end-to-end
guarantees, scheduling of channel handlers must
be integrated with application scheduling. We are currently
implementing the proposed architecture in OSF Mach-RT,
a microkernel-based uniprocessor real-time operating sys-
tem. Finally, we have extended this architecture to shared-memory
multiprocessor multimedia servers [20].
--R
Structure and scheduling in real-time protocol implementations
Support for continuousmedia in the DASH sys- tem
The Tenet real-time protocol suite: Design
Integration of real-time services in an IP-ATM network architecture
Integrated services in the Internet architecture: An overview.
Supporting real-time applications in an integrated services packet network: Architecture and mechanism
A Calculus for Network Delay and a Note on Topologies of Interconnection Networks.
A scheme for real-time channel establishment in wide-area networks
Programming with system resources in support of real-time distributed applications
Scheduling and IPC mechanisms for continuous media.
Workstation support for real-time multimedia communication
Design tradeoffs in implementing real-time channels on bus-based multiprocessor hosts
Predictable communication protocol processing in Real-Time Mach
Resource management for real-time communication: Making theory meet practice
Processor capacity reserves for multimedia operating systems.
ATM signaling support for IP over ATM.
Transport system architecture services for high-performance communications systems
A scheduling service model anda schedulingarchitecture for an integrated services packet network.
The Heidelberg resource administration technique design philosophy and goals.
--TR
--CTR
Binoy Ravindran , Lonnie Welch , Behrooz Shirazi, Resource Management Middleware for Dynamic, Dependable Real-Time Systems, Real-Time Systems, v.20 n.2, p.183-196, March 2001
Christopher D. Gill , Jeanna M. Gossett , David Corman , Joseph P. Loyall , Richard E. Schantz , Michael Atighetchi , Douglas C. Schmidt, Integrated Adaptive QoS Management in Middleware: A Case Study, Real-Time Systems, v.29 n.2-3, p.101-130, March 2005
Christopher D. Gill , David L. Levine , Douglas C. Schmidt, The Design and Performance of a Real-Time CORBA Scheduling Service, Real-Time Systems, v.20 n.2, p.117-154, March 2001 | traffic enforcement;CPU and link scheduling;QoS-sensitive resource management;real-time communication |
271016 | A Multiframe Model for Real-Time Tasks. | AbstractThe well-known periodic task model of Liu and Layland [10] assumes a worst-case execution time bound for every task and may be too pessimistic if the worst-case execution time of a task is much longer than the average. In this paper, we give a multiframe real-time task model which allows the execution time of a task to vary from one instance to another by specifying the execution time of a task in terms of a sequence of numbers. We investigate the schedulability problem for this model for the preemptive fixed priority scheduling policy. We show that a significant improvement in the utilization bound can be established in our model. | Introduction
The well-known periodic task model by Liu and Layland(L&L) [1] assumes a worst-case execution
time bound for every task. While this is a reasonable assumption for process-control-type real-time
applications, it may be overly conservative [4] for situations where the average-case execution
time of a task is significantly smaller than that of the worst-case. In the case where it is critical
to ensure the completion of a task before its deadline, the worst-case execution time is used at
the price of excess capacity. Other approaches have been considered to make better use of system
resources when there is substantial excess capacity. For example, many algorithms have been developed
to schedule best-effort tasks for resources unused by hard-real-time periodic tasks; aperiodic
task scheduling has been studied extensively and different aperiodic server algorithms have been
developed to schedule them together with periodic tasks [6, 7, 8, 9]. In [10], etc., the imprecise
computation model is used when a system cannot schedule all the desired computation. We have
also investigated an adaptive scheduling model where the timing parameters of a real-time task may
be parameterized [3]. However, none of the work mentioned above addresses the scheduleability of
real-time tasks when the execution time of a task may vary greatly but follows a known pattern.
In this paper, we propose a multiframe task model which takes into account such execution time
patterns; we shall show that better schedulability bounds can be obtained.
In the multiframe model, the execution time of a task is specified by a finite list of numbers. By
repeating this list, a periodic sequence of numbers is generated such that the execution time of each
instance (frame or job) of the task is bounded above by the corresponding number in the periodic
sequence. Consider the following example. Suppose a computer system is used to track vehicles by
registering the status of every vehicle every 3 time units. To get the complete picture, the computer
takes 3 time units to perform the tracking execution, i.e., the computer is 100% utilized. Suppose
in addition, the computer is required to execute some routine task which takes 1 time unit and
the task is to be executed every 5 time units. Obviously, the computer cannot handle both tasks.
Figure 1: Schedule of the Vehicle Tracking System (timelines for the Routine and Tracking tasks)
However, if the tracking task can be relaxed so that it requires only 1 time unit to execute every
other period, then the computer should be able to perform both the tracking and routine tasks (see
the timing diagram in figure 1).
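The relaxed schedule can be checked with a small discrete-time simulation of preemptive fixed-priority (rate-monotonic) scheduling. The simulator below is a sketch of ours; it assumes integer execution times, frames released at the maximum rate starting at time 0, and a deadline one period after each release:

```python
# Discrete-time simulator for preemptive fixed-priority scheduling of
# multiframe tasks. A task is (exec_times, period); list order gives
# priority (index 0 highest). Frames are released every `period` time
# units starting at t = 0; each frame is due one period after release.

def misses_deadline(tasks, horizon):
    """Return True iff any frame misses its deadline before `horizon`."""
    remaining = [0] * len(tasks)   # unfinished work of the current frame
    frame = [0] * len(tasks)       # index into the execution-time array
    deadline = [0] * len(tasks)
    for t in range(horizon):
        for i, (execs, period) in enumerate(tasks):
            if t % period == 0:                    # release the next frame
                remaining[i] = execs[frame[i] % len(execs)]
                frame[i] += 1
                deadline[i] = t + period
        for i in range(len(tasks)):                # run highest-priority work
            if remaining[i] > 0:
                remaining[i] -= 1
                break
        for i in range(len(tasks)):                # check deadlines
            if remaining[i] > 0 and t + 1 >= deadline[i]:
                return True
    return False
```

Under this simulation the relaxed task pair, tracking ((3, 1), 3) and routine ((1,), 5), meets all deadlines, while the L&L pair ((3,), 3) and ((1,), 5) does not.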
This solution cannot be obtained by the L&L model since the worst-case execution time of the
tracking task is 3, so that the periodic task set in the L&L model is given by {(3, 3), (1, 5)} (the first
component in a pair is the execution time and the second the period). This task set has a utilization
factor of 1.2 and is thus unscheduleable. Also notice that we cannot replace the tracking task by a
pair of periodic tasks {(3, 6), (1, 6)}, since a scheduler may defer the execution of the (3, 6) task so
that its first execution extends past the interval [0,3], while in fact it must be finished by time=3.
In this paper, we shall investigate the schedulability of tasks for our multiframe task model
under the preemptive fixed priority scheduling policy. For generality, we allow tasks to be sporadic
instead of periodic. A sporadic task is one whose requests may not occur periodically but there is
a minimum separation between any two successive requests from the same task. A periodic task
is the limiting case of a sporadic task whose requests arrive at the maximum allowable rate. We
shall establish the utilization bounds for our model which will be shown to subsume the L&L result
[1]. To obtain these results, however, we require the execution times of the multiframe tasks to
satisfy a fairly liberal constraint. It will be seen that the schedulability bounds can be improved
substantially if there is a large variance between the peak and non-peak execution time of a task.
Using the multiframe model, we can safely admit more real-time tasks than the L&L model.
The paper is organized as follows. Section 2 presents our multiframe real-time task model,
defines some terminology, and prove some basic results about scheduling multiframe tasks. Section
3 investigates the schedulability bound of the fixed priority scheduler for the multiframe model.
Section 4 is the conclusion.
2 The Multiframe Task Model
For the rest of the paper, we shall assume that time values range over the set of non-negative
real numbers. All timing parameters in the following definitions are non-negative real numbers.
We remark that all our results will still hold if the domain of time is the non-negative integers.
Definition 1 A multiframe real-time task is a tuple (\Gamma, P), where \Gamma = (C^0, C^1, ..., C^{N-1}) is an array of N execution times and P is the minimum separation time, i.e., the ready times of two consecutive frames (requests) must be at least P time units apart. The execution time of the ith frame of the task is C^((i-1) mod N), where i >= 1. The deadline of each frame is P after its ready time.

For example, ((2, 1), 2) is a multiframe task with a minimum separation time of 2. Its
execution time alternates between 2 and 1. When the separation between two consecutive ready
times is always P and the ready time of the first frame of a task is at time=0, the task reduces to
a periodic task.
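The frame indexing of the definition can be stated directly in code (a trivial sketch of ours):

```python
# Execution time of the i-th frame (1-indexed) of a multiframe task
# (Gamma, P): the array Gamma is cycled, so frame i executes for
# Gamma[(i - 1) mod N] time units.

def frame_execution_time(gamma, i):
    return gamma[(i - 1) % len(gamma)]

# For the task ((2, 1), 2) the frames execute for 2, 1, 2, 1, ... units.
```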
In the proofs to follow, we shall often associate a multiframe task whose \Gamma has only one element
(i.e., N=1) with a periodic task in the L&L model which has the same execution time and whose
period is the same as the minimum separation of the multiframe task. For example, task
((1); 5) has only one execution time and its corresponding L&L task is (1; 5). We shall call a
periodic task in the L&L model an L&L task, and whenever there is no confusion, we shall call a
multiframe task simply a task.
Definition 2 Consider a task T = (\Gamma, P) which has more than one execution time. Let C^m = max{C^0, C^1, ..., C^{N-1}}. We shall call C^m the peak execution time of task T. We shall call the pair (C^m, P) the corresponding L&L task of the multiframe task T.
For a set S of n tasks {T_1, T_2, ..., T_n}, we call

U^m = \Sigma_{i=1}^{n} C^m_i / P_i

the peak utilization factor of S. We call

U^a = \Sigma_{i=1}^{n} ((\Sigma_{j=0}^{N_i - 1} C^j_i) / N_i) / P_i

the maximum average utilization factor of S.

Given a scheduling policy A, we call U^m_A the utilization bound of A if for any task set S, S is scheduleable by A whenever U^m <= U^m_A.

We note that U^m is also the utilization factor of S's corresponding L&L task set.
Example 1 Consider the task set S = {T_1 = ((3, 1), 3), T_2 = ((1), 5)}. Its corresponding L&L task set S' is {T'_1 = (3, 3), T'_2 = (1, 5)}. The peak utilization factor of S is U^m = 1.2. The maximum average utilization factor of S is U^a = 2/3 + 1/5 = 13/15.
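These utilization factors can be computed mechanically. The sketch below assumes the definitions above: U^m sums peak execution time over separation time, and U^a sums the mean of each execution-time array over separation time; the task set shown is the one of Example 1:

```python
# Peak and maximum average utilization of a multiframe task set.
# A task is (gamma, p): gamma = execution-time array, p = min separation.

def peak_utilization(tasks):
    return sum(max(gamma) / p for gamma, p in tasks)

def max_average_utilization(tasks):
    return sum(sum(gamma) / len(gamma) / p for gamma, p in tasks)

S = [((3, 1), 3), ((1,), 5)]      # the task set of Example 1
u_m = peak_utilization(S)         # 3/3 + 1/5 = 1.2
u_a = max_average_utilization(S)  # 2/3 + 1/5 = 13/15
```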
A pessimistic way to analyze the scheduleability of a multiframe task set is to consider the
schedulability of its corresponding L&L task set. This, however, may result in rejecting many task
sets which actually are scheduleable. For example, the task set in Example 1 will be rejected if we
use the L&L model, whereas it is actually scheduleable by a fixed priority scheduler under RMA
(Rate Monotonic Assignment), as we shall show later.
Definition 3 With respect to a scheduling policy A, a task set is said to be fully utilizing the
processor if it is scheduleable by A, but increasing the execution time of any frame of any task will
result in the modified task set being unscheduleable by A.
We note that U^m_A is the greatest lower bound of the peak utilization factors of all fully utilizing task sets with respect to the
scheduling policy A.
Lemma 1 For any scheduling policy A, U^m_A <= 1.

Proof. We shall prove this by contradiction. Suppose U^m_A is larger than 1. We arbitrarily form a task set S = {T_1, ..., T_n} whose tasks all have the same minimum separation time P and whose peak utilization factor is U^m = U^m_A, where C^m_i is the peak execution time of T_i. So we have \Sigma_{i=1}^{n} C^m_i = U^m_A * P > P. When the peak frames of all the tasks start at the same time, they cannot all be finished within P by any scheduler, which violates the definition of U^m_A. So U^m_A cannot exceed 1. QED.
Lemma 2 Suppose A is a scheduling policy which can be used to schedule both multiframe and L&L task sets. Let the utilization bound of A be U^m_A for multiframe task sets. Let the utilization bound of A for the corresponding L&L task sets be U^c_A. Then U^m_A >= U^c_A.

Proof. The proof is by contradiction. Consider a task set S of size n. Suppose U^m <= U^c_A and the set is unscheduleable. Its corresponding L&L task set S' has the same utilization factor U^m, so S' is scheduleable.

Suppose the ith frame of task T_j misses its deadline at time t_j. For every task T_k, locate the time point t_k which is the ready time of the latest frame of T_k such that t_k <= t_j. We transform the ready time pattern as follows. In the interval from 0 to t_k, we push the ready times of all frames toward t_k so that the separation times of all consecutive frames are all equal to P_k. We now set all execution times to be the peak execution time. If t_k < t_j for some k, we add more peak frames of T_k at its maximum rate in the interval between t_k and t_j. The transformed ready time pattern is at least as stringent as the original case. So the ith frame of T_j still misses its deadline. However, the transformed case is actually a ready time pattern of S' which should be scheduleable, hence a contradiction. QED.
Is the inequality in Lemma 2 strict? Intuitively, if U^m of a task set is larger than U^c_A and there
is not much frame-by-frame variance in the execution times of the tasks in the set, then the task set
is unlikely to be scheduleable. However, if the variance is sufficiently big, then the same scheduling
policy will admit more tasks. This can be quantified by determining the utilization bound for our
task model. We shall show how to establish an exact bound if the execution times of the tasks
satisfy a rather liberal restriction.
Definition 4 Let C^m be the maximum in an array of execution times (C^0, C^1, ..., C^{N-1}). The array is said to be AM (Accumulatively Monotonic) if \Sigma_{i=m}^{m+j} C^(i mod N) >= \Sigma_{i=k}^{k+j} C^(i mod N) for all j, k with 0 <= j, k <= N - 1. A task is said to be AM if its array of execution times is AM.

Intuitively, an AM task is a task whose total execution time for any sequence of L >= 1 frames
is the largest among all size-L frame sequences when the first frame in the sequence is the frame
with the peak execution time. For instance, all tasks in Example 1 are AM. We note that tasks in
multimedia applications usually satisfy this restriction.
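The AM property can be tested directly from its definition; the checker below is a sketch of ours (the window formulation restates the definition above):

```python
# Check the AM (Accumulatively Monotonic) property: every window of
# consecutive frames starting at the peak frame must have the largest
# total execution time among all windows of the same length.

def is_am(gamma):
    n = len(gamma)
    m = max(range(n), key=lambda i: gamma[i])      # index of a peak frame
    def window_sum(start, length):
        return sum(gamma[(start + j) % n] for j in range(length))
    return all(window_sum(m, length) >= window_sum(k, length)
               for length in range(1, n + 1) for k in range(n))

# (3, 1) and (2, 2, 1) are AM; (3, 1, 2) is not, since the length-2
# window starting at the last frame sums to 5, beating (3, 1)'s 4.
```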
In the following section, we assume that all tasks satisfy the AM property. It will be seen that
without loss of generality, we can assume that the first component of the array of execution time
of every task is its peak execution time, i.e., C^m = C^0.
3 Fixed Priority Scheduling
In this section, we shall show that, for the preemptive fixed-priority scheduling policy, the multiframe
task model does have a higher utilization bound than the L&L model if we consider the
execution time variance explicitly. The utilization bound for the L&L model is given by the following
theorem in the much cited paper [1].
Theorem 1 (Theorem 5 from [1]) For L&L task sets of size n, the utilization bound of the preemptive fixed priority scheduling policy is n(2^{1/n} - 1).
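The bound is easy to evaluate numerically; the one-liner below (ours) also illustrates its behavior, decreasing from 1.0 at n = 1 toward ln 2 as n grows:

```python
# The L&L utilization bound n(2^(1/n) - 1) for n periodic tasks under
# preemptive fixed-priority scheduling; it decreases from 1.0 toward
# ln 2 ~ 0.693 as n grows.

from math import log

def ll_bound(n):
    return n * (2 ** (1 / n) - 1)
```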
Definition 5 The critical instance of a multiframe task is the period when its peak execution time
is requested simultaneously with the peak execution times of all higher priority tasks, and all higher
priority tasks request execution at the maximum rate.
Theorem 2 For the preemptive fixed priority scheduling policy, a multiframe task is scheduleable
if it is scheduleable in its critical instance.
Proof. Suppose a task T_k = (\Gamma_k, P_k) is scheduleable in its critical instance. We shall prove that all its frames are scheduleable regardless of their ready times.

First, we prove that the first frame of T_k is scheduleable. Let T_k be ready at time t and its first frame finish at t_end. We trace backward in time from time=t to locate a point t' when none of the higher priority tasks was being executed. t' always exists, since at time 0 no task is scheduled. Let us pretend that T_k's first frame becomes ready at time t'. It will still finish at time t_end. Now let us shift the ready time pattern of each higher priority task such that its frame which becomes ready after t' now becomes ready at t'. This will only postpone the finish time of T_k's first frame to a point no earlier than t_end, say t'_end. In other words, t_end <= t'_end. Then for each higher priority task, we shift the ready time of every frame after t' toward time=0, so that the separation between two consecutive frames is always the minimum separation time. This will further postpone the finish time of T_k's first frame to no earlier than t'_end, say t''_end. In other words, t'_end <= t''_end. Now, we shift all higher priority tasks by frames until the peak frame starts at t'. Since the execution time arrays are AM, this shifting has the effect of postponing the finish time of T_k to t'''_end, with t''_end <= t'''_end. By construction, the resulting request pattern is the critical instance for T_k. Since T_k is scheduleable in its critical instance, we have t'''_end <= t' + P_k <= t + P_k. So T_k's first frame is scheduleable.
Next, we prove that all other frames of T_k are also scheduleable. This is done by induction.

Induction base case: The first frame of T_k is scheduleable.

Induction step: Suppose the first i frames of T_k are scheduleable. Let us consider the (i+1)th frame and apply the same argument as before. Suppose that this frame starts at time t and finishes at t_end. Again, we trace backward from t along the time line until we hit a point t' when no higher priority task is being executed. t' always exists, since no higher priority task is being executed at the finish time of the ith frame. Let the (i+1)th frame start at time t'. It will still finish at time t_end. Now shift the requests of each higher priority task such that its frame which starts after t' now starts at t'. This will only postpone the finish time of T_k's (i+1)th frame to a point in time no earlier than t_end, say t'_end. Then for each higher priority task, we shift the ready time of every frame after t' toward time=0 so that the separation time between any two consecutive frames is always the minimum separation time of the task. This will further postpone the finish time of T_k's (i+1)th frame to no earlier than t'_end, say t''_end. In other words, t'_end <= t''_end. Now for all higher priority tasks, we shift them by frames until the peak frames start at t'. Again, since the execution time arrays are AM, this further postpones the finish time of T_k's (i+1)th frame to t'''_end, with t''_end <= t'''_end. This last case is actually the critical instance for T_k. Since T_k is scheduleable in its critical instance, we have t'''_end <= t' + P_k <= t + P_k. So T_k's (i+1)th frame is also scheduleable. We have thus proved the theorem. QED.
We shall say that a task passes its critical instance test if it is scheduleable in its critical instance.
Corollary 1 A task set is scheduleable by a fixed priority scheduler if all its tasks pass the critical
instance test.
From now on, we can assume, without loss of generality that C 0 is the peak execution time of
a task without affecting the schedulability of the task set. This is because we can always replace
a task T whose peak execution time is not in the first frame by one whose execution time array is
obtained by rotating T's array so that the peak execution time is C^0. From the argument in the
proof of theorem 2, it is clear that such a task replacement does not affect the result of the critical
instance test.
Example 2 Task set {((2, 1), 3), ((3), 7)} is scheduleable under rate-monotonic assignment: U^m = 2/3 + 3/7 > 1, yet both tasks pass their critical instance test. However, its corresponding L&L task set {(2, 3), (3, 7)} is unscheduleable under any fixed priority assignment.
Example 3 The L&L task set {(3, 3), (1, 5)} with utilization factor 1.2 is obviously unscheduleable by any scheduling policy. However, if the requirement of the first task is relaxed such that every other frame needs only 1 time unit, the task set becomes scheduleable by RMA. This is because the relaxed case is given by the multiframe task set {((3, 1), 3), ((1), 5)}, which passes the critical instance test.
We remark that Example 3 above specifies the vehicle tracking system mentioned at
the beginning of this paper. From the above argument, we can now establish its schedulability.
These examples also show that even if the total peak utilization exceeds 1, a task set may still be
schedulable. Of course, the average utilization must not be larger than 1 for scheduleability.
The complexity of the scheduleability test based on Corollary 1 is O(P ), where P is the biggest
period.
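Corollary 1 suggests a time-demand formulation of the critical instance test: the peak frame of T_k competes with each higher-priority task releasing its AM-ordered frames at the maximum rate from time 0. The sketch below is our formulation (tasks listed highest priority first, arrays assumed AM with the peak execution time first):

```python
# Critical-instance (time-demand) test for preemptive fixed priority:
# iterate t <- C_k^0 + sum_j demand_j(ceil(t / P_j)) over higher-priority
# tasks j, where demand_j(l) is the total execution time of the first l
# frames of T_j starting at its peak frame. T_k passes iff a fixed point
# t <= P_k is reached.

from math import ceil

def demand(gamma, frames):
    n = len(gamma)
    return sum(gamma[i % n] for i in range(frames))

def passes_critical_instance(tasks, k):
    gamma_k, p_k = tasks[k]
    t = gamma_k[0]
    while t <= p_k:
        w = gamma_k[0] + sum(demand(g, ceil(t / p)) for g, p in tasks[:k])
        if w == t:
            return True               # fixed point: frame finishes at t
        t = w
    return False

def schedulable(tasks):
    return all(passes_critical_instance(tasks, k) for k in range(len(tasks)))
```

On the examples above, this test accepts Example 2's set and the relaxed tracking set of Example 3, and rejects the L&L set {(3, 3), (1, 5)}.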
Theorem 3 If a feasible priority assignment exists for some multiframe task set, the rate-monotonic
priority assignment is feasible for that task set.
Proof. Suppose a feasible priority assignment exists for a task set. Let T_i and T_j be two tasks of adjacent priority in such an assignment with T_i being the higher priority one. Suppose that P_i > P_j. Let us interchange the priorities of T_i and T_j. It is not difficult to see that the resultant
priority assignment is still feasible by checking the critical instances. The rate-monotonic priority
assignment can be obtained from any priority ordering by a finite sequence of pairwise priority
reordering as above. QED.
To compute the utilization bound, we need the following lemma.
Definition 6 Let Ψ(n, α) denote the minimum of the expression (x_1 + x_2 + · · · + x_n) − n
subject to the constraint x_1 · x_2 · · · x_n = α, with x_i > 0 for 1 ≤ i ≤ n.
Lemma 3 Ψ(n, α) = n(α^{1/n} − 1).
Proof. We can compute Ψ(n, α) by solving the problem:
minimize x_1 + x_2 + · · · + x_n subject to x_1 · x_2 · · · x_n = α.
This is a strictly convex problem. There is a unique critical point which is the absolute minimum.
The symmetry of the minimization problem in its variables means that all x_i's are equal in the
solution. So we have x_i = α^{1/n} for every i, and hence Ψ(n, α) = n(α^{1/n} − 1). QED
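A quick numerical sanity check (ours, not part of the paper) of the claimed minimum: by the AM-GM inequality, no feasible point should do better than n(α^{1/n} − 1).

```python
import random
from math import prod

def psi(n, alpha):
    # the claimed closed form of the minimum
    return n * (alpha ** (1 / n) - 1)

def random_feasible_sum(n, alpha):
    """Sample a random feasible point: fix n-1 coordinates, then force
    the product constraint prod(x_i) = alpha with the last one."""
    xs = [random.uniform(0.5, 2.0) for _ in range(n - 1)]
    xs.append(alpha / prod(xs))
    return sum(xs) - n           # the objective, normalized like Psi

random.seed(0)
best = min(random_feasible_sum(4, 2.0) for _ in range(100_000))
print(best >= psi(4, 2.0) - 1e-9)   # True: no sample beats the bound
```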
Definition 7 A task set is said to be extremely utilizing the processor if it is schedulable but
increasing the peak execution time of the lowest priority task by any amount will result in a task set
which is unschedulable.
We shall use U_e to denote the greatest lower bound of the utilization factors of all extremely
utilizing task sets.
It is important to note the distinction between fully utilizing and extremely utilizing task sets;
this distinction is crucial to the proofs of Lemma 4 and Lemma 5.
Lemma 4 Consider all task sets of size n satisfying the restriction P_n ≤ 2P_1. For such task sets,
U_e = r · n · (((r + 1)/r)^{1/n} − 1).
Proof. From Theorem 2 and Theorem 3, we only need to consider the case where all tasks
start at time 0 and request at their maximum rates thereafter. We can use rate-monotonic priority
assignment and check for schedulability in the interval from time 0 to P_n. Since P_n ≤ 2P_1,
we know that only C^0 and C^1 are involved in all the critical instance tests.
First, the utilization bound corresponds to the case where every C^0_i/C^1_i equals r, since we can
increase C^1_i without changing U_m, and increasing C^1_i will only take more CPU time. So in the
following proof we assume that all the ratios are equal to r.
For any schedulable and extremely utilizing task set S with U_m = U_e, we shall prove four
claims.
Claim 1: The second request of every task T_i must be finished before P_n.
Suppose a part δ of C^1_i is scheduled after P_n. Then we can derive a new task set S′ by only changing
the following execution times of T_i and T_n,
and arbitrarily reducing other execution times of T_i to maintain the AM property of the execution
time arrays. It is easy to show that S′ is schedulable and also extremely utilizes the processor.
This contradicts the assumption that U_e is the minimum over all extremely utilizing task sets. So
the second request of any T_i should be completed before P_n.
Claim 2: If
0, we can derive a new task set S′ by only changing the following execution times of T_i
and T_n, and arbitrarily reducing other execution times of T_i to maintain the AM property of the
execution time arrays.
It is easy to check that S′ is schedulable and also extremely utilizes the processor.
This contradicts the assumption that U_e is the minimum. So C^0
3: If
n should be finished before P i .
Instead of proving claim 3, we prove the following equivalent claim:
Consider an extremely utilizing task set S satisfying Claim 1 and Claim 2. If the last part of C^0_n
finishes between P_i and P_{i+1}, then S
does not correspond to the minimal case.
As in Claim 2, we can derive a new task set S′ by only changing the following execution times
of T_i and T_n, and arbitrarily reducing other execution times of T_n to maintain the AM property of
the execution time arrays.
Suppose P_j is the smallest such value. According to Claim 1 and
Claim 2, the second requests of all tasks other than T_n are scheduled between P_j and P_n, and
we know the first requests of all tasks other than T_n are all scheduled before P_j. Since S
extremely utilizes the CPU, we know that the part of C^0_n scheduled before P_j is larger
than that scheduled after P_j. This guarantees that the new task set S′ is still schedulable and
extremely utilizes the CPU.
Hence, the task set S cannot be the minimal case. This establishes Claim 3.
Claim 4: The second request of T_i should be completed exactly
at time P_{i+1}.
If the second request of T_i completes ahead of P_{i+1}, the processor will idle between
its completion time and P_{i+1}, which shows that S does not extremely utilize the processor. So this
cannot be true.
If a part δ of the second request of T_i is scheduled after P_{i+1}, we derive a new task set S′ by
only changing the following execution times of T_i and T_{i+1}, and arbitrarily reducing other execution
times of T_i to maintain the AM property of the execution time arrays.
Again it is easy to check that S′ is schedulable and also extremely utilizes the processor.
This contradicts the assumption that U_e is the minimum.
So the second request of T_i should be completed exactly at time P_{i+1}.
From these four claims and Lemma 3, we can conclude that for such task sets
U_e = r · Ψ(n, (r + 1)/r) = r · n · (((r + 1)/r)^{1/n} − 1). QED
Lemma 5 Among all task sets of size n, U_e = r · n · (((r + 1)/r)^{1/n} − 1).
Proof. Again, we assume all C^0/C^1 ratios equal r, and all tasks request at the maximum rate. For
any task T_i in an extremely utilizing task set with P_n > 2P_i, we can replace T_i by a task
T′_i such that P′_i = 2P_i, and then
increase C^0_n by the amount needed to again extremely utilize the processor. This increase is smaller
than the corresponding decrease in the demand from T_i. Let the old and new utilization factors be U_m
and U′_m respectively; then U′_m ≤ U_m.
Therefore we can conclude that the minimum utilization occurs among task sets in which the longest
period is no larger than twice the shortest period. This establishes Lemma 5. QED
Theorem 4 Let r = min_{1≤i≤n} C^0_i/C^1_i. Among the AM multiframe task
sets of size n, the utilization bound is given by
U = r · n · (((r + 1)/r)^{1/n} − 1).
Proof. By definition, the least upper bound is the minimum of the U_e for task sets of size
ranging from 1 to n, and we have
min_{1≤m≤n} r · m · (((r + 1)/r)^{1/m} − 1) = r · n · (((r + 1)/r)^{1/n} − 1). QED
We observe that Liu and Layland's Theorem 1 is a special case of Theorem 4 with r = 1, where
the frame separation time equals the period.
The following tables summarize the relative advantage of using the multiframe model over the
L&L model in determining whether a set of tasks is schedulable. The column under U_L&L gives
the utilization bound in the L&L model.

n     U_L&L   r=2   r=3   r=4   r=5   r=6   r=7   r=8   r=9   r=10  r→∞
5     0.743   13.6  19.5  22.8  24.9  26.3  27.4  28.2  28.9  29.4  34.5
50    0.698   16.7  24.0  28.2  30.8  32.7  34.1  35.2  36.0  36.7  43.3
100   0.696   16.8  24.3  28.5  31.2  33.1  34.5  35.5  36.4  37.1  43.8

Table 1: Utilization Bound Percentage Improvement
Table 1 shows the percentage improvement of our bound over the Liu and Layland bound.
Specifically, the table entries denote 100 · (U_m/U_L&L − 1), for different combinations of r (the ratio
of the peak execution time to the execution time of the second frame) and n (the number of tasks
in the task set). For example, suppose we have a system capable of processing one Gigabyte of data
per second, and a set of tasks each of which needs to process one Megabyte of data per second.
Using a utilization bound of ln 2, we can only allow 693 tasks. By Theorem 4, we can allow at
least 863 tasks (over 24% improvement) when r ≥ 3.
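These figures can be reproduced numerically (our sketch, taking Theorem 4's bound to be U(n, r) = r · n · (((r + 1)/r)^(1/n) − 1), which reduces to Liu and Layland's n(2^(1/n) − 1) at r = 1):

```python
from math import log

def multiframe_bound(n, r):
    return r * n * (((r + 1) / r) ** (1 / n) - 1)

def ll_bound(n):
    return n * (2 ** (1 / n) - 1)

# One entry of Table 1: n = 5, r = 2 gives a 13.6% improvement.
print(round(100 * (multiframe_bound(5, 2) / ll_bound(5) - 1), 1))  # 13.6

# Admission control with per-task utilization 0.001 (asymptotic bounds,
# since the bound tends to r * ln((r+1)/r) as n grows):
print(int(log(2) / 0.001))          # 693 tasks under the ln 2 bound
print(int(3 * log(4 / 3) / 0.001))  # 863 tasks at r = 3
```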
As r increases, the bound improvement increases. Actually, as r → ∞, a simple calculation
shows that the bound → 1. This says that our model excels when the execution time of the task
varies sharply.
It is also interesting to compare the maximum average utilization with the L&L bound. However,
the maximum average utilization factor may be arbitrarily low even if the maximum utilization
factor is very high. One simple example is {((10, 5, 10)}. So, we take instead the
average of the first two frames of the task. In Table 2 we calculate 100 · (½(1 + 1/r) · U_m)/U_L&L,
which is the ratio of the biggest possible maximum average utilization factor to the Liu and Layland
bound.
Table 2 shows that we can still maintain good overall system utilization when the task execution
time varies.
4 Conclusion and Future Research
In this paper, we give a multiframe model for real-time tasks which is more amenable to specifying
tasks whose execution time varies from one instance to another. In our model, the execution times
of successive instances of a task are specified by a finite array of numbers rather than the single
worst-case execution time of the classical Liu and Layland model.
Using the new model, we derive the utilization bound for the preemptive fixed priority scheduler,
under the assumption that the execution time array of the tasks satisfies the AM (Accumulative
n     U_L&L   r=2   r=3   r=4   r=5   r=6   r=7   r=8   r=9   r=10  r→∞
3     0.780   83.5  77.4  74.3  72.3  71.0  70.0  69.3  68.8  68.3  64.1
5     0.743   85.2  79.7  76.7  74.9  73.7  72.8  72.1  71.6  71.2  67.3
100   0.696   87.6  82.8  80.3  78.7  77.6  76.8  76.2  75.8  75.4  71.9

Table 2: Ratio of maximum average to L&L bound
Monotonic) property. This property is rather liberal and is consistent with common encoding
schemes in multimedia applications, where one of the execution times in an array "dominates" the
others. We show that significant improvement in the utilization bound over the Liu and Layland
model results from using our model. This is useful in dynamic applications where the number of
tasks can vary and the figure of merit for resource allocation is the number of tasks that the system
can admit without causing timing failures.
Work is under way to apply this model to real-life applications such as video stream scheduling
and will be reported in the future.
--R
Scheduling Algorithms for Multiprogramming in a Hard- Real-Time Environment
Fundamental Design Problems of Distributed systems for the Hard-Real-Time Envi- ronment
Load Adjustment in Adaptive Real-Time Systems
The Rate Monotonic Scheduling Algorithm - Exact Characterization and Average Case Behavior
Assigning Real-Time Tasks to Homogeneous Multiprocessor Systems
The Deferrable Server Algorithm for Enhanced Aperiodic Responsiveness in Hard Real-Time Environments
A Practical Method for Increasing Processor Utilization
Aperiodic Servers in a Deadline Scheduling Environment
Aperiodic Task Scheduling for Hard Real-Time Systems
Scheduling Periodic Jobs That Allow Imprecise Results
utilization bound;task model;scheduling;real-time
271027 | An interaction of coherence protocols and memory consistency models in DSM systems. | Coherence protocols and memory consistency models are two important issues in hardware coherent shared memory multiprocessors and software distributed shared memory (DSM) systems. Over the years, many researchers have studied these two issues extensively. However, the interaction between them has not been studied in the literature. In this paper, we study the coherence protocols and memory consistency models used by hardware and software DSM systems in detail. Based on our analysis, we draw a general definition for the memory consistency model, i.e., a memory consistency model is the logical sum of the ordering of events in each processor and the coherence protocol. We also point out that in hardware DSM systems the emphasis of the memory consistency model is relaxing the restriction of event ordering, while in software DSM systems the memory consistency model focuses mainly on relaxing the coherence protocol. Taking Lazy Release Consistency (LRC) as an example, we analyze the relationship between coherence protocols and memory consistency models in software DSM systems, and find that whether the advantages of LRC can be exploited or not depends greatly on its corresponding protocol. We draw the conclusion that the more relaxed the consistency model is, the more relaxed the coherence protocol needed to support it. This conclusion is very useful when designing a new consistency model. Furthermore, we make some improvements on the traditional multiple writer protocol, and, as far as we are aware, we describe the complex state transition for the multiple writer protocol for the first time. In the end, we list the main research directions for memory consistency models in hardware and software DSM systems. | Introduction
Distributed Shared Memory (DSM) systems have gained popular acceptance by combining the scalability
and low cost of distributed systems with the ease of use of a single address space. Generally,
there are two methods to implement DSM systems: hardware and software. Cache-Coherent Non-
Uniform-Memory-Access multiprocessors (CC-NUMA) are the dominant direction of hardware DSM
systems. (The work of this paper is supported by the CLIMBING Program and the President
Foundation of the Chinese Academy of Sciences.) To date, there are many commercial and research
systems, such as Stanford DASH[34],
Stanford FLASH[31], MIT Alewife[5], StarT-Voyager[10], and the SGI Origin series. In the software
DSM alternatives, the general method is to supply a high-level shared memory abstraction on
top of an underlying message-passing based system, such as a multicomputer or a LAN-connected
NOW. For example, Rice Munin[13], Rice TreadMarks[19], Princeton IVY[33], CMU Midway[12],
Utah Quarks[28], Maryland CVM[26], and DIKU CarlOS[29] are current commercial or research
software DSM systems.
Cache coherence and memory consistency models are two important issues in the CC-NUMA
architecture. Cache coherence is mainly used to keep multiple copies of a cache block consistent
across all processors, while the role of a memory consistency model is to specify how memory behaves
with respect to read and write operations from multiple processors. Both of these issues
have received extensive study in the past 20 years, and several cache coherence protocols
and memory consistency models have been proposed in the literature. Coherence protocols include
write-invalidate, write-update, and delayed-update. Memory consistency models include sequential
consistency[32], processor consistency[22][21], weak ordering[3], and release consistency[21], among others.
The coherence problem remains in software DSM systems. The difference between hardware and
software implementations is the granularity of the coherence unit: in software DSM systems the
coherence unit is a page, while in hardware DSM systems it is a cache line.
Since the role of a page in software DSM systems is similar to that of a cache block in hardware
DSM systems, for simplicity, we will use the term cache coherence in both cases in the rest of this
paper. The coherence protocols widely adopted in software DSM systems include the multiple writer
protocol (i.e., the write-shared protocol)[13] and the single writer protocol[26][8]. Memory consistency
models are mainly used to reduce the frequency of communication and the message traffic[27]. Examples
of relaxed consistency models adopted in software DSM systems include eager release consistency[13],
lazy release consistency[27], entry consistency[12], automatic update release consistency[23], scope
consistency[24], home-based lazy release consistency[43], single-writer lazy release consistency[26],
message-driven release consistency[29], and affinity entry consistency[6].
In fact, both cache coherence protocols and memory consistency models describe the behavior
of DSM systems. They are interdependent, and the relationship between them is complex. To
the best of our knowledge, there has been no previous research on this problem.
In this paper, we first present a clear understanding of coherence and memory consistency models
and propose a general definition for the memory consistency model. We also point out that in hardware
DSM systems the emphasis of the memory consistency model is relaxing the restriction of event
ordering, while in software DSM systems the memory consistency model focuses mainly on relaxing
the coherence protocol. Taking Lazy Release Consistency (LRC) as an example, we analyze the
relationship between coherence protocols and memory consistency models in software DSM systems.
We draw the conclusion that the more relaxed the consistency model is, the more relaxed the coherence
protocol needed to support it. This conclusion is very useful when designing a new consistency
model. Furthermore, we make some improvements on the traditional multiple writer protocol by adding
a new state, and, as far as we are aware, we describe the complex state transition for the multiple
writer protocol for the first time. In the end, we list the main research directions for memory
consistency models in hardware and software DSM systems.
The remainder of the paper is organized as follows. In Section 2 we briefly overview the development
of coherence protocols and memory consistency models, and propose a general definition
for the memory consistency model. Taking LRC as an example, the relationship between coherence
protocols and memory consistency models is analyzed in Section 3, where the state transition
graph for the new improved multiple writer protocol is also shown. Related work and
concluding remarks are presented in Section 4 and Section 5, respectively.
2 General Definition for Memory Consistency Model
Censier and Feautrier defined a coherent memory scheme as follows[14]:
Definition 2.1: A memory scheme is coherent if the value returned on a LOAD instruction is
always the value given by the latest STORE instruction with the same address.
This definition, while intuitively appealing, is vague and simplistic. The reality is much more
complex. In a computer system where STOREs can be buffered in a store buffer, it is not clear
whether the "last STORE" refers to the execution of the STORE by a processor, or to the update of
memory. In fact, the above definition contains two different aspects of memory system behavior,
both of which are critical to writing correct shared memory programs. The first aspect, called
coherence, defines what values can be returned by a read operation. The second aspect, called
event ordering in each processor, determines when a written value will be returned by a read
operation. Coherence ensures that multiple processors see a coherent view of the same location,
while event ordering in a processor describes when other processors see a value that has been
updated by this processor. In [38], Hennessy and Patterson present sufficient conditions for
coherence as follows.
1. A read by a processor, P, to a location X that follows a write by P to X, with no writes of X
by another processor occurring between the write and the read by P, always returns the value
written by P.
2. A read by a processor to location X that follows a write by another processor to X returns the
written value if the read and write are sufficiently separated and no other writes to X occur
between the two accesses.
3. Writes to the same location are serialized: that is, two writes to the same location by any two
processors are seen in the same order by all processors. For example, if the values 1 and then
2 are written to a location, processors can never read the value of the location as 2 and then
later read it as 1.
The above three conditions only guarantee that all the processors have a coherent view of location
X. However, since they don't tell us the event ordering in each processor, we can't determine
when a written value will be seen by other processors. As such, in order to capture the behavior
of the memory system accurately, we need to restrict both the coherence and the event ordering;
this is the role of the memory consistency model. We will use the strictest memory consistency
model, sequential consistency, as an example to explain this in the following.
Scheurich and Dubois[39] described a sufficient condition for sequential consistency as follows:
1. All processors issue memory accesses in program order.
2. If a processor issues a STORE, then the processor may not issue another memory access until
the value written has become accessible by all other processors.
3. If a processor issues a LOAD, then the processor may not issue another memory access until
the value which is to be read has both been bound to the LOAD operation and become accessible
to all other processors.
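The following small enumeration (our own illustration, not from [39]) makes the restriction concrete: for a Dekker-style litmus test, no interleaving that respects both program orders can produce r1 = r2 = 0, so a sequentially consistent memory never returns that outcome.

```python
from itertools import permutations

# T0: x = 1; r1 = y          T1: y = 1; r2 = x
T0 = [("store", "x", 1), ("load", "y", "r1")]
T1 = [("store", "y", 1), ("load", "x", "r2")]

def outcomes(progs):
    """All (r1, r2) results over interleavings that keep program order."""
    results = set()
    ops = [(t, i) for t, prog in enumerate(progs) for i in range(len(prog))]
    for perm in permutations(ops):
        if any(perm.index((t, i)) > perm.index((t, i + 1))
               for t, prog in enumerate(progs) for i in range(len(prog) - 1)):
            continue                      # violates some program order
        mem, regs = {"x": 0, "y": 0}, {}
        for t, i in perm:
            kind, addr, arg = progs[t][i]
            if kind == "store":
                mem[addr] = arg           # atomic: visible to all at once
            else:
                regs[arg] = mem[addr]
        results.add((regs["r1"], regs["r2"]))
    return results

print(sorted(outcomes([T0, T1])))   # [(0, 1), (1, 0), (1, 1)] -- never (0, 0)
```

A machine with store buffers that lets the loads bypass the buffered stores can return (0, 0), which is exactly what the relaxed models discussed below permit.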
From the above two sufficient conditions for coherence and consistency, we see that coherence and
consistency are tightly related. The conditions for coherence are a subset of the conditions for a
memory consistency model. The former considers only the different events from different processors
to the same location, while a consistency model not only considers the different events to the same
location but also imposes constraints on the ordering of events within each processor, that is, the
execution order in each processor. As such, the combination of cache coherence and event ordering
determines the behavior of the whole memory system. Based on our understanding, we propose a
general definition for the memory consistency model to reveal the relationship between them.
memory consistency model = coherence protocol + event ordering in each processor
Some researchers use event ordering and memory consistency model interchangeably. However, we
believe that our new understanding describes memory system behavior more accurately.
In our new definition, the role of coherence is to ensure a coherent view of a given
memory location by multiple processors. Ordering of events describes the sequence in which the
memory events issued by each processor take place. Here, we use memory events to represent the
read and store operations on memory[17]. The memory consistency model is the logical sum of the
coherence protocol and the event ordering in each processor. For example, sequential consistency,
defined by Lamport in 1979[32], can be viewed as two conditions[2]:
1. all memory accesses appear to execute atomically in some total order.
2. all memory accesses of each processor appear to execute in an order specified by its program.
The first condition, atomicity, is ensured by the coherence protocol. There are two basic kinds of
cache coherence protocols: write-invalidate and write-update[41]. A write-update protocol ensures
that all processors that keep copies see the new value simultaneously with the modifying processor,
while an invalidate protocol reaches the coherent view by invalidating all other copies.
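A toy model (ours; the class and state names are illustrative, not any real machine's protocol) of a write-invalidate protocol for a single block with states I(nvalid), S(hared), and M(odified):

```python
class Bus:
    """One cached block shared by n processors under write-invalidate."""

    def __init__(self, n):
        self.caches = [{"state": "I", "data": None} for _ in range(n)]
        self.memory = 0

    def read(self, p):
        c = self.caches[p]
        if c["state"] == "I":                 # miss: fetch a shared copy
            self.memory = self._flush_owner()
            c["state"], c["data"] = "S", self.memory
        return c["data"]

    def write(self, p, value):
        # gain exclusive ownership by invalidating every other copy
        for q, c in enumerate(self.caches):
            if q != p:
                if c["state"] == "M":
                    self.memory = c["data"]   # the old owner writes back
                c["state"] = "I"
        self.caches[p].update(state="M", data=value)

    def _flush_owner(self):
        for c in self.caches:
            if c["state"] == "M":             # downgrade the owner
                c["state"] = "S"
                return c["data"]
        return self.memory

bus = Bus(3)
bus.write(0, 42)               # P0 owns the block in state M
print(bus.read(1))             # P1's miss pulls the value: 42
print(bus.caches[0]["state"])  # owner downgraded to S
```

A write-update variant would instead push the new value into every valid copy rather than invalidating it.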
The second condition, program order, is determined by the execution order of events. If we
restrict all the events in one processor to be issued and performed in program order, this condition
will be satisfied. However, this constraint is too strict to allow good performance. In particular,
many hardware and compiler optimizations, such as write buffers, lockup-free caches, non-binding
prefetching, register allocation, dynamic scheduling, and multiple issue, can't be utilized under this
memory consistency model.
These two conditions for the sequential consistency model show that there are two directions in
which to relax this strictest consistency model. In hardware DSM systems, over the years, several
consistency models have been proposed to exploit hardware and compiler optimization techniques,
such as processor consistency, weak ordering, weak ordering (new definition), release consistency,
and the DRF1 and PLpc models[20]. Almost all the hardware and compiler optimization techniques
can be exploited under the PLpc memory consistency model. In these relaxed consistency models,
however, only the event ordering in each processor is relaxed, such as the W→R, W→W, R→R, and
R→W orderings between successive synchronization operations, while the atomicity demand changes
little. For example, in RC, all the operations between two synchronization operations can be
completed out of order if no data dependence is violated. However, atomicity is maintained by a
write-invalidate protocol similar to that in the sequential consistency model.
In software DSM systems, the communication overhead is so expensive that the cost of maintaining
atomicity is higher than in hardware DSM systems. Therefore, reducing the frequency of
communication and the message traffic is more important in software DSM systems than in hardware
DSM systems. Furthermore, since the coherence unit in software DSM systems is a page or
larger[9], the false sharing problem is more serious than in hardware DSM systems. As such, how
to eliminate false sharing effectively is also an important issue in software DSM systems.
Before discussing the solution for false sharing, we first give a clear description of false sharing.
False sharing occurs when two processors access logically unrelated data that happen to fall
on the same page and at least one of them writes the data, causing the coherence mechanisms to
ping-pong the page between these processors. It has two categories: write-write false sharing and
write-read false sharing. False sharing entails much useless communication among processors[25].
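A back-of-the-envelope sketch (ours) of write-write false sharing under a strict single-writer protocol: two processors write logically unrelated offsets of the same page, and every access forces a page transfer.

```python
def single_writer_transfers(accesses):
    """Count page transfers for a sequence of (processor, offset) writes
    to one page when only a single writer may hold the page at a time."""
    owner, transfers = None, 0
    for proc, _offset in accesses:
        if proc != owner:          # the other copy must be invalidated first
            transfers += 1
            owner = proc
    return transfers

# P0 writes word 0, P1 writes word 100, strictly alternating:
pattern = [(i % 2, 0 if i % 2 == 0 else 100) for i in range(10)]
print(single_writer_transfers(pattern))   # 10 transfers for 10 writes
```

Although the two processors never touch the same word, the page ping-pongs on every write; a multiple writer protocol would let both write local copies and merge the changes at the next synchronization.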
In order to reduce the communication overhead in software DSM systems, several new memory
consistency models have been proposed. The representative consistency models for software
DSM systems include Eager Release Consistency (ERC) in Munin[13], Lazy Release Consistency
(LRC) in TreadMarks[27], Entry Consistency (EC) in Midway[12], Automatic Update Release
Consistency (AURC) in SHRIMP[23], Scope Consistency (ScC)[24], Home-based Lazy Release
Consistency (HLRC)[43], Single-Writer Lazy Release Consistency (SW-LRC)[26], Message-Driven
Release Consistency (MDRC) in CarlOS[29], and Affinity Entry Consistency[6] in NCP2.
Among these consistency models, the ordering of events in each processor is similar from one
model to another; in other words, the main difference among these models is the method used to
maintain coherence. This involves two aspects: (1) which coherence protocol is used; (2) how this
protocol is implemented. All the coherence protocols adopted in software DSM systems are more
relaxed than the strict single writer protocol used in hardware DSM systems. Relaxed coherence
protocols include the write-shared (multiple writer) protocol[13] and the relaxed single-writer (or
delayed invalidate) protocol[26].¹
For example, both TreadMarks and Munin use the multiple writer protocol, while CVM adopts
the relaxed single writer protocol.
Among the memory consistency models described above, the main difference lies in when an
updated value made by one processor becomes available to other processors. For example, in LRC,
when another processor executes an acquire operation, it will see all the updated values modified
before it in the happen-before-1 partial order. However, in EC and ScC, only the shared data
associated with the same synchronization object are made available when a processor acquires that
synchronization object.
Based on the analysis of hardware and software DSM systems in this section, we see that the
coherence protocol and the event ordering in each processor are two important parts of a memory
consistency model. In particular, in software DSM systems the difference among consistency models
depends greatly on the corresponding coherence protocols. We will analyze their relationship in
detail in the following section.
¹In order to differentiate this improved single writer protocol from the strict single writer protocol, we use
"relaxed single-writer" to represent it here. The operation of the relaxed single writer protocol is shown in the next
section.
3 Interaction between Coherence and Consistency in LRC
In this section, we take lazy release consistency as an example to analyze the relationship between
coherence protocol and consistency model in detail.
3.1 Lazy Release Consistency
LRC[27] is an all-software, page-based, write-invalidate based multiple writer memory consistency
model. It has been implemented in the TreadMarks system[19]. Since the objective of this section
is to analyze the interaction between the coherence protocol and the consistency model, we describe
the key idea of lazy release consistency in this subsection; the multiple writer protocol is described
in the following subsection. LRC is a lazy implementation of RC[21] or ERC[13]. It delays the
propagation of modifications to a processor until that processor executes an acquire operation.
The main idea of the consistency model is to use timestamps, or intervals, to establish the happen-
before-1 ordering between causally related events. Local intervals are delimited by synchronization
events, such as LOCKs and BARRIERs. LRC uses the happen-before-1 partial order to maintain
the ordering of events. The happen-before-1 partial order is the union of the total processor order
of the memory accesses on each individual processor and the partial order of release-acquire pairs.
Vector timestamps are used to represent the partial order. When a processor executes an acquire,
it sends its current vector timestamp in the acquire message. The last releaser then piggybacks on
its response a set of write notices. A write notice includes the processor id, the vector timestamp
for the interval during which the page was modified, and the modified page number. A faulting
processor uses write notices to locate and apply the modifications required to update its copy of the
page. Both write-invalidate and write-update protocols can be used to implement the above
algorithm. In the following discussion, we assume that the write-invalidate protocol is used.
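The interval bookkeeping can be sketched with ordinary vector clocks (our own illustration; the function names are assumptions, not TreadMarks' actual interface):

```python
# A vector timestamp maps processor id -> index of its last known interval.

def release(releaser_vc, pid):
    """Close the current interval on the releasing processor."""
    vc = dict(releaser_vc)
    vc[pid] += 1
    return vc

def acquire(acquirer_vc, releaser_vc):
    """On acquire, merge the releaser's knowledge (happen-before-1)."""
    return {p: max(acquirer_vc[p], releaser_vc[p]) for p in acquirer_vc}

def missing_intervals(acquirer_vc, releaser_vc):
    """Intervals the releaser has seen but the acquirer has not: exactly
    the write notices that must be piggybacked on the lock grant."""
    return {p: range(acquirer_vc[p] + 1, releaser_vc[p] + 1)
            for p in acquirer_vc if releaser_vc[p] > acquirer_vc[p]}

rel = release({0: 1, 1: 0}, 0)                # P0 closes an interval
print(missing_intervals({0: 0, 1: 1}, rel))   # {0: range(1, 3)}
```

Here missing_intervals plays the role of the write notice set: the acquirer invalidates the pages named in those intervals and later fetches diffs on a fault.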
3.2 Multiple Writer Protocol
The Multiple Writer (MW) protocol was developed to address the write-write false sharing problem.
With a MW protocol, two or more processors can modify their local copies of a shared page
simultaneously. Their modifications are merged at the next synchronization operation, and therefore
the effect of false sharing is reduced.
There are two key issues in implementing a MW protocol: write trapping and write collection.
Write trapping is the method used to detect which shared memory locations have been changed;
twinning and software dirty bits[12] are the two general methods used in software DSM systems.
Write collection refers to the mechanism used for determining what modified data needs to be
propagated to the acquirer. Timestamps and diffing are the methods used in Midway and TreadMarks,
respectively. For more details, please refer to [8]. In TreadMarks, the twinning and diffing
mechanisms are adopted.
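A minimal twin/diff sketch (ours, not TreadMarks code; a "page" is just a bytearray here and a diff a list of changed (offset, byte) pairs):

```python
PAGE_SIZE = 16  # tiny page for illustration

def make_twin(page):
    return bytes(page)          # pristine copy taken at the first write fault

def make_diff(twin, page):
    """Write collection by comparison: the bytes that differ from the twin."""
    return [(i, page[i]) for i in range(len(page)) if page[i] != twin[i]]

def apply_diff(page, diff):
    for offset, value in diff:  # merge one writer's changes into a copy
        page[offset] = value

page = bytearray(PAGE_SIZE)
twin = make_twin(page)
page[3], page[9] = 7, 255       # local writes to falsely-shared words
diff = make_diff(twin, page)
print(diff)                     # [(3, 7), (9, 255)]

other = bytearray(PAGE_SIZE)
other[0] = 1                    # a concurrent writer's unrelated change
apply_diff(other, diff)         # merging diffs tolerates write-write false sharing
```

Transferring only the diff, rather than the whole page, is also what reduces the data traffic mentioned in the benefits below.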
Although the multiple writer protocol was first introduced by Carter in Munin, this protocol is
not a completely new idea: it is derived from the idea of delayed update in [11], where Bennett et
al. pointed out:
In order to avoid unnecessary synchronization that is not required by the pro-
gram's semantics, when a thread modifies a shared object, we can delay sending
the update to remote copies of the object until remote threads could otherwise
indirectly detect that the object has been modified.
This original description of the delayed update protocol is quite vague. For example, what does
"indirectly" mean? Can two or more processors modify the same unit simultaneously? Different
interpretations of this idea yield different coherence protocols. For example, if two or more
processors are allowed to modify the same page simultaneously, the delayed update protocol evolves
into the multiple writer protocol. If at any time only one writer is allowed to exist, and multiple
readers can coexist with the single writer, the delayed update protocol evolves into the relaxed
single writer protocol. The latter protocol has been proved very useful for some applications[26].
The implementations of these protocols are shown in the next subsection.
The multiple writer protocol has several benefits:

• Since multiple nodes can update the same page simultaneously, the protocol greatly reduces the protocol overhead due to false sharing.

• The protocol reduces the communication traffic due to data transfer: instead of transferring the whole page each time, it transfers diffs only.

• The protocol further relaxes the ordering of events in an interval, since no ownership is required before a write operation.
3.3 Interaction between LRC and its corresponding Protocol
According to the above description of LRC and the multiple writer protocol, we find that the two are closely related. We will analyze the relationship between them step by step, from the strict single writer protocol to the most relaxed multiple writer protocol.
Suppose LRC uses the strict single-writer single-reader protocol adopted in release consistency, i.e., before a processor writes a cache block it must obtain the ownership first, and when other processors receive an invalidate message they must invalidate their local pages immediately. Then, at a release operation, no write notices need to be recorded, because all other processors already know about the modifications executed by the releasing processor. When another processor subsequently executes an acquire operation, no write notices are needed either, since all the pages that should be invalidated under lazy release consistency have already been invalidated. As such, the lazy property cannot be exploited at all. In this case, the write notices and vector timestamps are only used to help faulting processors locate a valid page, just like the role of the "home" in a CC-NUMA machine. Furthermore, write-write and write-read false sharing cannot be resolved. In this case, the performance of LRC will be similar to that of RC.
If the relaxed single writer protocol (i.e., the single-writer multiple-reader protocol used in [26]) is adopted, a processor that receives invalidate messages does not invalidate those pages immediately; on the contrary, it keeps its own copies appearing valid to itself until the next acquire operation.
2 In this paper, we assume that each processor has only one thread; therefore, we use thread, process, and processor interchangeably.
In this case, write-read false sharing is partially eliminated, while write-write false sharing remains. In order to depict this protocol accurately, we must introduce a new state, stale, which represents the state of a page during the interval between receiving an invalidate message for this page and the next acquire operation. The messages received at an acquire operation under this coherence protocol fall into 3 categories: (1) necessary messages, (2) unnecessary messages entailed by false sharing, and (3) unrelated messages about other pages which will not be used by the acquiring processor. For example, in Fig. 1, p1 and p2 are two processors and x0, x1, x2, x3, x4 are 5 shared data items, where x0 and x2 are allocated on page 0, x1 and x3 are allocated on page 1, and x4 is allocated on page 2 alone. L0 and L2 are 2 locks used by users to create critical sections protecting the use of the shared data. With the relaxed single writer protocol, when p1 wants to write x1, it must obtain the ownership before the write operation, and the procedure of invalidating other copies can be overlapped with the operations following this write. When p2 receives the invalidate message, it does not invalidate page 1 immediately, keeping it apparently valid until the next lock acquire operation; therefore, when p2 reads x3, no page fault occurs, and this case of write-read false sharing is eliminated. However, when p2 wants to read x3 after an acquire operation, it will cause a page fault. Therefore, the write-read false sharing problem is not eliminated completely in the relaxed single writer protocol. On the other hand, when p2 writes x2, it must cause a write fault, since only one writer is allowed to write a page at a given time. As such, the write-write false sharing problem remains. In Fig. 1, messages 1 and 2 belong to both the first and the second category, message 3 is an unrelated message, and messages 4 and 5 are unnecessary messages entailed by false sharing.
In the above example, if the strict single-writer protocol were used, page 1 in p2 would be invalidated immediately, and the first read of x3 would cause a page fault due to false sharing. So we find that the relaxed single writer protocol is better than the strict single writer protocol. On the other hand, in the relaxed single writer protocol, when a page fault occurs the whole page is transferred, which results in great message traffic. In order to solve the false sharing problem completely and to reduce this large communication traffic, the multiple writer protocol was proposed and is now widely used in software DSM systems.
In the traditional MW protocol, however, when the acquiring processor receives the write notices, it invalidates the corresponding pages immediately, which results in the same write-read false sharing as entailed by the strict single writer protocol. Therefore, we improve the multiple writer protocol by combining the traditional MW protocol with the relaxed single writer protocol. When this improved multiple writer coherence protocol is adopted, the advantages of LRC can be exploited completely: almost all false sharing effects are eliminated. Furthermore, with the improved MW protocol, the write operations within one interval can proceed without waiting for the ownership. In this case, the messages received at an acquire operation include: (1) necessary messages (such as write notices), and (2) some unrelated messages about other pages which will not be used by the acquiring processor, such as the invalidate message for page 2 in Fig. 1. These unrelated messages are dealt with in Entry Consistency and in our new NLLS consistency model [40]; this is beyond the scope of this paper. In the multiple writer protocol, since two or more writers can modify the same page simultaneously, the state of each page is more complex than in the above two protocols. For example, when two writers write the same page, which one is the owner? When a third processor wants to write the page, whom should it inform? The diffing and twinning mechanisms only tell us the implementation method for a certain protocol; they do not tell us how to maintain the state transitions in the multiple writer protocol. We will describe the state transitions in the next subsection.
From the above analysis, we draw the conclusions shown in Table 1. For completeness, we list the strictest model, sequential consistency, as the base for comparison.

[Figure 1: An example of the write-invalidate-based relaxed single writer protocol. The figure shows pages 0, 1 and 2, the ownership request and grant for p1's write of x1, the invalidate messages for pages 0, 1 and 2, the release(L0) and acquire(L2) operations, and where page faults do and do not occur at p2.]
Memory consistency model — corresponding coherence protocol:

Sequential consistency: (1) requires all memory operations to be atomic; (2) invalidate messages are sent and received immediately; (3) the processor stalls until the acknowledgement is received.

Release consistency: (1) the atomicity demand is relaxed, but the ownership must be obtained before a write; (2) invalidate messages are sent and received immediately; (3) the receipt of acknowledgements can be delayed until the following release synchronization operation.

LRC (single writer): (1) the atomicity demand is relaxed, but the ownership must be obtained before a write; (2) invalidate messages are sent immediately, but the receiver delays accepting them until the next acquire synchronization operation; (3) the receipt of acknowledgements can be delayed until the following release synchronization operation.

LRC (multiple writer): (1) the atomicity demand is relaxed further; no ownership is needed before a write (if the processor already has a copy of the page); (2) both the sending and the receipt of invalidate messages can be delayed; (3) the receipt of acknowledgements can be delayed until the following release synchronization operation.

Table 1: The relationship between memory consistency model and coherence protocol.
According to the performance comparisons presented by other researchers [8], [36], [26] and our analysis shown in Table 1, we find that the advantages of a consistency model depend closely on its corresponding coherence protocol: the more relaxed the consistency model is, the more relaxed the coherence protocol needed to support it. This conclusion is very useful when designing a new consistency model. For example, with the support of the MW protocol, scope consistency [24] is more relaxed than LRC. It combines the advantages of EC and LRC and distinguishes different locks, i.e., the acquiring processor obtains the modified data only from the releasers that use the same lock. Therefore, many of the useless messages present in the above three protocols can be reduced greatly 3.
3.4 The State Transition for the Invalidate-based Multiple Writer Protocol
As described above, in order to depict the coherence transitions for the relaxed single writer protocol and the improved multiple writer protocol, we must add a new state named stale, which means that this coherence unit has been modified by other processors but still appears valid to this processor itself. Fig. 2 shows the state transition graph for the write-invalidate-based multiple writer protocol for LRC. As far as we are aware, this is the first time a whole state transition graph for the multiple writer protocol has been shown.
[Figure 2: The state transition graph for the write-invalidate-based multiple writer protocol for LRC. States: Exclusive, Shared, Invalid and Stale. Legend: Ri/Wi denote a read/write from the local processor; Rj/Wj a read/write from a remote processor; Acquire(l)/Release(l) acquire and release lock l. A local write from Shared creates a twin and keeps write notices; a remote read of an Exclusive page creates DIFFi and sends it to the exclusive node.]
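Under our reading of Fig. 2 (the exact transition set below is inferred from the figure's legend and is therefore an assumption), the four-state protocol can be encoded as a simple transition table:

```python
# States: Exclusive, Shared, Invalid, Stale; events follow the legend of Fig. 2.
TRANSITIONS = {
    ("Invalid", "Ri"): "Shared",        # read fault: fetch a valid copy
    ("Invalid", "Wi"): "Exclusive",     # write fault: fetch and write
    ("Shared", "Ri"): "Shared",
    ("Shared", "Wi"): "Exclusive",      # create twin, keep write notices
    ("Shared", "Wj"): "Stale",          # remote write: page only appears valid
    ("Exclusive", "Ri"): "Exclusive",
    ("Exclusive", "Wi"): "Exclusive",
    ("Exclusive", "Rj"): "Shared",      # create DIFFi and send it
    ("Stale", "Ri"): "Stale",           # stale copy readable until next acquire
    ("Stale", "Acquire"): "Invalid",    # delayed invalidation applied at acquire
}

def step(state, event):
    """Return the next page state; unlisted pairs leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)
```

The key difference from a strict protocol is visible in the last two entries: a remote write only marks the page Stale, and the actual invalidation is deferred to the next acquire.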
3 Although in [24] the authors do not state that the multiple writer protocol is adopted, from the examples shown in that paper we deduce that scope consistency uses the multiple writer protocol too.
4 Related Work
Dubois et al. in [17] analyzed the relationship between synchronization, coherence and event ordering. Although they separated the concepts of coherence and event ordering, they did not present the relationship between them, and they equated event ordering with the memory consistency model, which differs from our viewpoint. They defined strong ordering and weak ordering in that paper and claimed that strong ordering is the same as sequential consistency. In fact, this viewpoint is wrong because the cache coherence protocol is not considered; Adve and Hill gave an example in [4] to demonstrate this.
In 1990, Stenstrom presented an excellent survey of cache coherence protocols [41]. Adve and Gharachorloo present an extensive survey of memory consistency models [2]. However, both of them considered only hardware DSM systems. In our paper, we consider hardware and software DSM systems together, and study the relationship between coherence protocol and memory consistency model.
Recently, Zhou et al. discussed the relationship between relaxed consistency models and coherence granularity in DSM systems [44]. However, they only consider the granularity of the coherence unit, not the coherence protocol itself.
Dubois et al. in [18] proposed the delayed consistency model for a release consistent system, in which an invalidation is buffered at the receiving processor until a subsequent acquire is executed by that processor. Delayed consistency is a coherence protocol that comes in two variants: delayed receive, and delayed send combined with delayed receive. Delayed receive is the same as the relaxed single writer protocol, while delayed send plus delayed receive is similar to the multiple writer protocol. They presented the state transitions for hardware shared memory in detail; in this paper we describe the state transitions for software DSM systems for the first time.
5 Conclusion and Future Work
In this paper, starting with a classical coherent memory scheme, we point out that memory coherence includes two issues: the coherence protocol and the event ordering in each processor. Based on a clear description of these two concepts, we give a general definition of the memory consistency model as the logical sum of the coherence protocol and the event ordering in each processor. Second, we analyze the consistency models used in hardware DSM systems and software DSM systems under our new definition. We point out that in hardware DSM systems the relaxed consistency models are devoted to relaxing the ordering of events, such as W→R, W→W, R→R and R→W operations, in order to exploit hardware and compiler optimization techniques, while the coherence protocol has not made much progress over the past years. In software DSM systems, the main obstacles to performance are the high communication overhead and the useless coherence-related messages entailed by the large coherence granularity; therefore, the main purpose of a consistency model in a software DSM system is to reduce the number of messages and the message traffic, while the event ordering in each processor of all the new consistency models is similar to that of release consistency.
Third, taking LRC as an example, we analyze the relationship between coherence protocol and consistency model in software DSM systems, and conclude that these two issues are closely related: the more relaxed the consistency model is, the more relaxed the coherence protocol needed to support it. This conclusion is very useful when designing a new consistency model.
Fourth, we improve the traditional multiple writer protocol by adding a new state, and describe the state transition graph for the invalidate-based multiple writer protocol for the first time.
Finally, based on the analysis in this paper, we propose the following main directions for memory consistency model research in the future:

For hardware DSM systems:

• Relaxing the coherence protocol further, such as allowing multiple writers in hardware-coherent DSM systems. This is possible because of the great progress of semiconductor technology [37].

• Considering hybrid hierarchical DSM MPP systems, where each node uses hardware to implement DSM, while the shared memory abstraction among nodes is supported by software DSM.

For software DSM systems:

• Relaxing the coherence protocol further.

• Using hybrid coherence protocols for different shared data, such as shared data protected by locks and shared data protected by barriers.

• Integrating more memory consistency models together to support different applications.

• Considering the interaction with other latency tolerance techniques, such as multithreading and prefetching.
References
A Comparison of Entry Consistency and Lazy Release Consistency Implementations.
Shared Memory Consistency Models: A Tutorial.
Weak Ordering:A new definition.
Implementing Sequential Consistency In Cache Based Systems.
The MIT Alewife Machine: Architecture and Performance.
The Affinity Entry Consistency Protocol.
TreadMarks: Shared Memory Computing on Networks of Workstations.
Software DSM Protocols that Adapt between Single Writer and Multiple Writer.
Tradeoffs between False Sharing and Aggregation in Software Distributed Shared Memory.
Larry Rudolph and Arvind.
Munin: Distributed Shared Memory Based on Type-Specific Memory Coherence
The Midway Distributed Shared Memory System.
Implementation and Performance of Munin.
A New Solution to Coherence Problems in Multicache Systems.
Parallel Computer Architecture (alpha version).
Memory Access Buffering in Multiprocessors.
Jin Chin Wang
TreadMarks Distributed Shared Memory on standard workstations and operating systems.
Programming for Different Memory Models.
Memory consistency and event ordering in scalable shared memory multiprocessors.
Cache consistency and sequential consistency.
Improving Release-Consistent Shared Virtual Memory using Automatic Update
Understanding Application performance on Shared Virtual Memory systems.
The Relative Importance of Concurrent Writes and Weak Consistency Mod- els
Lazy Release Consistency for software Distributed Shared Memory.
Portable Distributed Shared Memory on UNIX.
Lazy Release Consistency for Hardware-Coherent Multiprocessor
The Stanford FLASH Multiprocessor.
How to Make a Multiprocessors Computer That Correctly Executes Multiprocessor Programs.
IVY:A Shared Virtual Memory System for Parallel Computing.
The Standard Dash Multiprocessor.
ADSM: A hybrid DSM Protocol that Efficiently Adapts to Sharing Patterns.
An Evaluation of Memory Consistency Models for Shared Memory Systems with ILP Processors.
Intelligent RAM (IRAM): Chips that Remember and Compute Revised
Computer Architecture: A Quantitative Approach.
Correct Memory Operation of Cache-Based Mul- tiprocessors
Memory Consistency Models for Distributed Shared Memory Systems
A Survey of Cache Coherence Schemes for Multiprocessors.
The SPLASH-2 Programs: Characterization and Methodological Considerations
Performance Evaluation of Two Home-based Lazy Release Consistency Protocols for Shared virtual Memory Systems
Analysis and Reduction for Angle Calculation Using the CORDIC Algorithm

Abstract — In this paper, we consider the errors appearing in angle computations with the CORDIC algorithm (circular and hyperbolic coordinate systems) using fixed-point arithmetic. We include errors arising not only from the finite number of iterations and the finite width of the data path, but also from the finite number of bits of the input. We show that this last contribution is significant when both operands are small and that the error is acceptable only if an input normalization stage is included, making unsatisfactory other previous proposals to reduce the error. We propose a method based on the prescaling of the input operands and a modified CORDIC recurrence and show that it is a suitable alternative to the input normalization with a smaller hardware cost. This solution can also be used in pipelined architectures with redundant carry-save arithmetic.

1 INTRODUCTION
The CORDIC (Coordinate Rotation DIgital Computer) algorithm is an iterative technique that permits computing several transcendental functions using only addition and shift operations [15] [16]. These functions include trigonometric functions, like sine, cosine, tangent, arctangent and the module of a vector, hyperbolic functions, like sinh, cosh, tanh and arctanh, and several arithmetic functions. Due to the simplicity of its hardware implementation, several signal processing algorithms, such as digital filters, orthogonal transforms and matrix factorizations, have been formulated with Cordic arithmetic and, therefore, several Cordic-based VLSI architectures have been proposed to solve the related signal processing problems [7].
In applications requiring high speed, pipelining and/or redundant arithmetic are introduced in the implementation of the Cordic algorithm, in such a way that each iteration of the algorithm is evaluated in a different stage of the pipeline, and carry-ripple adders are replaced by redundant adders, carry save (CS) or signed digit (SD), in which the carry propagation within the adders is eliminated [4] [11] [13] [14]. As the control of the algorithm requires the exact determination of the sign of some variable, a Modified Cordic algorithm which facilitates the sign determination with redundant arithmetic has been proposed [5].
The error analysis of the Cordic algorithm is fundamental for the efficient design of Cordic-based architectures. To achieve good performance, it is important to know the behaviour of the error and to take into account the effect that it will have on the hardware implementation, in order to obtain a specified accuracy. The different sources of error of the Cordic algorithm have been analyzed in detail [8] [9] [10].
In this paper we will focus on the errors appearing in the computation of the inverse tangent
function (angle calculation) with fixed-point arithmetic. This operation can be computed
with the Cordic algorithm in the vectoring mode and does not require the final scaling inherent
to the Cordic algorithm, since no scale factor is introduced in this mode of operation. The
angle calculation is useful in algorithms for matrix factorization like eigenvalue and singular
value decomposition (SVD) [6] and in the implementation of some digital filters [1] [12]. Matrix
factorization requires the angle computation in the circular coordinate system, while digital
filters may need angle computation in both circular and hyperbolic coordinates.
The rounding error that accumulates in the control coordinate through all the iterations of the Cordic algorithm may result in a large error in the evaluation of the inverse tangent. This error can be important in applications such as the Cordic-based SVD algorithm, where the inverse tangent function is evaluated in fixed-point format with unnormalized data [10], or in some kinds of filters where the hyperbolic inverse tangent needs to be computed [1] [12]. In [10] a technique called partial normalization is proposed to bound this error. However, this technique is hard to implement with redundant arithmetic and does not take into consideration the initial error due to the rounding of the input operands.
We perform an analysis of the error considering the following three components:

• Error due to the rounding of the input data

• Error due to the finite number of iterations

• Error due to the finite datapath width
We show that the first of these components, which has not been considered in previous
analyses, is significant when both input operands are small. As a consequence, the solution
proposed in [10] might not be appropriate.
Because of the above, for small input operands it seems that the only suitable solution is
to perform a normalization of the input operands which includes additional bits. We present a
solution that performs a prescaling of the operands and modifies the CORDIC recurrence [5].
We show that this solution is simpler than normalization and produces a smaller total error.
The Cordic algorithm consists in the rotation of a vector in a circular or hyperbolic coordinate system. The rotation is performed by decomposing the angle into a sequence of preselected elementary angles. Specifically, the basic Cordic iteration or microrotation is [15] [16]

  x_{i+1} = x_i − m σ_i 2^{−s(m,i)} y_i
  y_{i+1} = y_i + σ_i 2^{−s(m,i)} x_i        (1)
  z_{i+1} = z_i − σ_i α_{m,i}

where m is an integer taking values +1 or −1 for circular or hyperbolic coordinates, respectively, (x_i, y_i, z_i) are the variables x, y and z before the microrotation, σ_i ∈ {−1, 1}, and α_{m,i} = (1/√m) tan^{−1}(√m 2^{−s(m,i)}) is the microrotation angle. The shifting sequence s(m, i) in a circular coordinate system is s(1, i) = 0, 1, 2, ..., i, ...; in a hyperbolic coordinate system a sequence such as s(−1, i) = 1, 2, 3, 4, 4, 5, ... may be chosen, starting with s(−1, 1) = 1, where some microrotations (i = 4, 13, 40, ...) have to be repeated.
The Cordic iterations in coordinates x and y can be rewritten in matrix notation as

  v[i+1] = P_m(i) v[i]        (2)

being v[i] = (x_i, y_i)^T the input vector at iteration i, v[i+1] the output vector and

  P_m(i) = [ 1          −m σ_i 2^{−s(m,i)} ]
           [ σ_i 2^{−s(m,i)}        1      ]        (3)
Depending on parameter m, the Cordic algorithm may evaluate trigonometric or hyperbolic functions. In the vectoring mode of operation, selecting

  σ_i = −sign(x_i · y_i)        (4)

the y coordinate is reduced to zero. This permits evaluating the angle of a vector, which is accumulated in z_{n+1}, in such a way that z_{n+1} = z_0 + tan^{−1}(y_0/x_0) in circular coordinates and z_{n+1} = z_0 + tanh^{−1}(y_0/x_0) in hyperbolic coordinates.
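As a floating-point reference for the analysis that follows, the vectoring mode can be sketched as below (a model of the intermediate Cordic; the iteration count and the repeated hyperbolic indices 4, 13, 40 are the usual choices, and x > 0 is assumed):

```python
import math

def cordic_vectoring(x, y, m=1, n=24):
    """Drive y to zero with the microrotations of equation (1); the angle
    accumulates in z: atan(y/x) for m = 1, atanh(y/x) for m = -1."""
    if m == 1:
        shifts = list(range(n))                 # s(1, i) = 0, 1, 2, ...
    else:
        shifts, s = [], 1                       # s(-1, i) = 1, 2, 3, 4, 4, 5, ...
        while len(shifts) < n:
            shifts.append(s)
            if s in (4, 13, 40):                # repeated for convergence
                shifts.append(s)
            s += 1
    z = 0.0
    for s in shifts[:n]:
        t = 2.0 ** -s
        alpha = math.atan(t) if m == 1 else math.atanh(t)
        sigma = -1.0 if y > 0 else 1.0          # sigma_i = -sign(y_i) for x > 0
        x, y = x - m * sigma * t * y, y + sigma * t * x
        z -= sigma * alpha
    return z

a = cordic_vectoring(1.0, 0.5)                  # close to atan(0.5)
h = cordic_vectoring(1.0, 0.5, m=-1)            # close to atanh(0.5)
```

Note that the scale factor never enters the z recurrence, which is why angle calculation needs no final scaling.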
In the following we derive numerical bounds for the overall error in the computation of the inverse tangent function in the circular (m = 1) and hyperbolic (m = −1) coordinate systems. In these cases, the y coordinate is reduced to zero by choosing σ_i according to equation (4), and the computed angle, θ, is stored in variable z. We follow the notation introduced in [8] and [10].
The accuracy of the Cordic algorithm and its influence on processor design were first analyzed by Walther [16]. He concluded that, to obtain a precision of n bits, data paths of roughly n + log2(n) bits are needed. Later, more precise bounds for all the modes of the Cordic algorithm have been obtained [8] [9] [10].
Hu [8] performs a thorough analysis of several errors in all the modes of the Cordic
algorithm and derives numerical error bounds for each type of error. However, no results on the
inverse tangent and inverse hyperbolic tangent computations are included. In [10] it has been
shown that the numerical errors in inverse tangent computation using fixed point arithmetic
with small operands can be too large.
The overall error is split into two components, an approximation error and a rounding
error. The approximation error is due to the angle quantization. That is, the decomposition of
the angle into a finite number of microrotation angles produces an error in the representation
of the angle. On the other hand, the rounding error is caused by the finite word length of any
data path.
If we denote as ideal Cordic the mathematically defined Cordic algorithm, with infinite
precision in the data path and infinite number of microrotations, and as real Cordic the practical
implementation of the Cordic algorithm, with finite precision in the data path and finite number
of microrotations, it is possible to define an intermediate Cordic that uses infinite precision
arithmetic and the same number of microrotations as the real Cordic. Figure 1 shows the
relationship among the three definitions. In this way, the approximation error is the difference
between the output of the ideal Cordic and the output of the intermediate Cordic. The rounding
error is the difference between the output of the intermediate Cordic and that of the real Cordic.
Usually, the error due to the rounding of the input operands has been neglected. That is, it has been considered that there are no rounding errors in the input data. But often the real situation is that the input data is obtained from another hardware module or from the rounding of variables with larger precision. In such cases, there is an initial rounding error in the input data that has to be considered when deriving the bounds. This source of error can become very important in applications involving small inputs.
We consider fixed-point arithmetic with n fractional bits in the input operands and b fractional bits in the representation of the x, y and z data paths inside the Cordic unit, to obtain n-bit results. The wordlengths used are illustrated in figure 2. This way, the initial rounding error in each input operand is bounded by 2^{−(n+1)}, and the rounding error introduced in each coordinate in each microrotation is bounded by 2^{−(b+1)}.
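This wordlength model can be simulated by truncating every result to b fractional bits; the sketch below is a behavioral model of our own (the helper names and parameter values are assumptions, and truncation is used as a pessimistic rounding):

```python
import math

def q(v, bits):
    """Keep only 'bits' fractional bits of v (models a finite data path)."""
    return math.floor(v * 2 ** bits) / 2 ** bits

def cordic_atan_fixed(x0, y0, n=16, b=20, iters=20):
    """Circular vectoring with n-bit inputs and b-bit x, y, z data paths."""
    x, y, z = q(x0, n), q(y0, n), 0.0
    for i in range(iters):
        sigma = -1.0 if y > 0 else 1.0
        x, y = q(x - sigma * 2.0 ** -i * y, b), q(y + sigma * 2.0 ** -i * x, b)
        z = q(z - sigma * math.atan(2.0 ** -i), b)
    return z
```

With well-scaled inputs the computed angle stays close to atan(y0/x0); the analysis below quantifies how this degrades when |v[0]| is small.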
For circular coordinates, the value of coordinate y obtained with the intermediate Cordic after n + 1 microrotations is

  y_{n+1} = K_{n+1} |v[0]| sin(θ − α)        (5)

being α the angle calculated with the intermediate Cordic and K_{n+1} the scale factor. On the other hand, the angle θ computed by the ideal Cordic is given by

  θ = α + sin^{−1}( y_{n+1} / (K_{n+1} |v[0]|) )        (6)

Then, the approximation error is [10]

  |θ − α| ≤ |y_{n+1}| / |v[0]|        (7)

being |v[0]| the module of the input vector v[0]. On the other hand,

  |y_{n+1}| ≤ |ŷ_{n+1}| + f^c_{n+1}        (8)

where ŷ_{n+1} is the value of y_{n+1} with finite precision and f^c_{n+1} is the rounding error in coordinate y after n + 1 microrotations. For convergence,

  |ŷ_{n+1}| ≤ 2^{−n} |v[0]|        (9)

Then, considering the rounding error in the z datapath ((n + 1) 2^{−b}), we conclude that the angle error is bounded by

  E ≤ 2^{−n} + f^c_{n+1} / |v[0]| + (n + 1) 2^{−b}        (10)

Now, we have to find a bound for f^c_{n+1}. Following the derivation in [8], the rounding error is bounded by

  f^c_{n+1} ≤ ( ∏_{i=0}^{n} ‖P_1(i)‖ ) ‖e[0]‖ + Σ_{j=1}^{n} ( ∏_{i=j}^{n} ‖P_1(i)‖ ) ‖e‖ + ‖e‖        (11)

where P_1(i) is given by equation (3), ‖·‖ is the l_2 norm, defined as the square root of the largest eigenvalue of the matrix [8], e[0] is the rounding error vector of the input operands (‖e[0]‖ ≤ √2 · 2^{−(n+1)}) and e is the rounding error vector introduced in one microrotation (‖e‖ ≤ √2 · 2^{−(b+1)}). The rounding error is composed of two parts: the rounding error produced by the initial rounding error in the input operands (first term), and the rounding error accumulated in n + 1 microrotations considering no initial rounding error in the input data (second and third terms). Therefore,

  f^c_{n+1} ≤ √2 · 2^{−n} + 1.5 (n + 1) 2^{−b}        (12)

where √2 · 2^{−n} is due to the initial rounding error (first term of equation (11)) and 1.5 (n + 1) 2^{−b} is due to the accumulated rounding error (second and third terms of equation (11)).

Then, replacing equation (12) in equation (10), the bound for the overall error in the computation of the inverse tangent function is obtained,

  E ≤ 2^{−n} + ( √2 · 2^{−n} + 1.5 (n + 1) 2^{−b} ) / |v[0]| + (n + 1) 2^{−b}        (13)
Equation (13) shows that the error in the computation of the inverse tangent does not have a constant bound, as it depends on the norm of the input vector, |v[0]|. The error becomes larger the smaller the norm of the input vector, in such a way that when x_0 and y_0 are close to zero a large error results. Consequently, the error is not bounded if the input operands are not bounded away from zero.
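The 1/|v[0]| dependence of equation (13) can be observed directly by running a fixed-point model of the vectoring on a large and a small input vector with the same angle (a simulation sketch of ours; the truncation helper and the parameter values are assumptions):

```python
import math

def q(v, bits):
    """Truncate v to 'bits' fractional bits (models the finite data path)."""
    return math.floor(v * 2 ** bits) / 2 ** bits

def angle_error(x0, y0, b=16, iters=24):
    """Absolute error of the fixed-point circular vectoring versus atan2."""
    x, y, z = q(x0, b), q(y0, b), 0.0
    for i in range(iters):
        sigma = -1.0 if y > 0 else 1.0
        x, y = q(x - sigma * 2.0 ** -i * y, b), q(y + sigma * 2.0 ** -i * x, b)
        z = q(z - sigma * math.atan(2.0 ** -i), b)
    return abs(z - math.atan2(y0, x0))

big = angle_error(0.75, 0.25)              # |v[0]| about 0.79
small = angle_error(0.75 / 64, 0.25 / 64)  # same angle, |v[0]| about 0.012
```

In such runs the small-operand error is typically orders of magnitude larger than the large-operand one, matching the terms divided by |v[0]| in the bound.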
A similar equation may be obtained for the overall error of the hyperbolic vectoring, being m the number of Cordic iterations in hyperbolic coordinates [16] and θ_max the maximum input angle. In this case, the error becomes large when the hyperbolic norm of the input vector, (x_0^2 − y_0^2)^{1/2}, is small.
By means of input normalization the error can be bounded, since a lower bound on |v[0]| is enforced. However, the implementation of the normalization requires extra hardware to determine the amount by which the components of the vector can be shifted (two leading-zero encoders and a comparator) and barrel shifters to perform the shifts in a single cycle. Therefore, the normalization is very hardware consuming.
In [10] an alternative solution is proposed for circular vectoring, the partial normalization, which involves modifying the Cordic unit to include a normalization step integrated with the Cordic iterations. This solution bounds the error. Its main drawbacks are that the initial rounding error is not considered and that it is very difficult to implement efficiently with redundant arithmetic.
Figure 3 illustrates the partial normalization. As the input normalization is distributed along the Cordic iterations, the normalization is performed by introducing zeros, and not real bits, in the least significant positions of the input data. That is, when the input data is known with a precision larger than the precision used in the Cordic iterations, b bits, only b bits of the input are considered for normalization and the extra bits of the input are ignored, resulting in a large error not considered in the analysis performed in [10].
Figure 4 shows the error produced with the partial normalization when an initial rounding error is considered, the error with the conventional Cordic algorithm, and the error produced using the Cordic algorithm with input normalization; in this latter case, both normalization by introducing zeros and normalization with real bits are considered. The figure plots the error, expressed as the precision obtained in the angle, versus the module of the input vector. Although the error of the partial normalization is lower than the error produced with the standard Cordic algorithm, it is still very significant for small inputs, and higher than the error of the Cordic algorithm with input normalization, because the partial normalization is performed by introducing zeros.
On the other hand, the microrotations are modified to include the normalization, resulting in microrotations that include comparisons, to choose the maximum and minimum of two variables, and variable shifts, to perform the normalization. Therefore, this solution is not adequate for redundant arithmetic and/or pipelined architectures.
In the next sections, new approaches are developed to bound the error of the angle calculation. These approaches are suitable for word-serial or pipelined architectures with redundant or non-redundant arithmetic, and require little extra hardware.
4 MODIFIED CORDIC ALGORITHM
The introduction of redundant arithmetic in the angle computation with the Cordic algorithm has motivated the development of modified Cordic microrotations for the circular coordinate system [5], where the recurrences are transformed by making

  w_i = 2^i y_i        (15)

Then the microrotations (equation (1)) are transformed into

  x_{i+1} = x_i − δ_i 2^{−2i} w_i
  w_{i+1} = 2 (w_i + δ_i x_i)        (16)
  z_{i+1} = z_i − δ_i tan^{−1}(2^{−i})

and the selection of δ_i is performed according to the following equation

  δ_i = −sign(w_i)        (17)
Figure 5 illustrates the modified Cordic algorithm. The w coordinate is not reduced to zero, because of the left shift that is performed over this coordinate at each microrotation. This transformation facilitates the implementation of the Cordic algorithm with redundant arithmetic. In redundant arithmetic, CSA or SDA, the exact determination of the sign of coordinate y is time consuming because of the redundant representation of this coordinate. With this transformation it is possible to use an estimate of the redundant representation of w_i in the determination of δ_i, instead of its fully assimilated value. To make this possible it is necessary to use a redundant representation of the angle, allowing δ_i to take values in the set {-1, 0, 1}. The corresponding selection functions for carry-save or signed-digit representations can be found in [5]. Moreover, the hardware is reduced, since one of the shifters is eliminated.
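As a concrete illustration of the recurrences above, the following is a minimal floating-point sketch of vectoring with the transformed variable, assuming the transformation w_i = 2^i y_i and a non-redundant, sign-based selection of δ_i; the function name and sign conventions are ours, not taken from the paper:

```python
import math

def modified_cordic_vectoring(x, y, n=30):
    """Sketch of CORDIC vectoring on the transformed variable w_i = 2^i * y_i.
    For x > 0 it converges to atan(y/x); delta is chosen from the sign of w,
    and the only shift left in the datapath is the 2^(-2i) factor on x."""
    w = y          # w_0 = 2^0 * y_0
    z = 0.0        # accumulated angle
    for i in range(n):
        delta = 1.0 if w >= 0.0 else -1.0    # drive w (hence y) toward zero
        # x_{i+1} = x_i + delta * 2^(-2i) * w_i ;  w_{i+1} = 2 * (w_i - delta * x_i)
        x, w = x + delta * (2.0 ** (-2 * i)) * w, 2.0 * (w - delta * x)
        z += delta * math.atan(2.0 ** (-i))  # angle accumulation
    return z
```

Note how no right shift is applied to w itself, which is the property the error analysis below exploits.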
In this work, we propose this same change in the Cordic equations but with the aim of
reducing the errors in the computation of the inverse tangent function. In the following, we
obtain the error bounds of the angle computation with the modified Cordic algorithm, based
on variable w, in the circular and hyperbolic coordinates. We take into account the initial
rounding error in the input operands.
4.1 Angle Error Analysis in Circular Coordinate System
The modified Cordic iterations in the circular coordinate system, given by equation (16), can be rewritten as v[i+1] = P_{w,1}(i) v[i], where v[i] is the input vector, v[i+1] the output vector, and P_{w,1}(i) the transformation matrix of equation (19). Following a similar derivation as for the standard Cordic algorithm, it is possible to find a bound for the overall error. This way, the value of coordinate w after the n microrotations is given by equation (20), where α is the angle calculated with the intermediate Cordic. Then, equation (20) can be rewritten in terms of the angle θ of vector v[0], and the approximation error follows.
Then, considering the rounding error in the z datapath, the angle error with the modified algorithm in circular coordinates is bounded by equation (24), where f^c_{w,n+1} is the rounding error in coordinate w after n+1 iterations. Now, we have to find a bound for f^c_{w,n+1}. We can use equation (11), but considering that the transformation matrix is given by equation (19), which yields equation (25).
Moreover, the rounding error introduced in each microrotation of the modified Cordic algorithm is bounded; in particular, no rounding error is introduced in coordinate w, as it is multiplied by a factor of 2 at each microrotation and there are no right shifts.
To evaluate the l2 norm of the matrix product, ||prod_{i=j}^{n} P_{w,1}(i)||, the relation of equation (26) [2] has to be taken into account, which expresses the product in terms of a matrix A(n,j) (equation (27)) and matrices Q(i,j). Bounding the norms of these matrices, the rounding error in the modified Cordic algorithm is bounded by an expression whose first term is the contribution of the initial rounding error and whose second term is the accumulated error in the microrotations.
Replacing this result in equation (24), the error in the computation of the angle with the modified algorithm is obtained (equation (30)). As can be seen by comparing equations (13) and (30), there is a reduction in the error of the computation of the angle tan^{-1}(y/x) when using the modified Cordic iterations. This is due, mainly, to the elimination of the rounding error in the w coordinate, which results in a lower accumulated rounding error after n iterations. Figure 6a shows the errors observed for several initial values with the standard and the modified Cordic algorithms, for several different values of b. In both cases, the error becomes more important as |v[0]| decreases, although the error with the standard Cordic is always slightly larger than the error with the modified Cordic.
However, although the difference between the errors with the standard and the modified algorithms is small, the modified Cordic algorithm results in a simpler hardware implementation, since one shifter is eliminated, and is suitable for implementations with redundant arithmetic.
4.2 Angle Error Analysis in Hyperbolic Coordinate System
The analysis that has been developed in the previous section can be extended to the evaluation of the error in the angle computation in hyperbolic coordinates. Now, the function we are calculating is tanh^{-1}(y/x), and iteration i = 0 is not evaluated. Similarly to circular coordinates, we can define the modified Cordic algorithm by means of the transformation given in equation (15); the modified microrotation follows accordingly.
Similarly to the circular coordinate case, we find that the quantization error of the angle calculation is bounded once a bound is found for the accumulated rounding error, f^h_{w,n+1}. We use equation (25), but considering the hyperbolic transformation matrix P_{w,-1}(i). Similarly to the case of the circular coordinate system, to evaluate the norm of the product of these matrices we take into account equation (26), with matrix A(n,j) defined as in equation (27) and matrix Q(i,j) defined with entries q_0 and q_1 that depend on the type of microrotation performed: whether iteration i is a repetition, and whether i > j. The l2 norm of this matrix is obtained for each of these cases.
This way, the rounding error in hyperbolic coordinates is obtained as
Then, considering the rounding error of the z datapath, the overall error in the hyperbolic angle computation with the modified Cordic algorithm is bounded by equation (38). The errors of the standard and modified Cordic algorithms operating in hyperbolic vectoring are shown in figure 6b. As in the circular vectoring mode of operation, the error is reduced by means of the modified Cordic algorithm, as the rounding error in the w coordinate is eliminated.
On the other hand, from the observation of figure 6, we find that the error in hyperbolic vectoring is larger than in circular vectoring. The hyperbolic coordinate scale factor, K_h, is less than unity and reduces the operands, and the hyperbolic module of the vector decreases as the vector is mapped onto the x-axis. Therefore, the rounding error is more important in hyperbolic coordinates than in circular coordinates.
According to equation (38), the hyperbolic vectoring error would be large even with normalized inputs when x_1 and y_1 are similar. However, the range of convergence of the algorithm imposes a limit on the ratio of the input operands, considering the basic sequence of microrotations of the Cordic algorithm for hyperbolic coordinates [16]. Therefore, the error will only be significant when the input operands are small.
Although the modified Cordic algorithm reduces the overall error in the angle computation, both in the circular and in the hyperbolic coordinate system, this error is still unbounded and becomes very important when the module of the input vector is small: the smaller the module of the input vector, the larger the error. Therefore, it is necessary to develop solutions that efficiently reduce the error, with a low hardware and timing cost.
5 PRE-SCALING TECHNIQUE
The error in the angle calculation with the modified Cordic algorithm is still unbounded and large when the module of the input vector is small. The most natural solution to this problem is the normalization of the input operands; however, this requires two leading-zeros coders, a comparator and two barrel shifters. That is, it is very hardware-consuming.
We propose a solution for the minimization of the angle calculation error with the modified algorithm in circular and hyperbolic coordinates, both in non-redundant and redundant arithmetic. We develop a solution based on operand pre-scaling, where the error is bounded and close to the precision of the algorithm. The hardware implementation is simpler than the implementation of the partial normalization technique and the standard input normalization. Moreover, unlike the solution developed in [10], this solution may be applied to the Cordic algorithm with redundant arithmetic.
The angle calculation error is only important when the module of the input vector is
small. Therefore, if the input vector module is forced to take large values, the angle error is
reduced and the output of the Cordic algorithm is within the precision required. The operand
pre-scaling technique multiplies the module of the input vector by a constant factor, in such
a way that the resulting module is large enough to minimize the angle calculation error. The
pre-scaling is carried out before starting the Cordic iterations, thus being a preprocessing stage.
In the following we consider that, to perform the operand pre-scaling, b bits of the input operands are known, where b is the internal wordlength of the Cordic, and the p least significant bits are used in the pre-scaling to shift the input vectors to the right, as shown in figure 7. As has been shown before, with the solution presented in [10] it is not possible to perform the normalization considering additional bits of the input.
Taking into account these considerations, the pre-scaling multiplies the input variables (x_in, w_in) by a scaling factor M to obtain the inputs to the Cordic processor. Then the module of the input vector is multiplied by the scaling factor M. This way, the error in the computation of the inverse tangent function is reduced by an important factor, because we are reducing the rounding error and we are also imposing a lower bound of M · 2^-n on |v_in|.
The pre-scaling should only be carried out when the input vector module is small. Therefore, M is defined as M = 1 if |x_in| >= 2^-s or |w_in| >= 2^-s, and M = 2^s otherwise. That is, the pre-scaling is only performed if the module of the input vector is lower than 2^-s; in this case, we multiply the input vector by 2^s to obtain a large module. The value of s should be chosen in such a way that the error is minimized. That is, the error in the angle computation of an input vector with module |v_in| >= 2^-s must already be bounded and close to the precision of the algorithm.
This way, the maximum overall error for the angle computation in circular coordinates with pre-scaling is obtained by replacing |v[0]| in the error bound of equation (30), considering in this latter case that the minimum input vector module is M · 2^-n. This results in the bound of equation (43), with terms of the form K_1 · 2^-(16-s).
Figure 8 shows the error with several different pre-scalings, corresponding to no pre-scaling and to s = 5 and s = 9. The precision of the algorithm is 2^-16, and the internal precision of the modified Cordic architecture is b bits. Moreover, the input operands are rounded to b bits after the pre-scaling; that is, the pre-scaling is carried out considering that, at least, b bits of the input are known. While the module of the input vector is larger than 2^-s, the module is not modified and the error is the same as without pre-scaling. When the module is smaller than 2^-s, that is, when the pre-scaling is carried out, the error is significantly reduced, since the module has been enlarged with the pre-scaling. Similar results can be obtained for the hyperbolic coordinate system.
However, the error in the angle, although reduced, is still large for vectors with module |v_in| < 2^-2s. For example, when the pre-scaling is performed with s = 5, the error is large for input vectors with a small module, less than approximately 2^-10. On the other hand, when the scaling factor is large, some input vectors with module larger than 2^-s, and therefore not pre-scaled, present an important error. For example, with pre-scaling s = 9 the error is large when the module is between 2^-6 and 2^-9, since the pre-scaling has not been carried out.
From expression (43) and the illustration of figure 8, it can be seen that there is no single value of s that produces an acceptable error for the whole range of |v[0]|. This can be achieved by a double pre-scaling in which two different scaling factors are used, so that M = 1 if |x_in| >= 2^-s1 or |w_in| >= 2^-s1; M = 2^s1 if both operands are smaller than 2^-s1 but not both smaller than 2^-s2; and M = 2^s2 if both are smaller than 2^-s2, with s1 < s2.
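The selection of M just described can be sketched as follows. The function name is ours, and the default thresholds follow the example values suggested by the hardware discussion below (s1 = 5, s2 = 9):

```python
def double_prescale(x_in, w_in, s1=5, s2=9):
    """Sketch of the double pre-scaling: the vector is multiplied by
    M in {1, 2^s1, 2^s2} depending on how small its components are.
    Multiplying by a power of two is exact and leaves atan(w/x)
    unchanged while enlarging the module."""
    m = max(abs(x_in), abs(w_in))
    if m >= 2.0 ** (-s1):
        M = 1.0                # large enough: no pre-scaling
    elif m >= 2.0 ** (-s2):
        M = 2.0 ** s1          # moderately small: scale by 2^s1
    else:
        M = 2.0 ** s2          # very small: scale by 2^s2
    return x_in * M, w_in * M
```

Because both components are scaled by the same power of two, the angle computed by the subsequent Cordic iterations is unaffected; only the rounding behaviour improves.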
Figure 9 shows the error for circular and hyperbolic coordinates, respectively, considering double pre-scaling with s1 = 5 and s2 = 9. This way, the maximum error has been reduced to approximately 2^-15, which is very close to the precision of the algorithm, for every input vector module. Moreover, a common pre-scaling for hyperbolic and circular coordinates has been used, which facilitates the VLSI implementation of the modified Cordic architecture.
Figure 10 compares the error in the angle calculation using the partial normalization technique, described in [10], and the pre-scaling technique with double pre-scaling and modified equations. The error with the pre-scaling technique is always lower than the error with the partial normalization because, as said before, the partial normalization cannot use bits of the input corresponding to precision smaller than 2^-b to perform the normalization. The pre-scaling can be extended to other precisions, although for high precision, n >= 32, it is necessary to consider at least three values for the scaling factor.
5.1 Pre-Scaling Technique with Non-Redundant Arithmetic
If the Cordic algorithm is implemented using non-redundant arithmetic, the hardware implementation of the pre-scaling technique consists of comparing the module of the vector to the scaling factors and performing the corresponding shift of the input operands. The comparison of the module to the scaling factors is performed according to equation (44); that is, the two input coordinates are compared to 2^-s1 and 2^-s2, and the corresponding scaling factor is obtained.
As an example, the implementation of the double pre-scaling technique with s1 = 5 and s2 = 9 is shown in figure 11. As can be seen, it is necessary to include only a small number of control gates and two rows of multiplexers, which are in charge of selecting the scaled or unscaled operand. The first row of multiplexers performs the shift by 2^5, according to the result of checking bits 0 to 5 of x_in and w_in, in such a way that the shift is carried out if these bits are all equal to 1 or all equal to 0, that is, for negative or positive numbers of magnitude less than 2^-5. In the second pair of multiplexers an additional shift by 2^4 is performed if bits 6 to 9 are also all 1 or all 0, performing in this case a total shift of 2^9.
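The bit test behind this control logic (shift only when the leading bits are all copies of the sign bit) can be mirrored in software. The following sketch is ours, assuming the operand is held as a two's-complement fixed-point fraction in a Python integer:

```python
def small_magnitude(v, frac_bits, s):
    """Software mirror of the multiplexer control in the pre-scaling stage:
    v is a two's-complement fixed-point fraction with frac_bits fractional
    bits, stored in a Python int.  True when the sign bit and the s leading
    fraction bits agree, i.e. the magnitude is (up to the usual
    two's-complement asymmetry) below 2**-s, so the 2**s shift may apply."""
    top = v >> (frac_bits - s)   # sign bit plus the s leading fraction bits
    return top in (0, -1)        # all zeros or all ones => small operand
```

Python's arithmetic right shift on negative integers plays the role of sign extension here, so the same comparison covers positive and negative operands.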
5.2 Pre-Scaling Technique with Redundant Arithmetic
The implementation of the Cordic algorithm with redundant arithmetic requires the assimilation
of a certain number of bits of the w variable, to obtain an estimation of the sign, which is used
in the determination of the direction of each microrotation [5] [11] [13]. This can only be applied to normalized data, because the position of the most significant bit has to be known; if the data is not normalized, it is necessary to perform a previous normalization.
In [3] a Cordic architecture that avoids the assimilation and checking of a certain number
of bits is proposed. This architecture is based on the More-Significant-Bit-First calculation
of the absolute value of the y coordinate and in the detection of magnitude changes in this
calculation. The calculation of the direction of the microrotation requires the propagation of
a "carry" from the most-significant to the least-significant bit of the y coordinate. For this
reason additional registers are needed for the skew of the data. An important characteristic of
this architecture is that the input data do not need to be normalized. The angle calculation error is as described in the previous sections. Although this architecture uses the y coordinate as the control variable, it can be modified to use variable w instead, resulting in a high-speed implementation of the modified Cordic algorithm.
On the other hand, the pre-scaling can be performed with the scheme shown in figure
11. This way, the pre-scaling technique may be incorporated as a pre-processing stage of this
architecture to obtain a pipelined redundant arithmetic Cordic which permits the computation
of the inverse tangent function in circular and hyperbolic coordinates without errors over
unnormalized data, without any need for performing an initial normalization.
5.3 Evaluation of the Pre-Scaling Technique
The hardware complexity of the double pre-scaling is lower than the complexity of a standard
normalization stage. Actually, the pre-scaling technique can be considered as an incomplete
normalization, where only two different shifts are possible.
On the other hand, this hardware complexity is also lower than the partial normalization
[10]. The partial normalization implies the introduction of additional hardware into the Cordic
architecture. If the x-y registers are b bits long, it is necessary to introduce two normalization shift registers with a maximum shift of (m/2)^{1/2}. In addition, two leading-zero encoders which operate over the (m/2)^{1/2} most significant digits of coordinates x and y are required. An increase in the clock cycle is produced because of the inclusion of two normalization barrel shifters in the critical path of the word-serial implementation.
The solution we have presented, the pre-scaling technique, does not introduce additional
hardware in the Cordic processors, since the pre-scaling is a pre-processing stage. Moreover,
the utilization of variable w instead of coordinate y permits reducing the global hardware cost
of the Cordic architecture because one of the shifters is eliminated.
Finally, the pre-scaling technique can be used in pipelined processors with redundant arithmetic, whereas the partial normalization technique is restricted to a word-serial architecture with conventional non-redundant arithmetic. A pipelined architecture for the Cordic algorithm with the partial normalization technique would need two barrel shifters in each stage of the pipeline, in addition to the hardware implementation of the Cordic microrotations. The implementation of the partial normalization with redundant arithmetic is inefficient, since there are several comparisons involved.
6 CONCLUSIONS
A thorough analysis of the errors appearing in the calculation of the inverse tangent and the hyperbolic inverse tangent functions with the Cordic algorithm shows that large numerical errors can result when the inputs are unnormalized and the module of the input vector is small.
We have shown, by means of an error analysis, that the utilization of a modified Cordic algorithm based on the w iteration significantly reduces the numerical errors in the angle computation with circular and hyperbolic coordinates. In this analysis the rounding error in the input operands has been taken into account, an error source neglected in some previous analyses in the literature, and it has been shown that this error becomes very important in applications involving unnormalized inputs.
On the other hand, we propose a solution to the problem, operand pre-scaling, that results in a low-cost VLSI implementation. The operand pre-scaling technique consists of a pre-processing stage, before the Cordic microrotations, where the input vector is scaled by a constant factor if its module is small enough to result in a large error in the angle computation. This solution can be used in pipelined and word-serial Cordic processors with redundant or non-redundant arithmetic. Moreover, the pre-scaling can be chosen in such a way that the same scale factor is used for circular and hyperbolic coordinate systems.
--R
"A VLSI Speech Analysis Chip Set Based on Square-Root Normalized Ladder Forms"
"Unnormalized Fixed-Point Cordic Arithmetic for SVD Processors"
"High Speed Bit-Level Pipelined Architectures for Redundant Cordic Implementations"
"The Cordic Algorithm: New Results for Fast VLSI Implementation"
"Redundant and On-Line Cordic: Application to Matrix Triangularization and SVD"
"Redundant and On-Line Cordic for Unitary Transforma- tions"
"Cordic-Based VLSI Architectures for Digital Signal Processing"
"The Quantization Effects of the Cordic Algorithm"
"A Neglected Error Source in the Cordic Algorithm"
"Numerical Accuracy and Hardware Tradeoffs for Cordic Arithmetic for Special-Purpose Processors"
"Constant-Factor Redundant Cordic for Angle Calculation and Rotation"
"2-Fold Normalized Square-Root Schur RLS Adaptive Filter"
"Redundant Cordic Methods with a Constant Scale Factor for Sine and Cosine Computation"
"Low Latency Time Cordic Algorithms"
"The Cordic Trigonometric Computing Technique"
"A Unified Algorithm for Elementary Functions"
--TR
--CTR
Toms Lang , Elisardo Antelo, CORDIC Vectoring with Arbitrary Target Value, IEEE Transactions on Computers, v.47 n.7, p.736-749, July 1998 | error analysis;operand normalization;angle computation;redundant arithmetic;CORDIC algorithm |
271426 | Parallel Cluster Identification for Multidimensional Lattices. | AbstractThe cluster identification problem is a variant of connected component labeling that arises in cluster algorithms for spin models in statistical physics. We present a multidimensional version of Belkhale and Banerjee's Quad algorithm for connected component labeling on distributed memory parallel computers. Our extension abstracts away extraneous spatial connectivity information in more than two dimensions, simplifying implementation for higher dimensionality. We identify two types of locality present in cluster configurations, and present optimizations to exploit locality for better performance. Performance results from 2D, 3D, and 4D Ising model simulations with Swendson-Wang dynamics show that the optimizations improve performance by 20-80 percent. | Introduction
The cluster identification problem is a variant of connected component labeling that arises
in cluster algorithms for spin models in statistical mechanics. In these applications, the
graph to be labeled is a d-dimensional hypercubic lattice of variables called spins, with edges
(bonds) that may exist between nearest-neighbor spins. A cluster of spins is a set of spins
defined by the transitive closure of the relation "is a bond between". Cluster algorithms
require the lattice to be labeled such that any two spins have the same label if and only if
they belong to the same cluster.
Since the cluster identification step is often the bottleneck in cluster spin model applications, it is a candidate for parallelization. However, implementation on a distributed memory
parallel computer is problematic since clusters may span the entire spatial domain, requiring
global information propagation. Furthermore, cluster configurations may be highly irregular,
preventing a priori analysis of communication and computation patterns. Parallel algorithms
for cluster identification must overcome these difficulties to achieve good performance.
We have developed a multidimensional extension of Belkhale and Banerjee's Quad algorithm
[1, 2], a 2D connected component labeling algorithm developed for VLSI circuit
extraction on a hypercube multiprocessor. This paper presents performance results from
applying the algorithm to Ising model simulations with Swendson-Wang dynamics [3] in
2D, 3D, and 4D. Our extension abstracts away extraneous spatial information so that distributed
data structures are managed in a dimension-independent manner. This strategy
considerably simplifies implementation in more than two dimensions. To our knowledge,
this implementation is the first parallelization of cluster identification in 4D.
To improve performance, we identify two types of locality present in Swendson-Wang
cluster configurations and present optimizations to exploit each type of locality. The optimizations
work with an abstract representation of the spatial connectivity information, so
they are no more complicated to implement in d ? 2 dimensions than in 2D. Performance
results show that the optimizations effectively exploit cluster locality, and can improve performance
by 20-80% for the multidimensional Quad algorithm.
The remainder of this paper proceeds as follows. Section 2 discusses previous approaches
to the cluster identification problem on parallel computers. Section 3 describes the Ising
model and the Swendson-Wang dynamics. Section 4 reviews Belkhale and Banerjee's Quad
algorithm and presents extensions for more than two dimensions. Section 5 presents two
optimizations to exploit cluster locality, and section 6 gives performance results in 2D, 3D,
and 4D.
Related Work
Several algorithms for 2D cluster identification on distributed memory MIMD computers
have been presented in recent years.
Flanigan and Tamayo present a relaxation algorithm for a block domain decomposition
[4]. In this method, neighboring processors compare cluster labels and iterate until a
steady state is reached. Baillie and Coddington consider a similar approach in their self-labeling algorithm [5]. Both relaxation methods demonstrate reasonable scaleup for 2D
problems, but for critical cluster configurations the number of relaxation iterations grows as the distance between the two furthest processors (2√P for block decompositions on P processors). Other approaches similar to relaxation have been presented with strip decompositions
[6, 7]. Strip decompositions result in only two external surfaces per processor.
However, the distance between two processors can be as large as P , which increases the
number of stages to reach a steady state. Multigrid methods to accelerate the relaxation
algorithm for large clusters have been presented for SIMD architectures [8, 9].
Host-node algorithms involve communicating global connectivity information to a single
processor. This host processor labels the global graph and then communicates the results to
other processors. Host-node algorithms [10, 11, 5] do not scale to more than a few processors
since the serialized host process becomes a bottleneck.
Hierarchical methods for connected component labeling are characterized by a spatial
domain decomposition and propagation of global information in log P stages. Our approach
is based on the hierarchical Quad algorithm for VLSI circuit extraction on a hypercube
multiprocessor [1]. Other hierarchical methods for distributed memory computers have been
used for image component labeling [12, 13]. Baillie and Coddington consider a MIMD
hierarchical algorithm for the Ising model, but do not achieve good parallel efficiency [5].
Mino presents a hierarchical labeling algorithm for vector architectures [14].
There has been comparably little work evaluating MIMD cluster identification algorithms
in more than two dimensions. Bauernfeind et al. consider both relaxation and a host-
node approaches to the 3D problem [15]. They introduce the channel reduction and net list
optimizations to reduce communication and computation requirements in 3D. They conclude
that the host-node approach is inappropriate for 3D due to increased memory requirements
on the host node.
Fink et al. present 2D and 3D results from a preliminary implementation of the multidimensional
Quad algorithm [2]. This paper includes 4D results and introduces issues
pertaining to a dimension-independent implementation.
Ising Model
Many physical systems such as binary fluids, liquid and gas systems, and magnets exhibit
phase transitions. In order to understand these "critical phenomena," simple effective models
have been constructed in statistical mechanics. The simplest such model, the Ising model,
gives qualitative insights into the properties of phase transitions and sometimes can even
provide quantitative predictions for measurable physical quantities [16].
The Ising model can be solved exactly in 2D [17]. In more than two dimensions, exact
solutions are not known and numerical simulations are often used to obtain approximate
results. For example, numerical simulations of the 3D Ising model can be used to determine
properties of phase transitions in systems like binary liquids [18]. The 4D Ising model is
a prototype of a relativistic field theory and can be used to learn about non-perturbative
aspects, in particular phase transitions, of such theories [19].
In d dimensions, the Ising model consists of a d-dimensional lattice of variables (called spins) that take discrete values of ±1. Neighboring spins are coupled, with a coupling strength κ which is inversely proportional to the temperature T.
Monte Carlo simulations of the Ising model generate a sequence of spin configurations.
In traditional local-update Monte Carlo Ising model simulations, a spin's value may or may
not change depending on the values of its neighbors and a random variable [5]. Since each
spin update depends solely on local information, these algorithms map naturally onto a
distributed memory architecture.
The interesting physics arises from spin configurations in the critical region, where phase transitions occur. In these configurations, neighboring spins form large clusters in which all spins have the same value. Unfortunately, if ξ is the length over which spins are correlated (the correlation length), then the number of iterations required to reach a statistically independent configuration grows as ξ^z. For local update schemes the value of z (the dynamical critical exponent) is z ≈ 2. Thus, even for correlation lengths ξ as small as 10 to 100, critical slowing-down severely limits the effectiveness of local-update algorithms for the Ising model [20].
In order to avoid critical slowing-down, Swendson and Wang's cluster algorithm updates
whole regions of spins simultaneously [3]. This non-local update scheme generates
independent configurations in fewer iterations than the conventional algorithms. The cluster
algorithm has a much smaller value of z, often approaching 0. Therefore, it eliminates critical
slowing-down completely. The Swendson-Wang cluster algorithm proceeds as follows:
1. Compute bonds between spins. A bond exists with probability p = 1 − e^{−2κ} between adjacent spins with the same value.
2. Label clusters of spins, where clusters are defined by the transitive closure of the
relation "is a bond between".
3. Randomly assign all spins in each cluster a common spin value, ±1.
These steps are repeated in each iteration.
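The three steps above can be sketched for a small one-dimensional chain as follows. This is a hypothetical serial sketch (free boundaries, union-find labeling), assuming the usual Swendson-Wang bond probability p = 1 − exp(−2κ) for equal neighbors; the function name is ours:

```python
import math
import random

def swendson_wang_sweep(spins, kappa, rng=None):
    """One Swendson-Wang update of a 1D Ising chain with free boundaries:
    (1) place bonds, (2) label clusters with union-find, (3) flip clusters."""
    rng = rng or random.Random(0)
    n = len(spins)
    parent = list(range(n))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    p = 1.0 - math.exp(-2.0 * kappa)       # bond probability (assumed form)
    for i in range(n - 1):                 # step 1: bonds between equal neighbors
        if spins[i] == spins[i + 1] and rng.random() < p:
            parent[find(i)] = find(i + 1)  # step 2: merge the two clusters
    value = {}                             # step 3: one random value per cluster
    return [value.setdefault(find(i), rng.choice((-1, 1))) for i in range(n)]
```

On a distributed lattice, step 2 is exactly the piece that needs the global labeling algorithm discussed in the next section; the other steps stay local.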
On a distributed memory computer, a very large spin lattice must be partitioned spatially
across processors. With a block decomposition, step 1 is simple to parallelize, since we only
compute bonds between neighboring spins. Each processor must only communicate spins on
the boundaries to neighboring processors. The work in step 3 is proportional to the number
of clusters, which is typically much less than the number of lattice sites.
Step 2 is the bottleneck in the computation. A single cluster may span the entire lattice,
and thus the entire processor array. To label such a cluster requires global propagation
of information. Thus the labeling step is not ideally matched to a distributed memory
architecture, and requires an efficient parallel algorithm.
4.1 2D Quad Algorithm
Our cluster identification method is based on Belkhale and Banerjee's Quad algorithm for
geometric connected component labeling [1], which was developed to label connected sets
of rectangles that represent VLSI circuits in a plane. It is straightforward to apply the
same algorithm to label clusters in a 2D lattice of spin values. A brief description of the
Quad algorithm as applied to a 2D lattice of spins is presented here. For a more complete
description of the Quad algorithm, see [1].
The cluster labeling algorithm consists of a local labeling phase and a global combining
phase. First, the global 2D lattice is partitioned blockwise across the processor array. Each
processor labels the clusters in its local partition of the plane with some sequential labeling
algorithm. The Quad algorithm merges the local labels across processors to assign the correct
global label to each spin site on each processor.
On P processors, the Quad algorithm takes log P stages, during which each processor
determines the correct global labels for spins in its partition of the plane. Before each stage,
each processor has knowledge of a rectangular information region that spans an ever-growing
section of the plane. Intuitively, processor Q's information region represents the portion of
the global domain from which Q has already collected the information necessary to label Q's
local spins. The data associated with an information region consists of
• A list of labels of clusters that touch at least one border of the information region. These clusters are called CCOMP sets.
• For each of the four borders of the information region, a list representing the off-processor bonds that touch the border.
Each bond in a border list connects a spin site in the current information region with a
spin site that is outside the region. Each bond is associated with the CCOMP set containing
the local spin site. The border list data structure is a list of offsets into the list of CCOMP
set labels, where each offset represents one bond on the border. This indirect representation
facilitates Union-Find cluster mergers, which are described below (see figure 1).
Figure 1: Fields of an information region data structure (the list of CCOMP set labels and the border bond lists of offsets into it).
The initial information region for a processor consists of the CCOMP set labels and
border lists for its local partition of the plane. At each stage, each processor Q1 exchanges messages with a processor Q2 such that Q1's and Q2's information regions are adjacent. The messages contain the CCOMP set labels and border lists of the current information region. Processor Q1 merges the CCOMP sets on the common border of the two information regions using a Union-Find data structure [21]. The other border lists of the two information regions are concatenated to form the information region for processor Q1 in the next stage. In this
manner, the size of a processor's information region doubles at each stage so after log P
stages each processor's information region spans the entire plane. Figure 2 illustrates how
the information region grows to span the entire global domain.
For a planar topology, a processor's global combining is complete when its information
region spans the entire plane. If the global domain has a toroidal topology, clusters on
opposite borders of the last information region are merged in a post-processing step.
Figure 2: Information regions in each stage of the Quad algorithm, for sixteen processors. At each stage, the information region of the processor in the top left corner is the current information region; the partner processor and its information region are also shown. In each stage, the two information regions are merged, forming the information region for the subsequent stage.
4.2 Extending the Quad Algorithm to Higher Dimensions
A straightforward extension of the Quad algorithm to more than two dimensions results in fairly complex multidimensional information region data structures. To simplify implementation, we present a multidimensional extension using an abstract dimension-independent information region representation.
The divide-and-conquer Quad algorithm strategy can be naturally extended to d > 2 dimensions by partitioning the global domain into d-dimensional blocks, and assigning them one to a processor. Each processor performs a sequential labeling method on its local domain, and then the domains are translated into information regions for the global combining step.
An information region represents a d-dimensional subset of the d-dimensional global domain.
These d-dimensional information regions are merged at each stage of the algorithm, so after
log P stages the information region spans the entire global domain.
In two dimensions, the list of bonds on each border is just a 1D list, corresponding to the
1D border between two 2D information regions. Since bonds do not exist at every lattice
site, the border lists are sparse. For a 3D lattice, the border lists must represent sparse 2D
borders. In general, the border between two d-dimensional information regions is a (d-1)-dimensional hyperplane. Thus a straightforward 3D or 4D implementation would be much
more complex than in two dimensions, because sparse multidimensional hyperplanes must
be communicated and traversed in order to merge clusters.
To avoid this complication, note that if we impose an order on the bonds touching an
information region border, the actual spatial location of each bond within the border is not
needed to merge sets across processors. As long as each processor stores the border bonds
in the same order, we can store the bonds in a 1D list and merge clusters from different
processors by traversing corresponding border lists in order. Figure 3 illustrates this for 3D
lattices. This concept was first applied by Fink et al. to the 3D Quad algorithm [2], and a similar optimization was applied to 3D lattices by Bauernfeind et al. [15].
Figure 3: The 2D borders of a 3D information region are linearized by enumerating the border bonds in the same order on each processor.
We define an order on the border bonds by considering each (d-1)-dimensional border as a subset of the d-dimensional global lattice. Enumerate the bonds touching a (d-1)-dimensional border in column-major order relative to the d-dimensional global lattice. Since each processor enumerates the sites relative to the same global indices, each processor stores the sets on a border in the same order, without regard to the orientation of the border in space.
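As a concrete illustration, the column-major enumeration can be implemented by ranking each border site's global coordinate; a minimal sketch (function and variable names are ours, not taken from the implementation described here):

```cpp
#include <cstddef>
#include <vector>

// Column-major rank of a d-dimensional coordinate in the global lattice:
// the first index varies fastest. Sorting border bonds by the rank of their
// global site coordinates yields the same order on every processor,
// regardless of the border's orientation in space.
long columnMajorRank(const std::vector<long>& coord,
                     const std::vector<long>& dims) {
    long rank = 0, stride = 1;
    for (std::size_t k = 0; k < dims.size(); ++k) {
        rank += coord[k] * stride;
        stride *= dims[k];
    }
    return rank;
}
```

Because the rank depends only on the global coordinate, two processors that share a border enumerate its bonds identically, which is exactly what the merge step requires.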
This ordering linearizes (d-1)-dimensional borders, resulting in an abstract information
region whose border representations are independent of the problem dimensionality. When
two of these information regions are merged, the order of bonds in the new border lists is
consistent on different processors. Therefore, the logic of merging clusters on a border of
two information regions does not change for multidimensional lattices. No sparse hyperplane
data structures are required, and a 2D cluster implementation can be extended to 3D and
4D with few modifications.
4.3 Performance analysis
Belkhale and Banerjee show that the 2D Quad algorithm for VLSI circuit extraction runs in O(log P t_s + B t_b + B α(B)) time, where P is the number of processors, α() is the inverse of Ackermann's function, t_s is the message startup time, t_b is the communication time per byte, and B is the number of border rectangles along a cross section of the global domain [1]. The number of border rectangles in VLSI circuit extraction applications corresponds to the number of border bonds in cluster identification applications. For cluster identification on a 2D lattice, let N be the lattice size and p be the probability that there is a bond between two adjacent lattice points. Then B = O(p√N), giving a running time of O(log P t_s + t_b p√N + p√N α(p√N)).
For a d-dimensional problem, define N and p as above. Assume the global domain is a d-dimensional hypercube with sides N^{1/d}, which is partitioned onto a d-dimensional logical hypercube of processors with sides P^{1/d}. Suppose at stage i a processor's information region is a hypercube with sides of length a. Then at stage i + d the information region is
a hypercube with sides of length 2a. Thus, the surface area of the information region increases by a factor of 2^{d-1} every d stages. Let b(i) be the surface area of the information region at stage i. Then b(i) is at most

    b(i) ≤ 2d (N/P)^{(d-1)/d} (2^{d-1})^{⌈i/d⌉}.   (1)

It is easy to see that b(log P) = 2d N^{(d-1)/d}. The total number of bonds on the border of an information region is proportional to the surface area. Summing over log P stages, we find the total number of bytes that a processor communicates during the algorithm is O(2dpN^{(d-1)/d}). There are log P message starts, so the total time spent in communication is O(log P t_s + 2dpN^{(d-1)/d} t_b).
The total number of Union-Find operations performed by a processor at each stage is equal to the number of bonds on a border of the information region. Using the path compression and union by rank optimizations of Union-Find operations [21], the total work spent merging clusters is O(pdN^{(d-1)/d} α(pdN^{(d-1)/d})). (Our implementation uses the path compression heuristic but not union-by-rank.) Adding together communication and computation, the running time for global combining is O(log P t_s + 2dpN^{(d-1)/d} t_b + pdN^{(d-1)/d} α(pdN^{(d-1)/d})).
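The merging step relies on a disjoint-set (Union-Find) structure; a minimal sketch using the path compression heuristic only, matching the implementation choice noted above (the struct and method names are ours):

```cpp
#include <vector>

// Disjoint-set structure for merging CCOMP sets: path compression only,
// no union-by-rank.
struct UnionFind {
    std::vector<int> parent;
    explicit UnionFind(int n) : parent(n) {
        for (int i = 0; i < n; ++i) parent[i] = i;   // each label is its own set
    }
    int find(int x) {
        if (parent[x] != x) parent[x] = find(parent[x]);  // path compression
        return parent[x];
    }
    void unite(int a, int b) { parent[find(a)] = find(b); }
};
```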
Breadth-First Search (BFS) has been shown to be an efficient algorithm to perform the sequential local labeling step [5]. Since BFS runs in O(|V| + |E|) time [21], the local labeling phase runs in O((N/P)(1 + pd)). Thus, for a lattice of any dimension, the time for the local phase will
dominate the time for the global phase as long as N is large. However, as d increases, the
global time increases relative to the local time for a fixed problem size. We must therefore
scale the problem size along with the problem dimensionality in order to realize equivalent
parallel efficiency for higher dimensional lattices.
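For reference, the sequential BFS labeling of a local bond lattice can be sketched as follows in 2D (the bond-array layout and names are our own; an actual implementation would handle d dimensions):

```cpp
#include <queue>
#include <utility>
#include <vector>

// BFS connected-component labeling of a local 2D bond lattice.
// bondRight[y][x] / bondDown[y][x] say whether a bond joins site (x,y)
// to its right/down neighbor. Returns one label per site; O(|V| + |E|).
std::vector<std::vector<int>> bfsLabel(
        const std::vector<std::vector<bool>>& bondRight,
        const std::vector<std::vector<bool>>& bondDown) {
    int H = static_cast<int>(bondRight.size());
    int W = static_cast<int>(bondRight[0].size());
    std::vector<std::vector<int>> label(H, std::vector<int>(W, -1));
    int next = 0;
    for (int sy = 0; sy < H; ++sy)
        for (int sx = 0; sx < W; ++sx) {
            if (label[sy][sx] != -1) continue;   // already in a cluster
            label[sy][sx] = next;
            std::queue<std::pair<int,int>> q;
            q.push({sx, sy});
            while (!q.empty()) {
                auto [x, y] = q.front(); q.pop();
                auto visit = [&](int nx, int ny) {
                    if (label[ny][nx] == -1) { label[ny][nx] = next; q.push({nx, ny}); }
                };
                // follow bonds in all four directions
                if (x + 1 < W && bondRight[y][x])     visit(x + 1, y);
                if (x > 0     && bondRight[y][x - 1]) visit(x - 1, y);
                if (y + 1 < H && bondDown[y][x])      visit(x, y + 1);
                if (y > 0     && bondDown[y - 1][x])  visit(x, y - 1);
            }
            ++next;
        }
    return label;
}
```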
5 Optimizations
One limitation of the Quad algorithm is that the surface area of the information region
grows in each stage. By the last stage, each processor must handle a cross-section of the
entire global domain. With many processors and large problem sizes, this can degrade the
algorithm's performance [1]. To mitigate this effect, we have developed optimizations that
exploit properties of the cluster configuration for better performance.
In Monte Carlo Ising model simulations, the cluster configuration structure depends heavily on the coupling constant κ. Recall that the probability that a bond exists between two adjacent same-valued spins is p = 1 - e^{-2κ}. For subcritical (low) κ, bonds are relatively sparse and most clusters are small. For supercritical (high) κ, bonds are relatively dense and one large cluster tends to permeate the entire lattice. At criticality, the system is in transition between these two cases, and the cluster configurations are combinations of small and large clusters.
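A small helper makes the κ-dependence concrete; we assume the standard Swendsen-Wang bond probability p = 1 - e^{-2κ} for adjacent same-valued spins:

```cpp
#include <cmath>

// Probability of placing a bond between two adjacent same-valued spins
// at coupling kappa (standard Swendsen-Wang form; an assumption of this
// sketch). Bond density grows monotonically with kappa.
double bondProbability(double kappa) {
    return 1.0 - std::exp(-2.0 * kappa);
}
```

Under this assumption, the 2D critical coupling κ_c ≈ 0.221 quoted below gives p ≈ 0.357, between the sparse subcritical and dense supercritical regimes.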
How any particular spin affects the labels of other spins depends on the cluster configuration
properties. We identify the following two types of locality that may exist in a cluster
configuration:
• Type 1: Small clusters only affect cluster labels in a limited area.
• Type 2: Adjacent lattice points are likely to belong to the same cluster.
Subcritical configurations exhibit Type 1 locality, and supercritical configurations exhibit
Type 2 locality. Configurations at criticality show some aspects of both types.
Belkhale and Banerjee exploit Type 1 locality in two dimensions with the Overlap Quad
algorithm [1]. In this algorithm, information regions overlap and only clusters that span the
overlap region must be merged. Intuitively, small clusters are eliminated in early stages of the
algorithm, leaving only large clusters to merge in later stages. The Overlap Quad algorithm
requires that the positions of bonds within borders be maintained, precluding use of an
abstract dimension-independent information region data structure. Instead, we present two
simpler optimizations, Bubble Elimination and Border Compression. These optimizations
work with the abstract border representations, so they are no more complicated to implement
in d > 2 dimensions than in 2D.
5.1 Bubble Elimination
Bubble Elimination exploits Type 1 locality by eliminating small clusters in a preprocessing
phase to the Quad algorithm. A local cluster that touches only one border of the information
region is called a bubble. Immediately after initializing its information region, each processor
identifies the bubbles along each border. This information is exchanged with each neighbor,
and clusters marked as bubbles on both sides of a border are merged and deleted from the
borders. Thus, small clusters are eliminated from the information regions before performing
the basic Quad algorithm. During the course of the Quad algorithm, communication and
computation is reduced since the bubble clusters are not considered.
Bubble elimination incurs a communication overhead of 3^d - 1 messages for a d-dimensional problem. If we communicate with Manhattan neighbors only, the communication overhead
drops to 2d messages. Although bubbles on the corners and edges of an information region
are not eliminated, this effect is insignificant if the granularity of the problem is sufficiently
large.
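The message counts above are simple to verify: a d-dimensional block has 3^d - 1 neighbors in the full stencil but only 2d face (Manhattan) neighbors. A throwaway check:

```cpp
// Neighbor counts of a d-dimensional block partition:
// full 3x...x3 stencil (corners and edges included) vs. faces only.
long allNeighbors(int d) {
    long n = 1;
    for (int i = 0; i < d; ++i) n *= 3;   // 3^d cells in the stencil
    return n - 1;                          // exclude the block itself
}

int manhattanNeighbors(int d) { return 2 * d; }
```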
5.2 Border Compression
Border Compression exploits Type 2 locality by changing the representation of the border
lists. We compress the representation of each list using run-length encoding [22]. That is, a border list of set labels is replaced by a sequence of pairs ((l_1, s(l_1)), (l_2, s(l_2)), ..., (l_k, s(l_k))), where s(l_i) is the number of times value l_i appears in succession in the border list.
If Type 2 locality is prevalent, border compression aids performance in two ways: it
reduces the length of messages, and we can exploit the compressed representation to reduce
the number of Union-Find operations that are performed. Before two compressed borders are
merged, they are decompressed to form two corresponding lists of cluster labels to combine.
From the compressed representation, it is simple to determine when two clusters are merged
together several times in succession. During decompression, it is simple to filter redundant
mergers out of the lists, reducing the number of Union-Find mergers to be performed. Thus,
border compression reduces both communication volume and computation.
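A minimal sketch of both pieces, run-length encoding a border list and filtering redundant successive mergers when two corresponding lists are traversed (function names are ours, not the implementation's):

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Run-length encode a border list of cluster labels as (label, run) pairs.
std::vector<std::pair<int,int>> compressBorder(const std::vector<int>& labels) {
    std::vector<std::pair<int,int>> runs;
    for (int l : labels) {
        if (!runs.empty() && runs.back().first == l) ++runs.back().second;
        else runs.emplace_back(l, 1);
    }
    return runs;
}

// While traversing two corresponding border lists, successive identical
// (a, b) pairs would trigger redundant Union-Find mergers; filter them out.
std::vector<std::pair<int,int>> mergerList(const std::vector<int>& a,
                                           const std::vector<int>& b) {
    std::vector<std::pair<int,int>> out;
    for (std::size_t i = 0; i < a.size(); ++i) {
        std::pair<int,int> pr(a[i], b[i]);
        if (out.empty() || out.back() != pr) out.push_back(pr);
    }
    return out;
}
```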
For some cluster configurations, bubble elimination increases the effectiveness of border
compression. Suppose the global cluster configuration resembles swiss cheese, in that there
are many small clusters interspersed with one large cluster. This phenomenon occurs in Ising
model cluster configurations with - at or above criticality. Bubble elimination removes most
small clusters during preprocessing, leaving most active clusters belonging to the one large
cluster. In this case, there will be long runs of identical labels along a border of an information
region. Border compression collapses these runs, leaving small effective information region
borders.
6 Performance Results
6.1 Implementation
We have implemented the cluster algorithm and Ising model simulation in 2D, 3D, and
4D with C++. The global lattice has a toroidal topology in all directions. When using
bubble elimination, only Manhattan neighbors are considered. The local labeling method is
Breadth-First Search [21].
According to the Swendson-Wang algorithm, clusters must be flipped randomly after
the cluster identification step. For a spatially decomposed parallel implementation, it is
necessary that all processors obtain consistent pseudorandom numbers when generating the
new spins for clusters that span more than one processor. In our implementation, we generate
the new random spins for each local cluster prior to global cluster merging, and store the
new spin as the high-order bit in the cluster label. Thus, after cluster merging, all spins in
a cluster are guaranteed to be consistent.
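The spin-bit encoding can be sketched as follows (we assume 32-bit labels here; the actual label width in the implementation may differ):

```cpp
#include <cstdint>

// Store the new spin as the high-order bit of a 32-bit cluster label,
// so all spins in a merged cluster come out consistent across processors.
const std::uint32_t SPIN_BIT = 1u << 31;

std::uint32_t withSpin(std::uint32_t label, bool spinUp) {
    return spinUp ? (label | SPIN_BIT) : (label & ~SPIN_BIT);
}
bool spinOf(std::uint32_t label)            { return (label & SPIN_BIT) != 0; }
std::uint32_t baseLabel(std::uint32_t label) { return label & ~SPIN_BIT; }
```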
To simplify implementation in more than 2 dimensions, we use the LPARX programming
library, version 1.1 [23]. LPARX manages data distribution and communication between
Cartesian lattices, and greatly simplifies the sections of the code that manages the regular
spin lattice. The kernel of the cluster algorithm is written in a message-passing style using the
message-passing layer of LPARX[24], a generic message-passing system resembling the
Message Passing Interface[25]. Since the cluster algorithm is largely dimension-independent,
the message-passing code is almost identical for each problem dimensionality. In fact, the
same code generates the 2D, 3D, and 4D versions; compile-time macros determine the problem
dimensionality.
The code was developed and debugged on a Sun workstation using LPARX mechanisms
to simulate parallel processes and message-passing. All performance results were obtained
from runs on an Intel Paragon under OSF/1 R1.2, with 32MB/node. The code was compiled
with gcc v.2.5.7 with optimization level 2.
6.2 Performance
The total cluster identification time consists of a local stage, to perform local labeling with
a sequential algorithm, and a global stage, to combine labels across processors using the
multidimensional Quad algorithm. All times reported are wall clock times and were obtained
with the Paragon dclock() system call, which has an accuracy of 100 nanoseconds.
Intuitively, we expect the benefits from bubble elimination and border compression to
vary with κ, the coupling constant. Figures 4, 5, and 6 show the global stage running times at varying values of κ. For the problem sizes shown, the critical region occurs at κ_c ≈ 0.221 in 2D, κ_c ≈ 0.111 in 3D, and κ_c ≈ 0.08 in 4D.
Since the surface area-to-volume ratio is larger in 3D and 4D than in 2D, the optimizations
are more important for these problems. As expected, figures 5 and 6 show that bubble
elimination is effective in the subcritical region, and border compression is effective in the
supercritical region. In the critical region, the two optimizations together are more effective
than either optimization alone. Presumably this is due to the "swiss cheese" effect discussed
in Section 5. Together the optimizations improve performance by 35-85%, depending on κ,
in both 3D and 4D.
[Figure 6: 4D global combining time per site versus κ, for a 68x68x34x34 lattice, with curves for bubble elimination, border compression, and both optimizations together.]
In 2D, the optimizations improve performance by 20-70%, but do not show the intuitive
dependence on κ as in 3D and 4D. We suspect this is due to cache effects. As κ increases, the number of global clusters decreases. Thus, during cluster merging, more union-find data structure accesses will be cache hits at higher κ, since a greater portion of the union-find data structure fits in cache. In 2D, the surface area-to-volume ratio is low, so these
union-find accesses become the dominant factor in the algorithm's performance. In 3D and
4D, information region borders are much larger, overflowing the cache and causing many
more cache misses traversing the borders of the information region. Since these borders are
larger than the union-find data structures, union-find data structure memory accesses are
less critical.
Figure 7 shows the relative costs of the local stage and global stage with and without
optimizations. The breakdown shows that in 2D, the local labeling time dominates the global
time, so the benefit from optimizations is limited by Amdahl's law [26]. However, in 3D and
4D, the global stage is the bottleneck in the computation, so the two optimizations have a
significant impact on performance.
Timing results are instructive, but depend on implementation details and machine architecture. To evaluate the optimizations with a more objective measure, Table 1 shows the
total number of bytes transmitted in the course of the Quad algorithm. Since the amount
of work to merge clusters is directly proportional to the length of messages, these numbers
give a good indication of how successfully the optimizations exploit various cluster configurations. The communication volume reduction varies depending on the cluster configuration
structure, ranging up to a factor of twenty to one.
Since physicists are interested in using parallel processing capability to run larger problems
than possible on a single workstation, it is appropriate to examine the algorithm's
performance as the problem size and number of processors grow. For an ideal linear parallel
Figure 7: Breakdown of algorithm costs, normalized per spin site (local and global labeling time per site, in ns). All runs here are with 64 processors of an Intel Paragon. The lattice sizes are 4680x4680 in 2D, 280x280x280 in 3D, and 68x68x34x34 in 4D. For subcritical runs, κ = 0.04 in 4D; for critical runs, κ = 0.221 in 2D, 0.111 in 3D, and 0.08 in 4D; for supercritical runs, κ = 0.2 in 3D and 0.2 in 4D.
[Table 1: Total number of bytes transmitted during global combining, with columns for no optimization, bubble elimination, and border compression, and rows for the 2D 4680x4680, 3D 280x280x280, and 4D 68x68x34x34 lattices. All runs are with 64 processors.]
algorithm, if the problem size and number of processors are scaled together asymptotically,
the running time remains constant. Due to the global nature of the cluster identification
problem, the basic Quad algorithm cannot achieve ideal scaled speedup in practice. Since
the Quad algorithm takes log P stages, the global work should increase by at least log P . A
further difficulty is that in d dimensions, the work in the last stage of the algorithm doubles
every d stages.
However, the bubble elimination and border compression optimizations vastly reduce the
work in later stages of the algorithm. Thus, with the optimizations, we can get closer to
achieving ideal scaled speedup. Table 2 shows these scaled speedup results for a fixed number
of spin sites per processor for critical cluster configurations. The results show that as the
number of processors and problem size are scaled together, the performance benefit from the
optimizations increases. In 2D, the scaled speedup with the optimizations is nearly ideal.
The 3D and especially 4D versions do not scale as well, although figure 7 shows that better
performance is achieved away from criticality.
Although the optimizations were developed with the multidimensional Quad algorithm
in mind, we conjecture that they would also be effective for other cluster identification
algorithms, such as relaxation methods [4, 6, 7]. The multidimensional Quad algorithm and
optimizations may also be appropriate for other variants of connected component labeling.
One open question is whether the border compression and bubble elimination optimizations
would effectively exploit the graph structure of other applications, such as image component
labeling applications.
7 Conclusion
We have presented an efficient multidimensional extension to Belkhale and Banerjee's Quad
algorithm for connected component labeling. Since the extension deals with abstract spatial
[Table 2: Global combining time, in seconds, when the lattice size and number of processors are scaled together, with and without both optimizations, in 2D, 3D, and 4D. Each processor's partition is 585x585 in 2D, 70x70x70 in 3D, and 17x17x17x17 in 4D. All runs are at κ_c.]
connectivity information, distributed data structures are managed in a dimension-independent
manner. This technique considerably simplifies implementations in more than two dimensions. We introduced two optimizations to the basic algorithm that effectively exploit locality
in Ising model cluster configurations. Depending on the structure of cluster configurations,
the optimizations improve performance by 20-80% on the Intel Paragon. With the optimizations, large lattices can be labeled on many processors with good parallel efficiency.
The optimizations are especially important in more than two dimensions, where the surface
area-to-volume ratio is high.
References
"Parallel algorithms for geometric connected component labeling on a hypercube multiprocessor,"
"Cluster identification on a distributed memory multiprocessor,"
"Nonuniversal critical dynamics in monte carlo simu- lations,"
"A parallel cluster labeling method for monte carlo dy- namics,"
"Cluster identification algorithms for spin models - sequential and parallel,"
"Parallelization of the Ising model and its performance evaluation,"
"Swendson-wang dynamics on large 2d critical Ising models,"
"A multi-grid cluster labeling scheme,"
"A parallel multigrid algorithm for percolation clusters,"
"Parallel simulation of the Ising model,"
"Paralleliza- tion of the 2d swendson-wang algorithm,"
"Evaluation of connected component labeling algorithms on shared and distributed memory multiprocessors,"
"Component labeling algorithms on an intel ipsc/2 hypercube,"
"A vectorized algorithm for cluster formation in the Swendson-Wang dynam- ics,"
"3D Ising model with swendson-wang dynamics: A parallel approach,"
Statistical Field Theory.
"Crystal statistics. i. a two-dimensional model with an order-disorder tran- sition,"
"Numerical investigation of the interface tension in the three-dimensional Ising model,"
"Broken phase of the 4-dimensional Ising model in a finite volume,"
Computer simulation methods in theoretical physics.
Introduction to Algorithms.
"A robust parallel programming model for dynamic, non-uniform scientific computation,"
"The LPARX user's guide v2.0,"
Message Passing Interface Forum
Computer Architecture A Quantitative Approach.
271442 | Prior Learning and Gibbs Reaction-Diffusion. | AbstractThis article addresses two important themes in early visual computation: First, it presents a novel theory for learning the universal statistics of natural imagesa prior model for typical cluttered scenes of the worldfrom a set of natural images, and, second, it proposes a general framework of designing reaction-diffusion equations for image processing. We start by studying the statistics of natural images including the scale invariant properties, then generic prior models were learned to duplicate the observed statistics, based on the minimax entropy theory studied in two previous papers. The resulting Gibbs distributions have potentials of the form $U\left( {{\schmi{\bf I}};\,\Lambda ,\,S} \right)=\sum\nolimits_{\alpha =1}^K {\sum\nolimits_{ \left( {x,y} \right)} {\lambda ^{\left( \alpha \right)}}}\left( {\left( {F^{\left( \alpha \right)}*{\schmi{\bf I}}} \right)\left( {x,y} \right)} \right)$ with being a set of filters and the potential functions. The learned Gibbs distributions confirm and improve the form of existing prior models such as line-process, but, in contrast to all previous models, inverted potentials (i.e., (x) decreasing as a function of |x|) were found to be necessary. We find that the partial differential equations given by gradient descent on U(I; , S) are essentially reaction-diffusion equations, where the usual energy terms produce anisotropic diffusion, while the inverted energy terms produce reaction associated with pattern formation, enhancing preferred image features. We illustrate how these models can be used for texture pattern rendering, denoising, image enhancement, and clutter removal by careful choice of both prior and data models of this type, incorporating the appropriate features. | texture pattern rendering, denoising, image enhancement and clutter removal by careful
choice of both prior and data models of this type, incorporating the appropriate features.
Song Chun Zhu is now with the Computer Science Department, Stanford University,
Stanford, CA 94305, and David Mumford is with the Division of Applied Mathematics,
Brown University, Providence, RI 02912. This work started when the authors were at
Harvard University.
1 Introduction and motivation
In computer vision, many generic prior models have been proposed to capture the
universal low level statistics of natural images. These models presume that surfaces of objects are smooth and that adjacent pixels in images have similar intensity values unless separated by edges; they are applied in vision algorithms ranging from image restoration and motion analysis to 3D surface reconstruction.
For example, in image restoration general smoothness models are expressed as probability distributions [9, 4, 20, 11]:

    p(I) = (1/Z) exp{ - Σ_{(x,y)} [ψ(∇_x I(x,y)) + ψ(∇_y I(x,y))] },

where I is the image, Z is a normalization factor, and ∇_x I(x,y) = I(x+1,y) - I(x,y), ∇_y I(x,y) = I(x,y+1) - I(x,y) are differential operators. Three typical forms of the potential function ψ() are displayed in figure 1. The functions in figures 1b and 1c have flat tails to preserve edges and object boundaries, and thus they are said to have advantages over the quadratic function in figure 1a.
Figure 1: Three existing forms for ψ(). a, Quadratic: ψ(ξ) = ξ². b, Line process: a truncated quadratic with a flat tail. c, A smooth potential with a flat tail.
These prior models have been motivated by regularization theory [26, 18], physical modeling [31, 4], Bayesian theory [9, 20] and robust statistics [19, 13, 3]. (In regularization theory, the smoothness term is explained as a stabilizer for solving "ill-posed" problems [32].) Some
connections between these interpretations are also observed in [12, 13] based on
effective energy in statistical mechanics. Prior models of this kind are either generalized
from traditional physical models [37] or chosen for mathematical convenience.
There is, however, little rigorous theoretical or empirical justification for applying
these prior models to generic images, and there is little theory to guide the construction
and selection of prior models. One may ask the following questions.
1. Why are the differential operators good choices in capturing image features?
2. What is the best form for p(I) and /()?
3. A relevant fact is that real world scenes are observed at more or less arbitrary
scales, thus a good prior model should remain the same for image features
at multiple scales. However none of the existing prior models has the scale-invariance
property on the 2D image lattice, i.e., is renormalizable in terms
of renormalization group theory [36].
In previous work on modeling textures, we proposed a new class of Gibbs distributions of the following form [40, 41]:

    p(I; Λ, S) = (1/Z) e^{-U(I; Λ, S)},   (2)

where U(I; Λ, S) = Σ_{α=1}^{K} Σ_{(x,y)} λ^{(α)}((F^{(α)} * I)(x,y)). In the above equation, S = {F^{(1)}, F^{(2)}, ..., F^{(K)}} is a set of linear filters, and Λ = {λ^{(1)}(), λ^{(2)}(), ..., λ^{(K)}()} is a set of potential functions on the features extracted
by S. The central property of this class of models is that they can reproduce the
marginal distributions of F^{(α)} * I estimated over a set of training images - while having the maximum entropy - and the best set of features {F^{(1)}, F^{(2)}, ..., F^{(K)}} is selected by minimizing the entropy of p(I) [41]. (If ψ() is quadratic, then variational solutions minimizing the potential are splines, such as flexible membrane or thin plate models.) The conclusion of our earlier papers
is that for an appropriate choice of a small set of filters S, random samples from
these models can duplicate very general classes of textures - as far as normal human
perception is concerned. Recently we found that similar ideas of model inference
using maximum entropy have also been used in natural language modeling[1].
In this paper, we want to study to what extent probability distributions of this
type can be used to model generic natural images, and we try to answer the three
questions raised above.
We start by studying the statistics of a database of 44 real world images, and then
we describe experiments in which Gibbs distributions in the form of equation (2)
were constructed to duplicate the observed statistics. The learned potential functions
can be classified into two categories: diffusion terms, which are similar to figure 1c, and reaction terms which, in contrast to all previous models, have inverted potentials (i.e., λ(ξ) decreasing as a function of |ξ|).
We find that the partial differential equations given by gradient descent on U(I; Λ, S) are essentially reaction-diffusion equations, which we call the Gibbs Reaction And Diffusion Equations (GRADE). In GRADE, the diffusion components
produce denoising effects similar to anisotropic diffusion [25], while
reaction components form patterns and enhance preferred image features.
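Written out, the gradient-descent dynamics behind GRADE take the following form; this is a sketch in our notation, where F^{(α)-} denotes the mirrored filter:

```latex
\frac{\partial \mathbf{I}}{\partial t}
  \;=\; -\,\frac{\partial U(\mathbf{I};\,\Lambda,\,S)}{\partial \mathbf{I}}
  \;=\; -\sum_{\alpha=1}^{K} F^{(\alpha)-} \,*\,
        \lambda^{(\alpha)\prime}\!\bigl( (F^{(\alpha)} * \mathbf{I})(x,y) \bigr),
\qquad
F^{(\alpha)-}(x,y) \;:=\; F^{(\alpha)}(-x,-y).
```

Terms whose λ^{(α)} increases with |ξ| drive the filter responses toward zero and act as diffusion, while inverted potentials push responses away from zero and act as pattern-forming reaction terms.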
The learned prior models are applied to the following applications.
First, we run the GRADE starting with white noise images, and demonstrate
how GRADE can easily generate canonical texture patterns such as leopard blobs
and zebra stripes, as the Turing reaction-diffusion equations do [34, 38]. Thus our
theory provides a new method for designing PDEs for pattern synthesis.
Second, we illustrate how the learned models can be used for denoising, image
enhancement and clutter removal by careful choice of both prior and noise models
of this type, incorporating the appropriate features extracted at various scales
and orientations. The computation simulates a stochastic process - the Langevin
equations - for sampling the posterior distribution.
This paper is arranged as follows. Section (2) presents a general theory for prior
learning. Section (3) demonstrates some experiments on the statistics of natural
images and prior learning. Section (4) studies the reaction-diffusion equations.
Section (5) demonstrates experiments on denoising, image enhancement and clutter
removal. Finally section (6) concludes with a discussion.
2 Theory of prior learning
2.1 Goal of prior learning and two extreme cases
We define an image I on an N × N lattice L to be a function such that for any pixel (x, y), I(x, y) ∈ L, where the intensity range L is either an interval of R or L ⊂ Z. We assume that there is an underlying probability distribution f(I) on the image space L^{N²} for general natural images, i.e., arbitrary views of the world. Let {I_obs,n, n = 1, 2, ..., M} be a set of observed images which are independent samples from f(I).
The objective of learning a generic prior model is to look for common features and
their statistics from the observed natural images. Such features and their statistics
are then incorporated into a probability distribution p(I) as an estimation of f(I),
so that p(I), as a prior model, will bias vision algorithms against image features
which are not typical in natural images, such as noise distortion and blurring. For
this objective, it is reasonable to assume that any image features have equal chance
to occur at any location, so f(I) is translation invariant with respect to (x; y). We
will discuss the limits of this assumption in section (6).
To study the properties of the images {I_obs,n, n = 1, ..., M}, we start by exploring a set of linear filters S = {F^(α), α = 1, ..., K} which are characteristic of the observed images. The statistics extracted by S are the empirical marginal distributions (or histograms) of the filter responses.
Given a probability distribution f(I), the marginal distribution of f(I) with respect to a linear filter F^(α) is, for all z ∈ R,

f^(α)(z) = ∫ δ(z − F^(α) * I(x, y)) f(I) dI,

where F^(α) * I denotes filtering, and δ(·) is a Dirac function with point mass concentrated at 0.
Given a linear filter F^(α) and an image I, the empirical marginal distribution (or histogram) of the filtered image F^(α) * I(x, y) is

H^(α)(z; I) = (1/|L|) Σ_{(x,y)∈L} δ(z − F^(α) * I(x, y)).

We compute the histogram averaged over all images in {I_obs,n} as the observed statistics,

μ^(α)_obs(z) = (1/M) Σ_{n=1}^{M} H^(α)(z; I_obs,n).

If we make a good choice of our database, then we may assume that μ^(α)_obs(z) is an unbiased estimate of f^(α)(z), and as M → ∞, μ^(α)_obs(z) converges to f^(α)(z).
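These averaged statistics are straightforward to compute. The following sketch (NumPy assumed; the horizontal-difference filter, the bin count, and the response range are illustrative choices, not the paper's) estimates μ_obs(z) for a gradient filter over a set of images:

```python
import numpy as np

def filter_histogram(image, bins, value_range):
    """Empirical marginal distribution H(z; I) of horizontal-difference responses."""
    responses = image[:, 1:] - image[:, :-1]   # a simple nabla_x stand-in
    hist, _ = np.histogram(responses, bins=bins, range=value_range, density=True)
    return hist

def observed_statistics(images, bins=61, value_range=(-30.0, 30.0)):
    """Average histogram mu_obs(z) over a set of observed images."""
    hists = [filter_histogram(im, bins, value_range) for im in images]
    return np.mean(hists, axis=0)

# Toy "database": random stand-ins for observed images.
rng = np.random.default_rng(0)
images = [rng.uniform(0, 31, size=(64, 64)) for _ in range(5)]
mu_obs = observed_statistics(images)
```

With `density=True`, each per-image histogram integrates to one, so the average does as well.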
Now, to learn a prior model from the observed images {I_obs,n}, immediately we have two simple solutions. The first one is

p_1(I) = Π_{(x,y)} μ^(0)_obs(I(x, y)),   (4)

where μ^(0)_obs is the observed average histogram of the image intensities, i.e., the filter δ(·) is used. Taking ψ_1(z) = −log μ^(0)_obs(z), we rewrite equation (4) as

p_1(I) = exp{ −Σ_{(x,y)} ψ_1(I(x, y)) }.   (5)

For the second solution, let ‖I − I_obs,n‖ denote a distance between I and an observed image; the second solution is then the empirical distribution,

p_2(I) = (1/M) Σ_{n=1}^{M} δ(‖I − I_obs,n‖),   (7)

which is related to the Potts model [37].
These two solutions stand for two typical mechanisms for constructing probability
models in the literature. The first is often used for image coding [35], and the
second is a special case of the learning scheme using radial basis functions (RBF)
[27].³ Although the philosophies for learning these two prior models are very different, we observe that they share two common properties.
1. The potentials ψ_1(·), ψ_2(·) are built on the responses of linear filters. In equation (5), δ(·) acts as a filter of a single pixel; in equation (7), the observed images I_obs,n, n = 1, ..., M, are used as linear filters of size N × N pixels, which we denote by F^(obs_n).
2. For each filter F^(α) chosen, p(I) in both cases duplicates the observed marginal distributions. It is trivial to prove that E_p[H^(α)(z; I)] = μ^(α)_obs(z); thus as M increases, the marginals of p(I) converge to those of f(I).
This second property is in general not satisfied by existing prior models in equation (1). p(I) in both cases meets our objective for prior learning, but intuitively neither is a good choice. In equation (5), the δ(·) filter does not capture spatial structures larger than one pixel, and in equation (7), the filters F^(obs_n) are too specific to predict features in unobserved images.
In fact, the filters used above lie at the two extremes of the spectrum of all linear filters. As discussed by Gabor [7], the δ filter is localized in space but extended uniformly in frequency. In contrast, some other filters, like the sine waves, are well localized in frequency but extended in space. The filter F^(obs_n) involves a specific combination of all the components in both space and frequency. A quantitative analysis of the goodness of these filters is given in table 1 in section (3.2).
³ In RBF, the basis functions are presumed to be smooth, such as a Gaussian function. Here, using δ(·) is more loyal to the observed data.
2.2 Learning prior models by minimax entropy
To generalize the two extreme examples, it is desirable to compute a probability
distribution which duplicates the observed marginal distributions for an arbitrary
set of filters, linear or nonlinear. This goal is achieved by a minimax entropy theory
studied for modeling textures in our previous papers [40, 41].
Given a set of filters S = {F^(α), α = 1, ..., K} and observed statistics {μ^(α)_obs(z), α = 1, ..., K}, a maximum entropy distribution is derived which has the following Gibbs form:

p(I; Λ, S) = (1/Z) exp{ −Σ_{α=1}^{K} Σ_{(x,y)} λ^(α)(F^(α) * I(x, y)) }.

In the above equation, we consider linear filters only, and Λ = {λ^(α)(·), α = 1, ..., K} is a set of potential functions on the features extracted by S. In practice, image intensities are discretized into a finite number of gray levels, and the filter responses are divided into a finite number of bins; therefore each λ^(α)(·) is approximated by a piecewise constant function, i.e., a vector, which we denote by λ^(α).
The λ^(α)'s are computed in a non-parametric way so that the learned p(I; Λ, S) reproduces the observed statistics:

E_{p(I; Λ, S)}[H^(α)(z; I)] = μ^(α)_obs(z), for α = 1, ..., K and all z.

Therefore, as far as the selected features and their statistics are concerned, we cannot distinguish between p(I; Λ, S) and the "true" distribution f(I).
Unfortunately, there is no simple way to express the λ^(α)'s in terms of the μ^(α)_obs's as in the two extreme examples. To compute the λ^(α)'s, we adopted the Gibbs sampler [9], which simulates an inhomogeneous Markov chain in the image space L^{N²}. This Monte Carlo method iteratively samples from the distribution p(I; Λ, S), followed by computing the histograms of the filter responses for this sample and updating the λ^(α)'s to bring the histograms closer to the observed ones. For a detailed account of the computation of the λ^(α)'s, the readers are referred to [40, 41].
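The λ update can be caricatured in one dimension. The sketch below replaces the Gibbs sampler over the image lattice by exact computation of a 1-D model marginal (so the loop is self-contained); the step size, bin layout, and target histogram are illustrative assumptions, not the paper's:

```python
import numpy as np

def learn_potential(mu_obs, steps=20000, eta=1.0):
    """Fit lambda so that p(z) proportional to exp(-lambda[z]) reproduces mu_obs.

    A 1-D caricature of the non-parametric update:
    lambda <- lambda + eta * (model histogram - observed histogram).
    """
    lam = np.zeros_like(mu_obs)
    for _ in range(steps):
        p = np.exp(-lam)
        p /= p.sum()               # exact model marginal (stands in for sampling)
        lam += eta * (p - mu_obs)  # bring the histograms closer
    return lam

# Target: a peaked "observed" histogram over 21 bins.
z = np.linspace(-10.0, 10.0, 21)
mu_obs = np.exp(-np.abs(z) / 2.0)
mu_obs /= mu_obs.sum()
lam = learn_potential(mu_obs)
p_final = np.exp(-lam)
p_final /= p_final.sum()
```

The update is gradient ascent on the log-likelihood of the observed histogram, whose fixed point reproduces mu_obs exactly.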
In our previous papers, the following two propositions are observed.
Proposition 1 Given a filter set S and observed statistics {μ^(α)_obs, α = 1, ..., K}, there is a unique solution for Λ = {λ^(α), α = 1, ..., K}.
Proposition 2 f(I) is determined by its marginal distributions; thus p(I; Λ, S) → f(I) if it reproduces all the marginal distributions of linear filters.
But for computational reasons, it is often desirable to choose a small set of filters which most efficiently capture the image structures. Given a set of filters S and an ME distribution p(I; Λ, S), the goodness of p(I; Λ, S) is often measured by the Kullback-Leibler distance between p(I; Λ, S) and the ideal distribution f(I):

KL(f, p) = ∫ f(I) log [ f(I) / p(I; Λ, S) ] dI.

Then, for a fixed model complexity K, the best feature set S* is selected by the following criterion,

S* = arg min_{S ⊂ B, |S| = K} KL(f, p),

where S is chosen from a general filter bank B, such as Gabor filters at multiple scales and orientations.
Enumerating all possible sets of features S in the filter bank and comparing their entropies is computationally too expensive. Instead, in [41] we propose a stepwise greedy procedure for minimizing the KL-distance. We start from S = ∅, i.e., a uniform distribution, and introduce one filter at a time. Each added filter is chosen to maximally decrease the KL-distance, and we keep doing this until the decrease is smaller than a certain value.
In the experiments of this paper, we have used a simpler measure of the "information gain" achieved by adding one new filter to our feature set S. This is roughly an L1-distance (vs. the L2-measure implicit in the Kullback-Leibler distance); the readers are referred to [42] for a detailed account.

Given S, Λ, and p(I; Λ, S) defined above, the information criterion (IC) for each filter F^(β) ∈ B \ S at step K + 1 is

IC(F^(β)) = (1/M) Σ_{n=1}^{M} ‖H^(β)(z; I_obs,n) − E_{p(I;Λ,S)}[H^(β)(z; I)]‖ − (1/M) Σ_{n=1}^{M} ‖H^(β)(z; I_obs,n) − μ^(β)_obs(z)‖;

we call the first term the average information gain (AIG) of choosing F^(β), and the second term the average information fluctuation (AIF).
Intuitively, AIG measures the average error between the filter responses in the database and the marginal distribution of the current model p(I; Λ, S). In practice, we need to sample p(I; Λ, S), thus synthesizing images {I_syn,n}, and estimate E_{p(I;Λ,S)}[H^(β)(z; I)] by the average histogram of the H^(β)(z; I_syn,n). For a filter F^(β), the bigger AIG is, the more information F^(β) captures, as it reports the error between the current model and the observations. AIF is a measure of disagreement among the observed images: the bigger AIF is, the less their responses to F^(β) have in common.
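Under these definitions, IC is just a difference of averaged L1 distances. A minimal sketch with hypothetical toy histograms (inputs assumed to be normalized histograms over the same bins):

```python
import numpy as np

def avg_l1(hists, reference):
    """Average L1 distance between per-image histograms and a reference."""
    return float(np.mean([np.abs(h - reference).sum() for h in hists]))

def information_criterion(obs_hists, model_marginal):
    """IC = AIG - AIF for one candidate filter.

    AIG: average error between observed histograms and the current model's
    marginal; AIF: average spread of the observed histograms around their mean.
    """
    mu_obs = np.mean(obs_hists, axis=0)
    aig = avg_l1(obs_hists, model_marginal)
    aif = avg_l1(obs_hists, mu_obs)
    return aig - aif, aig, aif

# Toy example: consistent observations far from a uniform model marginal,
# so the candidate filter should score a large positive IC.
obs = [np.array([0.7, 0.2, 0.1]), np.array([0.6, 0.3, 0.1])]
model = np.array([1 / 3, 1 / 3, 1 / 3])
ic, aig, aif = information_criterion(obs, model)
```

A filter whose responses agree across the database (small AIF) but disagree with the model (large AIG) is the most informative to add next.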
3 Experiments on natural images
This section presents experiments on learning prior models, and we start from exploring
the statistical properties of natural images.
Figure 2. Six of the 44 collected natural images.
3.1 Statistics of natural images
It is well known that natural images have statistical properties distinguishing them
from random noise images [28, 6, 24]. In our experiments, we collected a set of 44
natural images, six of which are shown in figure 2. These images are from various
sources, some digitized from books and postcards, and some from a Corel image
database. Our database includes both indoor and outdoor pictures, country and
urban scenes, and all images are normalized to have intensity between 0 and 31.
As stated in proposition (2), marginal distributions of linear filters alone are
capable of characterizing f(I). In the rest of this paper, we shall only study the
following bank B of linear filters.
1. An intensity filter δ(·).
2. Isotropic center-surround filters, i.e., the Laplacian of Gaussian filters,

LG(x, y; s) = const · (x² + y² − 2s²) e^{−(x² + y²)/(2s²)},

where s = √2 σ stands for the scale of the filter. We denote these filters by LG(s). A special filter is the one with a 3 × 3 window, the discrete Laplacian, which we denote by Δ.
3. Gabor filters with both sine and cosine components, which are models for the frequency and orientation sensitive simple cells,

Gabor(x, y; s, θ) = const · e^{−(4(x cos θ + y sin θ)² + (−x sin θ + y cos θ)²)/(2s²)} e^{−i(2π/s)(x cos θ + y sin θ)}.

It is a sine wave at frequency 2π/s modulated by an elongated Gaussian function and rotated at angle θ. We denote the real and imaginary parts of Gabor(x, y; s, θ) by Gcos(s, θ) and Gsin(s, θ). Two special Gsin(s, θ) filters are the gradients ∇x, ∇y.
4. We approximate large scale filters by filters of small window sizes at the high levels of the image pyramid, where the image at one level is a "blown-down" version (i.e., averaged in 2 × 2 blocks) of the image below.
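The center-surround and Gabor kernels above can be sketched as discrete arrays; the normalization (zero DC response) and window conventions below are illustrative assumptions, not necessarily the paper's exact constants:

```python
import numpy as np

def laplacian_of_gaussian(size, s):
    """Isotropic center-surround filter LG(s); the constant is chosen so
    that the kernel sums to zero (zero response to flat regions)."""
    r = size // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1].astype(float)
    g = (x**2 + y**2 - 2 * s**2) * np.exp(-(x**2 + y**2) / (2 * s**2))
    return g - g.mean()                     # enforce zero DC response

def gabor(size, s, theta):
    """Cosine/sine Gabor pair at scale s and orientation theta."""
    r = size // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1].astype(float)
    u = x * np.cos(theta) + y * np.sin(theta)       # along the wave
    v = -x * np.sin(theta) + y * np.cos(theta)      # across the wave
    envelope = np.exp(-(4 * u**2 + v**2) / (2 * s**2))  # elongated Gaussian
    return envelope * np.cos(2 * np.pi * u / s), envelope * np.sin(2 * np.pi * u / s)

log33 = laplacian_of_gaussian(3, np.sqrt(2) / 2)
gcos, gsin = gabor(7, 4.0, 0.0)
```

The sine component is odd under point reflection, so, like the gradients ∇x, ∇y, it responds to edges rather than to uniform intensity.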
We observed three important aspects of the statistics of natural images.
First, for some features, the statistics of natural images vary widely from image to image. We look at the δ(·) filter as in section (2.1). The average intensity histogram of the 44 images, μ^(0)_obs, is plotted in figure 3a, and figure 3b is the intensity histogram of an individual image (the temple image in figure 2). It appears that μ^(0)_obs(z) is close to a uniform distribution (figure 3c), while the difference between figure 3a and figure 3b is very big. Thus the IC for the filter δ(·) should be small (see table 1).
Second, for many other filters, the histograms of their responses are amazingly consistent across all 44 natural images, and they are very different from the histograms of noise images. For example, we look at the filter ∇x. Figure 4a is the average histogram of the 44 filtered natural images, figure 4b is the histogram of an individual filtered image (the same image as in figure 3b), and figure 4c is the histogram of a filtered uniform noise image.
The average histogram in figure 4a is very different from a Gaussian distribution. To see this, figure 5a plots it against a Gaussian curve (dashed) of the same mean and same variance. The histogram of natural images has higher kurtosis and heavier tails. Similar results are reported in [6]. To see the difference in the tails, figure 5b plots the logarithm of the two curves.

Figure 3. The intensity histograms in domain [0, 31]: a, averaged over 44 natural images; b, an individual natural image; c, a uniform noise image.

Figure 4. The histograms of ∇x I plotted in domain [−30, 30]: a, averaged over 44 natural images; b, an individual natural image; c, a uniform noise image.

Figure 5. a, The histogram of ∇x I plotted against a Gaussian curve (dashed) of the same mean and variance in domain [−15, 15]. b, The logarithm of the two curves in a.
Third, the statistics of natural images are essentially scale invariant with respect to some features. As an example, we look at the filters ∇x and ∇y. For each image I_obs,n, we build a pyramid with I^[s]_n being the image at the s-th layer. We set I^[0]_n = I_obs,n, and let I^[s+1]_n be the "blown-down" version of I^[s]_n, averaged in 2 × 2 blocks. The size of I^[s]_n is N/2^s × N/2^s.
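The pyramid construction above (2 × 2 block averaging) is only a few lines of code; array shapes are assumed divisible by 2 at each level:

```python
import numpy as np

def blow_down(image):
    """Average in 2x2 blocks: one level up the pyramid."""
    h, w = image.shape
    return image.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def pyramid(image, levels):
    """I[0] = image, I[s+1] = 2x2 block average of I[s]."""
    layers = [image]
    for _ in range(levels):
        layers.append(blow_down(layers[-1]))
    return layers

I0 = np.arange(64.0).reshape(8, 8)
pyr = pyramid(I0, 3)
```

Each layer halves the linear size while preserving the mean intensity, which is why, as noted later, the block averaging reduces the variance of additive noise by a factor of 4 per level.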
Figure 6. a, The average histograms μ_{x,s}(z) of ∇x I^[s] over the 44 natural images, for s = 0, 1, 2, 3. b, log μ_{x,s}(z). c, Histograms of a filtered uniform noise image at scales s = 0 (solid curve), 1, and 2 (dashed curve).

For the filter ∇x, let μ_{x,s}(z) be the average histogram of ∇x I^[s]_n over the 44 images. Figure 6a plots μ_{x,s}(z) for s = 0, 1, 2, 3, and they are almost identical. To see the tails more clearly, we display log μ_{x,s}(z), s = 0, 1, 2, 3, in figure 6b; the differences between them are still small. Similar results are observed for μ_{y,s}(z), the average histograms of ∇y I^[s]_n. In contrast, figure 6c plots the histograms of ∇x I^[s] with I^[s] being a uniform noise image at scales s = 0, 1, 2.
Combining the second and the third aspects above, we conclude that the histograms of ∇x I^[s]_n are very consistent across all observed natural images and across scales. The scale invariant property of natural images is largely caused by the following facts: 1) natural images contain objects of all sizes; 2) natural scenes are viewed and made into images at arbitrary distances.
3.2 Empirical prior models
In this section, we learn the prior models according to the theory proposed in
section (2), and analyze the efficiency of the filters quantitatively.
Experiment I.

Figure 7. The three learned potential functions for filters a, Δ; b, ∇x; and c, ∇y. Dashed curves are the fitting functions.
We start from S = ∅ and compute the AIF, AIG and IC for all filters in our filter bank. We list the results for a small number of filters in table 1. The filter Δ has the biggest IC (= 0.642), thus it is chosen as F^(1). An ME distribution p_1(I; Λ, S) is learned, and the information criterion for each filter is shown in the column headed p_1(I) in table 1. We notice that the IC for the filter Δ drops to near 0, and the IC also drops for other filters, because these filters are in general not independent of Δ. Some small filters like LG(1) have smaller ICs than others, due to higher correlations between them and Δ.
Figure 8. A typical sample of p_3(I) (256 × 256 pixels).
The big filters with larger ICs are investigated in Experiment II. In this experiment, we choose both ∇x and ∇y as F^(2), F^(3), as in other prior models. Therefore a prior model p_3(I) is learned with potential

U(I) = Σ_{(x,y)} [ λ^(1)(Δ I(x, y)) + λ^(2)(∇x I(x, y)) + λ^(3)(∇y I(x, y)) ].

The learned λ^(α)(z), α = 1, 2, 3, are plotted in figure 7; λ^(1)(z) is plotted for z ∈ [−9.5, 9.5]⁴ and λ^(2)(z), λ^(3)(z) for z ∈ [−22, 22]. These three curves are fitted with the dashed functions shown in figure 7. A synthesized image sampled from p_3(I) is displayed in figure 8.
So far, we have used three filters to characterize the statistics of natural images, and the synthesized image in figure 8 is still far from natural ones. In particular, even though the learned potential functions λ^(α)(z), α = 1, 2, 3, have tails that preserve intensity breaks, they only generate small speckles instead of the big regions and long edges one may expect. Based on this synthesized image, we compute the AIG and IC for all filters, and the results are listed in table 1 in the column p_3(I).

⁴ In fact, the range of μ^(1)_obs depends on N × N, the size of the synthesized image.

Table 1. The information criterion for filter selection: filter, filter size, AIF, and the AIG and IC of each filter under the successive models.
Experiment II.
It is clear that we need large-scale filters to do better. Rather than using the large scale Gabor filters, we chose to use ∇x and ∇y at 4 different scales and impose explicitly the scale invariant property that we find in natural images. Given an image I defined on an N × N lattice L, we build a pyramid in the same way as before. Let I^[s], s = 0, 1, 2, 3, be the four layers of the pyramid, let H_{x,s}(z) denote the histogram of ∇x I^[s](x, y), and let H_{y,s}(z) denote the histogram of ∇y I^[s](x, y). We ask for a probability model p(I) which satisfies, for s = 0, 1, 2, 3 and all z,

E_p[H_{x,s}(z)] = μ̄(z),  E_p[H_{y,s}(z)] = μ̄(z),

where μ̄(z) is the average of the observed histograms of ∇x I^[s] and ∇y I^[s] over all 44 natural images at all scales. This results in a maximum entropy distribution p_s(I) with energy of the following form,

U_s(I) = Σ_{s=0}^{3} Σ_{(x,y)∈L_s} [ λ_{x,s}(∇x I^[s](x, y)) + λ_{y,s}(∇y I^[s](x, y)) ],   (12)

where L_s is the image lattice at level s.
Figure 9 plots the learned λ_{x,s}(·), s = 0, ..., 3. At the beginning of the learning process, all λ_{x,s}(·), s = 0, ..., 3, are of the form displayed in figure 7, with low values around zero to encourage smoothness. As the learning proceeds, gradually λ_{x,3}(·) turns "upside down", with smaller values at the two tails. Then λ_{x,2}(·) and λ_{x,1}(·) turn upside down one by one. Similar results are observed for λ_{y,s}(·), s = 0, ..., 3. Figure 11 is a typical sample image from p_s(I). To demonstrate that it has scale invariant properties, in figure 10 we show the histograms H_{x,s} and log H_{x,s} of this synthesized image for s = 0, 1, 2, 3.
The learning process iterates for more than 10,000 sweeps. To verify the learned λ(·)'s, we restarted a homogeneous Markov chain from a noise image using the learned model, and found that the Markov chain goes to a perceptually similar image after 6,000 sweeps.
Remark 1. In figure 9, we notice that the λ_{x,s}(·), s = 1, 2, 3, are inverted, i.e., decreasing functions of |z|, distinguishing this model from other prior models in computer vision. First of all, as the image intensity has the finite range [0, 31], ∇x I^[s] is defined in [−31, 31]; therefore we may extend λ_{x,s} outside this interval so that p_s(I) is still well-defined. Second, such inverted potentials have significant meaning in visual computation. In image restoration, when a high intensity difference ∇x I^[s](x, y) is present, it is very likely to be noise if s = 0. However, this is not true for s = 1, 2, 3: additive noise can hardly pass to the high layers of the pyramid, because at each layer the 2 × 2 averaging operator reduces the variance of the noise by 4 times. When ∇x I^[s](x, y) is large for s = 1, 2, 3, it is more likely to be a true edge or object boundary. So in p_s(I), λ_{x,0}(·) suppresses noise at the first layer, while λ_{x,s}(·), s = 1, 2, 3, encourage sharp edges to form, and thus enhance blurred boundaries. We notice that regions of various scales emerge in figure 11, and the intensity contrasts are also higher at the boundaries. These appearances are missing in figure 8.

Figure 9. Learned potential functions λ_{x,s}(·), s = 0, ..., 3. The dashed curves are fitting functions.

Figure 10. a, The histograms of the synthesized image at 4 scales, almost indistinguishable. b, The logarithm of the histograms in a.

Figure 11. A typical sample of p_s(I) (384 × 384 pixels).
Remark 2. Based on the image in figure 11, we computed the IC and AIG for all filters, and list them under the column p_s(I) in table 1. We also compare the two extreme cases discussed in section (2.1). For the δ(·) filter, AIF is very big, and AIG is only slightly bigger than AIF. Since none of the prior models that we learned has any preference over the image intensity domain, the image intensity has a uniform distribution, but we limit it inside [0, 31]; thus the first row of table 1 has the same value for IC and AIG. For the filters F^(obs_n), AIF is the biggest among all filters, and AIG barely exceeds it. In both cases, the ICs are the two smallest.
4 Gibbs reaction-diffusion equations
4.1 From Gibbs distribution to reaction-diffusion equations
The empirical results in the previous section suggest that the forms of the potentials learned from images of real world scenes can be divided into two classes: upright curves, for which λ(z) is an even function increasing as |z| increases, and inverted curves, for which the opposite happens. A similar phenomenon was observed in our learned texture models [40].
In figure 9, the λ_{x,s}(z) are fit to the family of functions (see the dashed curves)

φ(ξ) = a ( 1 − 1 / (1 + (|ξ − ξ0| / b)^γ) ),

where ξ0 and b are respectively the translation and scaling constants, and |a| weights the contribution of the filter.
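The two regimes of this family can be made concrete; the parameter values below are illustrative, not the fitted ones:

```python
import numpy as np

def phi(xi, a, b, gamma, xi0=0.0):
    """phi(xi) = a * (1 - 1 / (1 + (|xi - xi0| / b) ** gamma)).

    a > 0 gives an upright (diffusion) potential increasing in |xi - xi0|;
    a < 0 gives an inverted (reaction) potential decreasing in |xi - xi0|.
    """
    return a * (1.0 - 1.0 / (1.0 + (np.abs(xi - xi0) / b) ** gamma))

xi = np.linspace(-20.0, 20.0, 401)
diffusion = phi(xi, a=1.0, b=5.0, gamma=0.7)   # cusp at 0 since gamma < 1
reaction = phi(xi, a=-1.0, b=5.0, gamma=1.5)   # inverted potential
```

Note that phi is bounded by |a|, so, unlike a quadratic potential, large filter responses (sharp edges) are penalized only mildly by the upright curves and actively rewarded by the inverted ones.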
In general, the Gibbs distribution learned from images in a given application has a potential function of the following form,

U(I) = Σ_{α=1}^{n_d} Σ_{(x,y)} φ^(α)(F^(α) * I(x, y)) + Σ_{α=n_d+1}^{K} Σ_{(x,y)} φ^(α)(F^(α) * I(x, y)),

where the filter set is now divided into two parts, S_d = {F^(α), α = 1, ..., n_d} with upright potentials and S_r = {F^(α), α = n_d + 1, ..., K} with inverted potentials. In most cases S_d consists of filters such as ∇x, ∇y, Δ, which capture the general smoothness of images, and S_r contains filters which characterize the prominent features of a class of images, e.g., Gabor filters at various orientations and scales which respond to the larger edges and bars.
Instead of defining a whole distribution with U, one can use U to set up a variational problem. In particular, one can attempt to minimize U by gradient descent. This leads to a non-linear parabolic partial differential equation,

∂I/∂t = − Σ_{α=1}^{n_d} F′^(α) * φ′^(α)(F^(α) * I) − Σ_{α=n_d+1}^{K} F′^(α) * φ′^(α)(F^(α) * I),   (14)

with F′^(α)(x, y) = F^(α)(−x, −y). Thus, starting from an input image I(x, y; 0) = I_in, the first sum diffuses the image by reducing the gradients, while the second sum forms patterns as the reaction term. We call equation (14) the Gibbs Reaction And Diffusion Equation (GRADE).
Since the computation of equation (14) involves convolving twice for each of the
selected filters, a conventional way for efficient computation is to build an image
pyramid so that filters with large scales and low frequencies can be scaled down into
small ones in the higher level of the image pyramid. This is appropriate especially
when the filters are selected from a bank of multiple scales, such as the Gabor filters
and the wavelet transforms. We adopt this representation in our experiments.
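A minimal sketch of one explicit gradient-descent step of this kind, using a single diffusion filter (∇x with a periodic boundary) and an upright potential; the filter choice, parameters, and step size are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

A, B, GAMMA = 1.0, 5.0, 2.0   # parameters of an upright (diffusion) potential

def phi(xi):
    """phi(xi) = a * (1 - 1 / (1 + (|xi|/b)**gamma))."""
    return A * (1.0 - 1.0 / (1.0 + (np.abs(xi) / B) ** GAMMA))

def phi_prime(xi):
    """Derivative of phi; the small guard avoids 0/0 at xi = 0."""
    t = (np.abs(xi) / B) ** GAMMA
    return A * GAMMA * t * np.sign(xi) / ((np.abs(xi) + 1e-12) * (1.0 + t) ** 2)

def grade_step(I, dt=0.1):
    """One explicit gradient-descent step on U(I) = sum phi(nabla_x I):
    dI/dt = -F' * phi'(F * I), with F a forward difference and F' its mirror."""
    r = np.roll(I, -1, axis=1) - I              # F * I: forward difference
    force = phi_prime(r)
    dU_dI = -force + np.roll(force, 1, axis=1)  # mirrored filter F'
    return I - dt * dU_dI

rng = np.random.default_rng(1)
I0 = rng.uniform(0, 31, size=(32, 32))
I = I0.copy()
for _ in range(50):
    I = grade_step(I)
```

Each step lowers the energy U, smoothing the image; adding a second, inverted-potential filter to the same loop would supply the reaction term.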
For an image I, let I^[s] be the image at level s of a pyramid, s = 0, ..., S, with I^[0] = I; the potential function becomes

U(I) = Σ_s Σ_α Σ_{(x,y)∈L_s} φ_s^(α)(F_s^(α) * I^[s](x, y)),

where F_s^(α) is the scaled-down filter applied at level s.
We can derive the diffusion equations similarly for this pyramidal representation.
4.2 Anisotropic diffusion and Gibbs reaction-diffusion
This section compares GRADEs with previous diffusion equations in vision.
In [25, 23], anisotropic diffusion equations for generating image scale spaces are introduced in the following form,

∂I/∂t = div( c(x, y; t) ∇I ),   (15)

where div is the divergence operator, i.e., div(V⃗) = ∂V_x/∂x + ∂V_y/∂y. Perona and Malik defined the heat conductivity c(x, y; t) as a function of the local gradients, for example,

c(x, y; t) = 1 / (1 + (|∇I| / b)²).   (16)

Equation (16) minimizes an energy function of the continuous form

E(I) = ∫∫ φ(|∇I|) dx dy,

where the corresponding φ(·) is plotted in figure 12. Similar forms of the energy functions are widely used as prior distributions [9, 4, 20, 11], and they can also be equivalently interpreted in the sense of robust statistics [13, 3].
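For comparison with GRADE, here is a minimal explicit discretization of equations (15)-(16) on a 4-neighbor lattice; the replicated borders, the value of b, and the step size are illustrative assumptions:

```python
import numpy as np

def perona_malik_step(I, b=10.0, dt=0.2):
    """One explicit step of dI/dt = div(c(|grad I|) grad I) with
    c(xi) = 1 / (1 + (xi/b)**2), using 4-neighbor differences.
    Borders are replicated, so the flux vanishes there."""
    north = np.vstack([I[:1], I[:-1]]) - I
    south = np.vstack([I[1:], I[-1:]]) - I
    west = np.hstack([I[:, :1], I[:, :-1]]) - I
    east = np.hstack([I[:, 1:], I[:, -1:]]) - I
    # c(d) * d for each neighbor difference d, summed into the divergence.
    flux = sum(d / (1.0 + (d / b) ** 2) for d in (north, south, west, east))
    return I + dt * flux

rng = np.random.default_rng(2)
noisy = rng.normal(0, 5, size=(32, 32)) + 16.0
smoothed = noisy.copy()
for _ in range(20):
    smoothed = perona_malik_step(smoothed)
```

Because the conductivity decays for large differences, small fluctuations are averaged out while strong edges conduct little "heat" and survive, which is the behavior the GRADE diffusion terms generalize.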
Figure 12. The energy function φ(·) corresponding to the Perona-Malik conductivity.
In the following, we address three important properties of the Gibbs reaction-diffusion
equations.
First, we note that equation (14) is an extension of equation (15) to a discrete lattice, obtained by defining a vector field

V⃗ = (V_1, ..., V_K), V_α = φ′^(α)(F^(α) * I),

and a divergence operator

div(V⃗) = Σ_{α=1}^{K} F′^(α) * V_α.

Thus equation (14) can be written as

∂I/∂t = −div(V⃗).

Compared to equation (15), which transfers the "heat" among adjacent pixels, equation (14) transfers the "heat" in many directions in a graph, and the conductivities are defined as functions of the local patterns, not just the local gradients.
Second, as shown in figure 13, φ(ξ) has a round tip for γ ≥ 1, and a cusp occurs at ξ = 0 for γ < 1, where the derivative φ′(0) can be any value in (−∞, ∞), as shown by the dotted curves in figure 13d. An interesting fact is that the potential function learned from real world images does have a cusp, as shown in figure 9a, where the best fit is γ = 0.7. This cusp forms because large parts of the objects in real world images have flat intensity appearances, and φ(ξ) with γ < 1 produces piecewise constant regions with much stronger forces than γ ≥ 1.
By continuity, φ′(ξ) can be assigned any value in the range [−ω, ω] for ξ near 0. In numerical simulations, for such ξ we take

φ′(ξ) = −σ if σ ∈ [−ω, ω],
Figure 13. The potential function φ(ξ) and its derivative φ′(ξ) for various values of γ.
where σ is the summation of the other terms in the differential equation, whose values are well defined. Intuitively, when γ < 1, a zero filter response F^(α) * I(x, y) = 0 forms an attractive basin in its neighborhood N^(α)(x, y), specified by the filter window of F^(α). For a pixel (u, v) ∈ N^(α)(x, y), the depth of the attractive basin is proportional to ω and to the filter coefficient linking (x, y) and (u, v). If a pixel is involved in multiple zero filter responses, it will accumulate the depths of the attractive basins generated by each filter. Thus, unless the absolute value of the driving force from the other well-defined terms is larger than the total depth of the attractive basins at (u, v), I(u, v) will stay unchanged. In the image restoration experiments in later sections, γ < 1 shows the best performance in forming piecewise constant regions.
Third, the learned potential functions confirm and improve the existing prior models and diffusion equations; but, more interestingly, reaction terms are discovered here for the first time, and they can produce patterns and enhance preferred features. We will demonstrate this property in the experiments below.
4.3 Gibbs reaction-diffusion for pattern formation
In the literature, there are many nonlinear PDEs for pattern formation, of which the
following two examples are interesting. (I) The Turing reaction-diffusion equation
which models the chemical mechanism of animal coats [33, 21]. Two canonical patterns
that the Turing equations can synthesize are leopard blobs and zebra stripes
[34, 38]. These equations are also applied to image processing such as image halftoning
[29] and a theoretical analysis can be found in [15]. (II) The Swindale equation
which simulates the development of the ocular dominance stripes in the visual cortex of cats and monkeys [30]. The simulated patterns are very similar to zebra stripes.
In this section, we show that these patterns can be easily generated with only 2 or 3 filters using GRADE. We run equation (14) starting with I(x, y; 0) a uniform noise image, and GRADE converges to a local minimum. Some synthesized texture patterns are displayed in figure 14.
For all six patterns in figure 14, we choose the Laplacian of Gaussian filter at level 0 of the image pyramid as the only diffusion filter F^(1), and we fix the parameters of its potential φ^(1)(ξ). For the three patterns in figures 14a, b, c, we choose an isotropic center-surround filter LG(·) of window size 7 × 7 pixels as the reaction filter F^(2) at level 1 of the image pyramid, and we set a = −6.0 in φ^(2)(ξ). The differences between these three patterns are caused by the translation constant ξ0: ξ0 = 0 forms patterns with symmetric appearances for both the black and the white parts, as shown in figure 14a. As ξ0 becomes negative, black blobs begin to form, as shown in figure 14b; a positive ξ0 generates blobs on the black background, as shown in figure 14c, where ξ0 = 6. The general smoothness of the images is attributed to the diffusion filter. Figure 14d is generated with two LG reaction filters, at level 1 and at level 2 respectively; therefore the GRADE creates blobs of mixed sizes. Similarly, we selected one cosine Gabor filter Gcos(4, θ) as the reaction filter for figure 14e, and figure 14f is generated with two reaction filters Gcos(4, θ) at two different orientations.

Figure 14. Leopard blobs and zebra stripes synthesized by GRADEs.
It seems that the leopard blobs and zebra stripes are among the most canonical patterns, in that they can be generated with easy choices of filters and parameters. As shown in [40], Gibbs distributions are capable of modeling a large variety of texture patterns, but the filters and the forms of ψ(ξ) have to be learned for each given texture pattern.
5 Image enhancement and clutter removal
So far we have studied the use of a single energy function U(I) either as the log
likelihood of a probability distribution at I or as a function of I to be minimized by
gradient descent. In image processing, we often need to model both the underlying
images and some distortions, and to maximize a posterior distribution. Suppose the
distortions are additive, i.e., an input image is,
I in = I +C:
In many applications, the distortion images C are not i.i.d. Gaussian noise, but clutter with structures, such as trees in front of a building or a military target. Such clutter is very hard to handle with edge detection and image segmentation algorithms.
We propose to model clutter by an extra Gibbs distribution, which can be learned from training images by the minimax entropy theory, as we did for the underlying image I. Thus an extra pyramidal representation for I_in − I, with its own Gibbs distribution, is adopted, as shown in figure 15. The resulting posterior distribution is still of the Gibbs form, with potential function

U(I | I_in) = U(I) + U_C(I_in − I),   (18)

where U_C(·) is the potential of the clutter distribution.
Thus the MAP estimate of I is the minimum of U. In the experiments, we use the Langevin equation for minimization, a variant of simulated annealing,

dI(x, y; t) = −(∂U/∂I) dt + √(2 T(t)) dw(x, y; t),   (19)

where w(x, y; t) is the standard Brownian motion process and T(t) is the "temperature", which controls the magnitude of the random fluctuation. Under mild conditions on U, equation (19) approaches a global minimum of U at a low temperature. Analyses of the convergence of such equations can be found in [14, 10, 8]. The computational load of the annealing process is notoriously high, but for applications like denoising, a fast decrease of temperature may not affect the final result very much.

Figure 15. The computational scheme for removing noise and clutter: the observed image is decomposed into a target image and a clutter image, each represented by an image pyramid with its own features.
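The discretized Langevin update is x ← x − ∇U(x)·dt + √(2 T(t) dt)·N(0, 1). A 1-D toy sketch with a quadratic energy and a simple cooling schedule (both illustrative assumptions, not the paper's choices):

```python
import numpy as np

def langevin_minimize(grad_u, x0, steps=5000, dt=0.01, t0=1.0):
    """Annealed Langevin dynamics:
    x <- x - grad_u(x) * dt + sqrt(2 * T(t) * dt) * N(0, 1),
    with an illustrative cooling schedule T(t) = t0 / (1 + t)."""
    rng = np.random.default_rng(3)
    x = np.asarray(x0, dtype=float)
    for t in range(steps):
        temp = t0 / (1.0 + t)
        x = x - grad_u(x) * dt + np.sqrt(2.0 * temp * dt) * rng.standard_normal(x.shape)
    return x

# Toy energy U(x) = 0.5 * ||x - 4||^2, with global minimum at x = 4.
grad = lambda x: x - 4.0
x_final = langevin_minimize(grad, np.zeros(8))
```

At high temperature the noise lets the state escape local minima; as T(t) decays, the dynamics reduce to plain gradient descent on U.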
Experiment I
In the first experiment, we take UC to be quadratic, i.e. C to be an i.i.d.
Gaussian noise image. We first compare the performance of the three prior models
potential functions are respectively:
U l
the 4-scale energy in equation (12) (22)
l () and / t () are the line-process and T-function displayed in figure 1b and 1c
respectively.
Figure 16 demonstrates the results. The original image is the lobster boat displayed in figure 2; it is normalized to have intensity in [0, 31], and Gaussian noise from N(0, 25) is added. The distorted image is displayed in figure 16a, where we keep the image boundary noise-free for convenience in handling the boundary condition. The restored images using p_l(I), p_t(I) and p_s(I) are shown in figures 16b, 16c, 16d respectively. p_s(I), which is the only model with a reaction term, appears to have the best effect in recovering the boat, especially the top of the boat, but it also enhances the water.
Experiment II
In many applications, i.i.d. Gaussian models for distortions are not sufficient. For example, in figure 17a, the tree branches in the foreground will make image segmentation and object recognition extremely difficult, because they cause strong edges across the image. Modeling such clutter is a challenging problem in many applications. In this paper, we only consider clutter as a two dimensional pattern, ignoring its geometry and 3D structure.
We collected a set of images of buildings and a set of images of trees, all against a clean background: the sky. For the tree images, we translate the image intensities so that the sky has intensity zero. In this case, since the trees are always darker than the buildings, the negative intensities will approximately take care of the occlusion effects.
We learn the Gibbs distributions for each set respectively in the pyramid; these models are then adopted as the prior distribution and the likelihood respectively, as in equation (18). We recover the underlying images by maximizing the posterior distribution using the stochastic process.
For example, figure 17b is computed using 6 filters, with 2 filters for I, {∇x, ∇y}, and 4 filters for I_C. In the potential for I_C, the functions φ(·) and ψ(·) are fit to the potential functions learned from the set of tree images.

Figure 16. a, The noise-distorted image; b, c, d, the images restored by the prior models p_l(I), p_t(I) and p_s(I), respectively.
Figure 17. a, The observed image; b, the restored image using 6 filters.
So the energy term φ(I_C(x, y)) forces zero intensity in the clutter image, while allowing large negative intensities for the dark tree branches.
Figure 18b is computed using 8 filters, with 4 filters for I and 4 filters for I_C; 13 filters are used for figure 19, where the clutter is much heavier.
As a comparison, we ran the anisotropic diffusion process [25] on figure 19a;
images at several iterations are displayed in figure 20. As the diffusion proceeds,
the image eventually becomes flat. A robust anisotropic diffusion equation was
recently reported in [2].
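As a reference for this comparison, the Perona-Malik diffusion of [25] can be sketched in a few lines of Python; the conduction function g(s) = exp(-(s/κ)²), the parameter values and the synthetic test image are illustrative assumptions, not the settings used for figure 20.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=0.2, lam=0.2):
    """Explicit Perona-Malik scheme: diffuse strongly in flat regions,
    weakly across strong edges, via conduction g(s) = exp(-(s/kappa)**2)."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)
    for _ in range(n_iter):
        # finite differences toward the four neighbours (periodic borders)
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u = u + lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

Run on a noisy piecewise-constant image, this smooths the flat regions while the step edge survives; with many iterations and a large κ every edge is eventually diffused away, which is the flattening effect described in the text.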
6 Conclusion
In this paper, we studied the statistics of natural images, based on which a novel
theory is proposed for learning the generic prior model - the universal statistics of
real world scenes. We argue that the same strategy developed in this paper can
be used in other applications, for example, learning probability models for MRI
images and 3D depth maps.

Figure 18: a. An observed image; b. the restored image using 8 filters.
Figure 19: a. The observed image; b. the restored image using 13 filters.
Figure 20: Images produced by anisotropic diffusion at successive iterations.
The learned prior models demonstrate some important properties, such as the
"inverted" potential terms for pattern formation and image enhancement. The
expressive power of the learned Gibbs distributions allows us to model structured
noise-clutter in natural scenes. Furthermore, our prior learning method provides a
novel framework for designing reaction-diffusion equations based on the observed
images in a given application, without modeling the physical or chemical processes
as people did before [33].
Although the synthesized images bear important features of natural images, they
are still far from realistic ones. In other words, these generic prior models can do very
little beyond image restoration. This is mainly due to the fact that all generic prior
models are assumed to be translation invariant, and this homogeneity assumption
is unrealistic. We call the generic prior models studied in this paper the first level
prior. A more sophisticated prior model should incorporate concepts like object
geometry, and we call such prior models second level priors. Diffusion equations
derived from these second level priors are studied in image segmentation [39], and
in scale space of shapes [16]. A discussion of some typical diffusion equations is
given in [22]. It is our hope that this article will stimulate further investigations
on building more realistic prior models as well as sophisticated PDEs for visual
computation.
--R
"A maximum entropy approach to natural language processing"
"Robust anisotropic diffusion"
"On the unification of line processes, outlier re- jection, and robust statistics with applications in early vision"
Visual Reconstruction.
"Relations between the statistics of natural images and the response properties of cortical cells"
"Theory of communication."
"On sampling methods and annealing algorithms"
"Stochastic relaxation, Gibbs distributions and the Bayesian restoration of images"
"Diffusion for global optimization"
"Constrained restoration and the recover of discontinu- ities"
"Parallel and deterministic algorithms for MRFs: surface reconstruction"
"A common framework for image segmentation"
"A renormalization group approach to image processing problems"
The theory and applications of reaction-diffusion equations
"Shapes, shocks, and deformations I: the components of two-dimensional shape and the reaction-diffusion space"
"On information and sufficiency"
"Probabilistic solution of ill-posed problems in computational vision"
"Robust regression methods for computer vision: a review"
"Optimal approximations by piecewise smooth functions and associated variational problems."
"A pre-pattern formation mechanism for mammalian coat markings"
"A general framework for geometry-driven evolution equations"
"Nonlinear image filtering with edge and corner enhance- ment"
"Natural image statistics and efficient coding"
"Scale-space and edge detection using anisotropic diffusion"
"Computational vision and regularization theory"
"Networks for approximation and learning"
"Statistics of natural images: scaling in the woods"
"M-lattice: from morphogenesis to image processing"
"A model for the formation of ocular dominance stripes"
"Multilevel computational processes for visual surface reconstruc- tion"
Solutions of Ill-posed Problems
"The chemical basis of morphogenesis"
"Generating textures on arbitrary surfaces using reaction-diffusion"
"Efficiency of model human image code"
"The renormalization group: critical phenonmena and the Knodo prob- lem,"
Image Analysis
"Reaction-diffusion textures"
"Region Competition: unifying snakes, region grow- ing, Bayes/MDL for multi-band image segmentation"
"Filters, Random Fields, and Minimax Entropy Towards a unified theory for texture modeling"
"Minimax entropy principle and its application to texture modeling"
"Learning generic prior models for visual computa- tion"
--TR
--CTR
Katy Streso , Francesco Lagona, Hidden Markov random field and frame modelling for TCA image analysis, Proceedings of the 24th IASTED international conference on Signal processing, pattern recognition, and applications, p.310-315, February 15-17, 2006, Innsbruck, Austria
Ulf Grenander , Anuj Srivastava, Probability Models for Clutter in Natural Images, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.23 n.4, p.424-429, April 2001
Yufang Bao , Hamid Krim, Smart Nonlinear Diffusion: A Probabilistic Approach, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.26 n.1, p.63-72, January 2004
Dmitry Datsenko , Michael Elad, Example-based single document image super-resolution: a global MAP approach with outlier rejection, Multidimensional Systems and Signal Processing, v.18 n.2-3, p.103-121, September 2007
Giuseppe Boccignone , Mario Ferraro , Terry Caelli, Generalized Spatio-Chromatic Diffusion, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.24 n.10, p.1298-1309, October 2002
Thang V. Pham , Arnold W. M. Smeulders, Object recognition with uncertain geometry and uncertain part detection, Computer Vision and Image Understanding, v.99 n.2, p.241-258, August 2005
Song-Chun Zhu, Stochastic Jump-Diffusion Process for Computing Medial Axes in Markov Random Fields, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.21 n.11, p.1158-1169, November 1999
Alan L. Yuille , James M. Coughlan, Fundamental Limits of Bayesian Inference: Order Parameters and Phase Transitions for Road Tracking, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.22 n.2, p.160-173, February 2000
Ying Nian Wu , Song Chun Zhu , Xiuwen Liu, Equivalence of Julesz Ensembles and FRAME Models, International Journal of Computer Vision, v.38 n.3, p.247-265, July-August 2000
Marc Sigelle, A Cumulant Expansion Technique for Simultaneous Markov Random Field Image Restoration and Hyperparameter Estimation, International Journal of Computer Vision, v.37 n.3, p.275-293, June 2000
Song-Chun Zhu, Embedding Gestalt Laws in Markov Random Fields, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.21 n.11, p.1170-1187, November 1999
Ann B. Lee , David Mumford , Jinggang Huang, Occlusion Models for Natural Images: A Statistical Study of a Scale-Invariant Dead Leaves Model, International Journal of Computer Vision, v.41 n.1-2, p.35-59, January-February 2001
Daniel Cremers , Florian Tischhuser , Joachim Weickert , Christoph Schnrr, Diffusion Snakes: Introducing Statistical Shape Knowledge into the Mumford-Shah Functional, International Journal of Computer Vision, v.50 n.3, p.295-313, December 2002
Song Chun Zhu , Xiu Wen Liu , Ying Nian Wu, Exploring Texture Ensembles by Efficient Markov Chain Monte Carlo-Toward a 'Trichromacy' Theory of Texture, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.22 n.6, p.554-569, June 2000
Scott Konishi , Alan L. Yuille , James M. Coughlan , Song Chun Zhu, Statistical Edge Detection: Learning and Evaluating Edge Cues, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.25 n.1, p.57-74, January
J. Sullivan , A. Blake , M. Isard , J. MacCormick, Bayesian Object Localisation in Images, International Journal of Computer Vision, v.44 n.2, p.111-135, September 2001
Norberto M. Grzywacz , Rosario M. Balboa, A Bayesian framework for sensory adaptation, Neural Computation, v.14 n.3, p.543-559, March 2002
Hedvig Sidenbladh , Michael J. Black, Learning the Statistics of People in Images and Video, International Journal of Computer Vision, v.54 n.1-3, p.181-207, August-September
Matthias Heiler , Christoph Schnrr, Natural Image Statistics for Natural Image Segmentation, International Journal of Computer Vision, v.63 n.1, p.5-19, June 2005
Kwang In Kim , Matthias O. Franz , Bernhard Scholkopf, Iterative Kernel Principal Component Analysis for Image Modeling, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.27 n.9, p.1351-1366, September 2005
T. Freeman , Egon C. Pasztor , Owen T. Carmichael, Learning Low-Level Vision, International Journal of Computer Vision, v.40 n.1, p.25-47, Oct. 2000
Stefan Roth , Michael J. Black, On the Spatial Statistics of Optical Flow, International Journal of Computer Vision, v.74 n.1, p.33-50, August 2007
Charles Kervrann , Mark Hoebeke , Alain Trubuil, Isophotes Selection and Reaction-Diffusion Model for Object Boundaries Estimation, International Journal of Computer Vision, v.50 n.1, p.63-94, October 2002
Jens Keuchel , Christoph Schnrr , Christian Schellewald , Daniel Cremers, Binary Partitioning, Perceptual Grouping, and Restoration with Semidefinite Programming, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.25 n.11, p.1364-1379, November
Rosario M. Balboa , Norberto M. Grzywacz, The Minimal Local-Asperity Hypothesis of Early Retinal Lateral Inhibition, Neural Computation, v.12 n.7, p.1485-1517, July 2000
A. Srivastava , A. B. Lee , E. P. Simoncelli , S.-C. Zhu, On Advances in Statistical Modeling of Natural Images, Journal of Mathematical Imaging and Vision, v.18 n.1, p.17-33, January | clutter modeling;reaction-diffusion;anisotropic diffusion;gibbs distribution;image restoration;texture synthesis;visual learning |
271577 | Proximal Minimization Methods with Generalized Bregman Functions. | We consider methods for minimizing a convex function f that generate a sequence {xk} by taking xk+1 to be an approximate minimizer of f(x)+Dh(x,xk)/ck, where ck > 0 and Dh is the D-function of a Bregman function h. Extensions are made to B-functions that generalize Bregman functions and cover more applications. Convergence is established under criteria amenable to implementation. Applications are made to nonquadratic multiplier methods for nonlinear programs. | Introduction
We consider the convex minimization problem

    minimize f(x) over x ∈ X,   (1.1)

where f : IR^n → (−∞, +∞] is a closed proper convex function and X is a nonempty closed
convex set in IR^n. One method for solving (1.1) is the proximal point algorithm (PPA)
[Mar70, Roc76b], which generates a sequence

    x^{k+1} = arg min_{x ∈ X} { f(x) + |x − x^k|²/2c_k },   (1.2)

starting from any point x¹, where |·| is the Euclidean norm and {c_k} is a sequence
of positive numbers. The convergence and applications of the PPA are discussed, e.g., in
[Aus86, CoL93, EcB92, GoT89, G-ul91, Lem89, Roc76a, Roc76b].
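As a concrete illustration (not part of the paper's development), the PPA step (1.2) can be computed in closed form for simple f; for f(x) = |x| on X = IR the proximal subproblem min_y { |y| + (y − x)²/2c } is solved by soft-thresholding. A minimal Python sketch:

```python
def prox_abs(x, c):
    """Exact minimizer of |y| + (y - x)**2 / (2*c) over y: soft-thresholding."""
    if x > c:
        return x - c
    if x < -c:
        return x + c
    return 0.0

def proximal_point(x0, c_seq):
    """PPA iteration (1.2) specialized to f(x) = |x|, X = the real line."""
    x = x0
    for c in c_seq:
        x = prox_abs(x, c)
    return x
```

Since |x| has a sharp minimum at 0, the iterates hit the minimizer exactly after finitely many steps whenever inf_k c_k > 0, the finite-termination phenomenon studied in §6.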
Several proposals have been made for replacing the quadratic term in (1.2) with other
distance-like functions [BeT94, CeZ92, ChT93, Eck93, Egg90, Ius95, IuT93, Teb92, TsB93].
In [CeZ92], (1.2) is replaced by

    x^{k+1} ∈ arg min_{x ∈ X} { f(x) + D_h(x, x^k)/c_k },   (1.3)
(Footnote: Research supported by the State Committee for Scientific Research under Grant 8S50502206. Systems Research Institute, Newelska 6, 01-447 Warsaw, Poland, kiwiel@ibspan.waw.pl.)
where D_h(x, y) = h(x) − h(y) − ⟨∇h(y), x − y⟩ is the D-function of a Bregman function
h [Bre67, CeL81], which is continuous, strictly convex and differentiable in the interior of
its domain (see §2 for a full definition); here ⟨·, ·⟩ is the usual inner product and ∇h is
the gradient of h. Accordingly, this is called Bregman proximal minimization (BPM). The
convergence of the BPM method is discussed in [CeZ92, ChT93, Eck93, Ius95, TsB93], a
generalization for finding zeros of monotone operators is given in [Eck93], and applications
to convex programming are presented in [Cha94, Eck93, Ius95, NiZ92, NiZ93a, NiZ93b].
This paper discusses convergence of the BPM method using the B-functions of [Kiw94]
that generalize Bregman functions, being possibly nondifferentiable and infinite on the
boundary of their domains (cf. §2). Then (1.3) involves D_h^k(x, x^k) = h(x) − h(x^k) −
⟨γ^k, x − x^k⟩, where γ^k is a subgradient of h at x^k. We establish for the first time
convergence of versions of the BPM method that relax the requirement for exact minimization
in (1.3). (The alternative approach of [Fl-a94], being restricted to Bregman functions with
Lipschitz continuous gradients, cannot handle the applications of §§7-9.) We note that in
several important applications, strictly convex problems of the form (1.3) may be solved
by dual ascent methods; cf. references in [Kiw94, Tse90].
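To make (1.3) concrete, here is a small numerical sketch (an illustration under assumed data, not the paper's algorithm) using the 'x log x' entropy h(x) = Σ_i x_i log x_i, whose D-function is the Kullback-Leibler divergence; for a linear objective f(x) = ⟨a, x⟩ the Bregman proximal step then has a closed multiplicative form:

```python
import math

def kl(x, y):
    """D-function of h(x) = sum_i x_i*log(x_i): the Kullback-Leibler
    divergence D_h(x, y) = sum_i x_i*log(x_i/y_i) - x_i + y_i."""
    return sum(xi * math.log(xi / yi) - xi + yi for xi, yi in zip(x, y))

def entropic_prox_linear(a, y, c):
    """Minimizer of <a, x> + D_h(x, y)/c over x > 0: setting the gradient
    a_i + (log x_i - log y_i)/c to zero gives x_i = y_i * exp(-c * a_i)."""
    return [yi * math.exp(-c * ai) for ai, yi in zip(a, y)]
```

One step multiplies the previous point componentwise, which keeps the iterates automatically in the positive orthant, the reason such D-functions yield interior-point-like behaviour.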
The application of the BPM method to the dual functional of a convex program yields
nonquadratic multiplier methods [Eck93, Teb92]. By allowing h to have singularities, we
extend this class of methods to include, e.g., shifted Frisch and Carroll barrier function
methods [FiM68]. We show that our criteria for inexact minimization can be implemented
similarly as in the nonquadratic multiplier methods of [Ber82, Chap. 5]. Our convergence
results extend those in [Eck93, TsB93] to quite general shifted penalty functions, including
twice continuously differentiable ones.
We add that the continuing interest in nonquadratic modified Lagrangians stems from
the fact that, in contrast with the quadratic one, they are twice continuously differentiable,
and this facilitates their minimization [Ber82, BTYZ92, BrS93, BrS94, CGT92, CGT94,
GoT89, IST94, JeP94, Kiw96, NPS94, Pol92, PoT94, Teb92, TsB93]. By the way, our
convergence results seem stronger than those in [IST94, PoT94] for modified barrier
functions, resulting from a dual application of (1.3) with D_h^k replaced by an entropy-like
φ-divergence.
The paper is organized as follows. In §2 we recall the definitions of B-functions and
Bregman functions and state their elementary properties. In §3 we present an inexact
BPM method. Its global convergence under various conditions is established in §§4-5.
In §6 we show that the exact BPM method converges finitely when (1.1) enjoys a sharp
minimum property. Applications to multiplier methods are given in §7. Convergence of
general multiplier methods is studied in §8, while §9 focuses on two classes of shifted
penalty methods. Additional aspects of multiplier methods are discussed in §10. The
Appendix contains proofs of certain technical results.
Our notation and terminology mostly follow [Roc70]. IR^m_+ and IR^m_{++} are the nonnegative
and positive orthants of IR^m respectively. For any set C in IR^n, cl C, int C, ri C and bd C
denote the closure, interior, relative interior and boundary of C respectively. δ_C is the
indicator function of C (δ_C(x) = 0 if x ∈ C, +∞ otherwise), and σ_C(x) = sup_{y ∈ C} ⟨x, y⟩ is the
support function of C. For any closed proper convex function f on IR^n and x in its effective
domain C_f = {x : f(x) < ∞}, ∂_ε f(x) = {p : f(·) ≥ f(x) + ⟨p, · − x⟩ − ε} is the ε-
subdifferential of f at x for each ε ≥ 0, ∂f(x) = ∂_0 f(x) is the ordinary subdifferential of f
at x, and f′(x; d) denotes the derivative of f at x in any direction d.
By [Roc70, Thms 23.1-23.2], f′(x; d) ≥ −f′(x; −d), f′(x; d) = sup{⟨p, d⟩ : p ∈ ∂f(x)}, and

    f(y) ≥ f(x) + f′(x; y − x) for all y.   (1.4)

The domain and range of ∂f are denoted by C_{∂f} and im ∂f respectively. By [Roc70, Thm
23.4], ri C_f ⊂ C_{∂f} ⊂ C_f. f is called cofinite when its conjugate f*(y) = sup_x {⟨y, x⟩ − f(x)}
is real-valued. A proper convex function f is called essentially smooth if int C_f ≠ ∅ and f is
differentiable on int C_f, with |∇f(x^k)| → ∞ for every sequence {x^k} ⊂ int C_f converging
to a boundary point of C_f. If f is closed proper convex, its recession function f0⁺ is
closed proper convex and positively homogeneous [Roc70, Thm 8.5].
B-functions
We first recall the definitions of B-functions [Kiw94] and of Bregman functions [CeL81].
For any convex function h on IR^n, we define its difference functions

    D♭_h(x, y) = h(x) − h(y) − h′(y; x − y),  D♯_h(x, y) = h(x) − h(y) + h′(y; y − x).   (2.1)-(2.2)

By convexity (cf. (1.4)), h(x) − h(y) ≥ h′(y; x − y) ≥ −h′(y; y − x), so 0 ≤ D♭_h ≤ D♯_h.
D♭_h and D♯_h generalize the usual D-function of h [Bre67, CeL81], defined by

    D_h(x, y) = h(x) − h(y) − ⟨∇h(y), x − y⟩,   (2.3)

since D♭_h = D♯_h = D_h wherever h is differentiable.
Definition 2.1. A closed proper (possibly nondifferentiable) convex function h is called
a B-function (generalized Bregman function) if:
(a) h is strictly convex on C_h.
(b) h is continuous on C_h.
(c) For every α ∈ IR and x ∈ C_h, the set L¹_h(x; α) = {y ∈ C_∂h : D♭_h(x, y) ≤ α} is bounded.
(d) For every α ∈ IR and x ∈ C_h, if {y^k} ⊂ L¹_h(x; α) is a convergent sequence with limit
y^∞ ∈ C_h, then D♭_h(y^∞, y^k) → 0.
Definition 2.2. Let S be a nonempty open convex set in IR^n. Then h : cl S → IR is called
a Bregman function with zone S, denoted by h ∈ B(S), if:
(i) h is continuously differentiable on S.
(ii) h is strictly convex on cl S.
(iii) h is continuous on cl S.
(iv) For every α ∈ IR, x ∈ cl S and y ∈ S, the sets L¹(x; α) = {y ∈ S : D_h(x, y) ≤ α} and
L²(y; α) = {x ∈ cl S : D_h(x, y) ≤ α} are bounded.
(v) If {y^k} ⊂ S is a convergent sequence with limit y*, then D_h(y*, y^k) → 0.
(vi) If {y^k} ⊂ S converges to y*, {x^k} ⊂ cl S is bounded and D_h(x^k, y^k) → 0, then x^k → y*.
(Note that the extension e of h to IR n , defined by
is a B-function with C
ri C
e (\Delta;
e (\Delta;
D♭_h and D♯_h are used like distances, because for x ∈ C_h and y ∈ C_∂h they are
nonnegative, and D♭_h(x, y) = 0 iff x = y, by strict convexity. Definition 2.2 (due
to [CeL81]), which requires that h be finite-valued on cl S, does not cover Burg's entropy
[CDPI91]. Our Definition 2.1 captures features of h essential for algorithmic purposes. As
shown in [Kiw94], condition (b) implies (c) if h is cofinite. Sometimes one may verify the
following stronger version of condition (d):

    C_∂h ⊃ {y^k} → y^∞ ∈ C_h  implies  D♭_h(y^∞, y^k) → 0,   (2.5)

by using the following three lemmas proven in [Kiw94].
Lemma 2.3. (a) Let h be a closed proper convex function on IR n , and let S 6= ; be a
compact subset of ri C h . Then there exists ff 2 IR s.t. joe @h(y)
(b) Let is the indicator function of a convex polyhedral set S 6= ; in IR n .
Then h satisfies condition (2.5).
(c) Let h be a proper polyhedral convex function on IR n . Then h satisfies condition (2.5).
(d) Let h be a closed proper convex function on IR. Then h is continuous on C h , and
fy k g ae C h .
Lemma 2.4. (a) Let
are closed proper convex functions
s.t. h are polyhedral and " j
condition (c) of Def. 2.1, then so does h. If condition (d) of Def.
2.1 or (2.5), then so does h. If h 1 is a B-function, h are continuous on C
and satisfy condition (d) of Def. 2.1, then h is a B-function. In particular, h
is a B-function if so are h
(b) Let h be B-functions s.t. " j
ri C h i 6= ;. Then
function.
(c) Let h 1 be a B-function and let h 2 be a closed proper convex function s.t. C h 1
ae ri C h 2
is a B-function.
(d) Let h closed proper strictly convex functions on IR s.t. L 1
(t; ff) is bounded
for any t; ff 2 IR,
Lemma 2.5. Let h be a proper convex function on IR. Then L¹_h(x; α) is bounded for each
x ∈ C_h and α ∈ IR.
Lemma 2.6. (a) If / is a B-function on IR then / is essentially smooth and C /
(b) If OE closed proper convex essentially smooth and C
C OE then OE
is a B-function with ri C OE ae imrOE ae C OE .
Proof. (a): This follows from Def. 2.1, Lem. 2.5 and [Roc70, Thm 26.3]. (b): By [Roc70,
Thms 23.4, 23.5 and 26.1], ri C OE ae C @OE strictly convex
on C @OE , and hence on C OE by an elementary argument. Since OE is closed proper convex
and OE Thm 12.2], the conclusion follows from Lems. 2.3(d) and 2.5.
Examples 2.7. Let h(x) = Σ_{i=1}^n ψ(x_i) for the following choices of ψ on IR. In each of the
examples, it can be verified that h is an essentially smooth B-function.
1. ψ(t) = |t|^α/α with α > 1; for α = 2, D_h(x, y) = |x − y|²/2.
2. ψ(t) = −t^α/α if t ≥ 0 (0 < α < 1), ψ(t) = ∞ otherwise [Roc70, p. 106].
3. ψ(t) = t log t if t ≥ 0 (with 0 log 0 = 0), ψ(t) = ∞ otherwise ('x log x'-entropy) [Bre67].
4. ψ(t) = t log t − t if t ≥ 0, ψ(t) = ∞ otherwise (Kullback-Leibler entropy)
[Roc70, p. 105]; D_h is the Kullback-Leibler entropy.
5. ψ(t) = −(1 − t²)^{1/2} if |t| ≤ 1, ψ(t) = ∞ otherwise.
6. ψ(t) = −log t if t > 0, ψ(t) = ∞ otherwise (Burg's entropy) [CDPI91].
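The distance-like behaviour of the corresponding D-functions, D_h(x, y) ≥ 0 with equality iff x = y, can be checked numerically for three of the scalar examples above (a sketch; the closed forms are standard):

```python
import math

def d_energy(x, y):
    # Example 1 with alpha = 2, h(t) = t**2/2: D_h(x, y) = (x - y)**2 / 2
    return (x - y) ** 2 / 2

def d_xlogx(x, y):
    # Example 3, h(t) = t*log(t): D_h(x, y) = x*log(x/y) - x + y
    return x * math.log(x / y) - x + y

def d_burg(x, y):
    # Example 6, h(t) = -log(t): D_h(x, y) = x/y - log(x/y) - 1
    return x / y - math.log(x / y) - 1
```

Note that none of these is symmetric in (x, y) in general, which is why D-functions are "distance-like" rather than metrics.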
3 The BPM method
We make the following standing assumptions about problem (1.1) and the algorithm.
Assumption 3.1. (i) f is a closed proper convex function.
(ii) X is a nonempty closed convex set.
(iii) h is a (possibly nonsmooth) B-function.
(iv) f_X = f + δ_X is the essential objective of (1.1), with C_{f_X} ∩ C_h ≠ ∅.
(v) {c_k} is a sequence of positive numbers satisfying s_l = Σ_{k=1}^l c_k → ∞.
(vi) {ε_k} is a sequence of nonnegative numbers satisfying lim_{l→∞} Σ_{k=1}^l c_k ε_k / s_l = 0.
Consider the following inexact BPM method. At iteration k ≥ 1, having x^k ∈ C_∂h and
γ^k ∈ ∂h(x^k) (3.1), find x^{k+1}, γ^{k+1} and p^{k+1} satisfying conditions (3.2)-(3.6) of the form
p^{k+1} ∈ ∂_{ε_k} f_X(x^{k+1}), γ^{k+1} = γ^k − c_k p^{k+1} and γ^{k+1} ∈ ∂h(x^{k+1}).
We note that x^{k+1} ≈ arg min { f_X + D_h^k(·, x^k)/c_k } with
D_h^k(x, x^k) = h(x) − h(x^k) − ⟨γ^k, x − x^k⟩,
by (2.1), (2.2), (3.2) and (3.3); in fact x^{k+1} is an ε_k-minimizer of φ_k = f_X + D_h^k(·, x^k)/c_k,
as shown after the following (well-known) technical result (cf. [Roc70, Thm 27.1]).
Lemma 3.2. A closed proper and strictly convex function φ on IR^n has a unique minimizer
iff φ is inf-compact, i.e., the α-level set L_φ(α) = {x : φ(x) ≤ α} is bounded for any α ∈ IR,
and this holds iff L_φ(α) is nonempty and bounded for one α ∈ IR.
Proof. If x̄ ∈ Arg min φ then, by strict convexity of φ, L_φ(φ(x̄)) = {x̄} is bounded, so φ
is inf-compact (cf. [Roc70, Cor. 8.7.1]). If for some α ∈ IR, L_φ(α) ≠ ∅ is bounded then it
is closed (cf. [Roc70, Thm 7.1]) and contains Arg min φ ≠ ∅ because φ is closed.
Lemma 3.3. Under the above assumptions, we have:
(i) OE k is closed proper and strictly convex.
is cofinite. In particular, OE k is inf-compact if (fl ri C f
(v) If OE k is inf-compact and either ri C f X
ri C h 6= ;, or C f X
ri C h 6= ; and fX is
polyhedral, then there exist - x
if C @f X ae -
C h or C
essentially smooth.
(vi) The assumptions of (v) hold if either ri C f X
ae -
ae -
and
Proof. (i) Since f , ffi X and h are closed proper convex, so are
h (\Delta; x k )=c k (cf. [Roc70, Thm 9.3]), having nonempty domains C f " X, C h and
respectively (cf. Assumption 3.1(iv)). D k
are strictly convex, since
so is h (cf. Def. 2.1(a)).
(ii) For any x, add the inequality D k
(cf.
(3.3), (3.4)) divided by c k to fX (x) - fX
(cf. (3.6)) and use
(3.5) to get OE k (x) - OE k
(iii) By part (i),
closed proper strictly convex, and L / by
strict convexity of h (cf. Def. 2.1(a), (2.2) and (1.4)), so / is inf-compact (cf. Lem. 3.2).
(cf. (3.9)). The last set is bounded, since / is inf-compact, so OE k is inf-compact by part
(i) and Lem. 3.2.
~
closed proper and strictly convex (so is D k
cf. part (i)), and ~
Thm 23.8]). Hence ~
/ is inf-compact (cf. Lem. 3.2), and so is OE k , since OE k - ~
/ from
yi. To see that strict convexity of h (cf. Def. 2.1(a)) implies
C h , we note that -
Thms 26.3 and 26.1], and @h
by [Roc70, Thm 23.5], so that C @h
is cofinite. The second assertion follows from ri C f
ae C @f
(v) By part (i) and Lem. 3.2, - x defined. The rest follows from
(cf. (3.8)), the fact due to
our assumptions on C f X
and ri C h (cf. [Roc70, Thm 23.8]), and [Roc70, Thm 26.1].
(vi) If inf is inf-compact by parts (iii)-(iv). If
ri
ri ri C ri C f X 6= ;, since C f X Assumption 3.1(iv)).
Remark 3.4. Lemma 3.3(v,vi) states conditions under which the exact BPM method
(with x
in (3.6)) is well defined. Our conditions are
slightly weaker than those in [Eck93, Thm 5], which correspond to ri C f X
ae -
cl
being finite, continuous and bounded below on X.
Example 3.5. Let
implies that -
x k+1 is well defined.
Example 3.6. Let
ri C f
ri C f " -
const for c Although h is not a
Bregman function, this is a counterexample to [Teb92, Thm 3.1].
4 Convergence of the BPM method
We first derive a global convergence rate estimate for the BPM method. We follow the
analysis of [ChT93], which generalized that in [G-ul91]. Let s_l = Σ_{k=1}^l c_k.
Lemma 4.1. For all x 2 C h and k - l, we have
l
l
l
Proof. The equality in (4.1) follows from (3.3), and the inequality from
(cf. (3.5)) and p k+1 2 @ ffl k fX
since c k ? 0. (4.2) is a consequence of (4.1). Summing (4.1) over l we obtain
l
l
l
Use fX in (4.5) to get (4.3). (4.4) follows from (4.3)
and the fact D k
We shall need the following two results proven in [TsB91].
Lemma 4.2 ([TsB91, Lem. 1]). Let h : IR^n → (−∞, +∞] be a closed proper convex function
continuous on C h . Then:
(a) For any y 2 C h , there exists ffl ? 0 s.t. closed.
(b) For any y 2 C h and z s.t. y any sequences y k ! y and z k ! z s.t.
Lemma 4.3. Let h : IR^n → (−∞, +∞] be a closed proper convex function continuous on
C_h. If {y^k} ⊂ C_h is a bounded sequence s.t., for some y ∈ C_h, {h(y^k)} is
bounded from below, then {h(y^k)} is bounded and any limit point of {y^k} is in C_h.
Proof. Use the final paragraph of the proof of [TsB91, Lem. 2].
Lemmas 4.2-4.3 could be expressed in terms of the following analogue of (2.1)
Lemma 4.4. Let h : IR^n → (−∞, +∞] be a closed proper strictly convex function continuous
on C_h. If y ∈ C_h and {y^k} is a bounded sequence in C_h s.t. D⁰_h(y, y^k) → 0, then y^k → y.
Proof. Let y 1 be the limit of a subsequence fy k g k2K . Since h(y k
h(y
by Lem. 4.3 and h(y k ) K
\Gamma! h(y 1 ) by continuity of h
on C h . Then by Lem. 4.2(b),
yields strict convexity of h. Hence
By (1.4), (3.2), (3.3), (2.2) and (4.6), for all k
Lemma 4.5. If
is bounded and fx k g ae L 1
(ii) Every limit point of fx k g is in C h .
converges to some x
Proof. (i) We have D l
ae C @h (cf. (3.1)), so fx k g ae L 1
h (x; ff), a bounded set (cf. Def. 2.1(c)).
(4.6), (4.7)), so the desired conclusion follows from continuity of h on C h (cf. Def. 2.1(b)),
being bounded in C h (cf. (3.1) and part (i)) and Lem. 4.3.
(iii) By parts (i)-(ii), a subsequence fx l j g converges to some x
But fX and fX is
closed (cf. Assumption 3.1(i,ii)). Hence for l ? l j , D l
(cf. (4.2)) with
Finally, if x does
not converge, it has a limit point x 0 6= x 1 (cf. parts (i)-(ii)), and replacing x and x 1 by
respectively in the preceding argument yields a contradiction.
We may now prove our main result for the inexact BPM descent method (3.1)-(3.7).
Theorem 4.6. Suppose Assumption 3.1(i-ii,iv-v) holds with h closed proper convex.
(a) If lim l!1
Hence
ae C h . If ri C h " ri C f X
cl C h fX . If ri C f X
ae cl C h (e.g., C @f X
ae cl C h ) then
cl C h oe cl C f X
and Arg min X f ae cl C h .
(b) If h is a B-function, fX
fX is
nonempty then fx k g converges to some x
(c) If fX
Proof. (a) For any x 2 C h , taking the limit in (4.4) yields lim l!1 fX using
Assumption 3.1(v)) and
Hence fX
7.3.2]). If ri C h "
ri
(cf. [Roc70, Thm 6.5]) and inf C h cl C h fX , so
cl C h fX . If ri C f X
ae cl C h then cl C f X
ae cl C h (cf. [Roc70, Thm 6.5]).
(b) If x 2 X then fX
(c) If jx k j 6! 1, fx k g has a limit point x with fX (x) - inf C h fX / fX
closed; cf. Assumption 3.1(i,ii)), so C f
Remark 4.7. For the exact BPM method (with ffl k j 0), Thm 4.6(a,b) subsumes [ChT93,
Thm 3.4], which assumes ri C f X
ae -
C h and C cl C h . Thm 4.6(b,c) strengthens [Eck93,
Thm 5], which only shows that fx k g is unbounded if cl C f X
ae -
and Lem. 3.3 subsume [Ius95, Thm 4.1], which assumes that h is essentially smooth, f is
continuous on C f ,
cl C h , Arg min X f
For choosing fffl k g (cf. Assumption 3.1(vi)), one may use the following simple result.
Lemma 4.8. (i) If ffl k ! 0 then
(ii) If
Proof. (i) For any ffl ? 0, pick - k and - l ? - k s.t. ffl k - ffl for all k - k and
for all l - l; then
(ii) We have
5 Convergence of a nondescent BPM method
In certain applications (cf. x7) it may be difficult to satisfy the descent requirement (3.7).
Hence we now consider a nondescent BPM method, in which (3.7) is replaced by
By Lem. 3.3(ii), (5.1) holds automatically, since it means OE k
Lemma 5.1. For all x 2 C h and k - l, we have
l
l
l
Proof. (4.1)-(4.2) still hold. (5.2) follows from D k
. Multiplying this inequality by s
and summing over
l
l
l
s
Subtract (5.5) from (4.5) and rearrange, using s to get (5.3). (5.4) follows
from (5.3) and the fact D k
Theorem 5.2. Suppose Assumption 3.1(i-ii,iv-v) holds with h closed proper convex.
(a) If
5.3 for sufficient conditions), then fX
Hence the assertions of Theorem 4.6(a) hold.
(b) If h is a B-function, fX
fX is
nonempty then fx k g converges to some x
ae C h .
(c) If fX
ae C h and X
Proof. (a) The upper limit in (5.4) for any x 2 C h yields lim sup l!1 fX
using
(b) If x 2 X then fX Assertions
(i)-(iii) of Lem. 4.5 still hold, since the proofs of (i)-(ii) remain valid, whereas in the proof
of (iii) we have x
and fX
(c) Use the proof of Thm 4.6(c).
Lemma 5.3. (i) Let fff k g, ffi k g and f" k g be sequences in IR s.t. 0 - ff
(ii) If
Proof. (i) See, e.g., [Pol83, Lem. 2.2.3].
(ii) Use part (i) with ff l =
(iii) Use part (ii) with c l =s l 2 [c min =lc
6 Finite termination for sharp minima
We now extend to the exact BPM method the finite convergence property of the PPA in
the case of sharp minima (cf. [Fer91, Roc76b] and [BuF93]).
Theorem 6.1. Let f have a sharp minimum on X, i.e., X̄ = Arg min_X f ≠ ∅ and there
exists β > 0 s.t. f_X(x) ≥ min_X f + β d(x, X̄) for all x. Consider the exact BPM
method applied to (1.1) with a B-function h s.t. C_{f_X} ⊂ C_∇h, ε_k ≡ 0 and inf_k c_k > 0. Then
there exists k s.t. x^k ∈ Arg min_X f.
Proof. By Thm 4.6, x
continuity of rh on Crh [Roc70, Thm 25.5]) and @fX
(cf. (3.5)-(3.6)). But if
for
Hence for some k, jp
We note that piecewise linear programs have sharp minima, if any (cf. [Ber82, §5.4]).
7 Inexact multiplier methods
Following [Eck93, Teb92], this section considers the application of the BPM method to
dual formulations of convex programs of the form presented in [Roc70, §28]:

    minimize f(x), subject to g_i(x) ≤ 0, i = 1, . . . , m,   (7.1)

under the following
Assumption 7.1. f, g_1, . . . , g_m are closed proper convex functions on IR^n with C_f ⊂ ∩_i C_{g_i}
and ri C_f ⊂ ∩_i ri C_{g_i}.
Letting g = (g_1, . . . , g_m), we define the Lagrangian of (7.1), L(x, λ) = f(x) + ⟨λ, g(x)⟩
for λ ∈ IR^m_+, and the dual functional d(λ) = inf_x L(x, λ). Assume that d(λ̄) > −∞ for
some λ̄ ≥ 0. The dual problem to (7.1) is to maximize d, or equivalently to
minimize q(λ) = −d(λ) over λ ≥ 0, where −d is a closed proper convex function. We will apply
the BPM method to this problem, using some B-function h on IR^m.
We assume that h_+ = h + δ_{IR^m_+} is a B-function (cf. Lem. 2.4(a)). The
monotone conjugate of h (cf. [Roc70, p. 111]), defined by h⁺(λ) = sup{⟨λ, u⟩ − h(u) : u ≥ 0},
is nondecreasing; h⁺ coincides with the convex conjugate h_+^* of h_+, since
h_+ = h + δ_{IR^m_+}(·). We need
the following variation on [Eck93, Lem. A3]. Its proof is given in the Appendix.
Lemma 7.2. If h is a closed proper essentially strictly convex function on IR m with
ri C h 6= ;, then h + is closed proper convex and essentially smooth, @h
all is continuous on C @h
where I is the identity operator and N IR m
is the normal cone operator of IR m
i.e.,
additionally
im@h oe IR m
? then h+ is cofinite, C continuously differentiable.
, to find inf 0 q(-) via the BPM method we replace in (3.1)-
our inexact multiplier method requires finding - k+1 and x k+1 s.t.
with
for some p k+1 and fl k+1 . Note that (7.2) implies
since
\Delta; g(x k+1 )
\Gammag(x
and
(7.6), (7.4)-(7.5) hold if we take p
since then
Using (7.3) and (@h+ (Lem. 7.2), we have
so we may take ~ fl choices will be discussed later.
Further insight may be gained as follows. Rewrite (7.3) as
where
Let
cf. Assumption 7.1), L k
Lemma 7.3. Suppose inf C f
e.g., the feasible set C
of (7.1) is nonempty. Then L k is a proper convex function and
If -
(g(-x)). The preceding assertions
hold when inf C f
(cf. Lem. 7.2).
Proof. Using
? and ~
u. Then, since P k is nondecreasing (so is
ri C f ae
ri C g i (cf. Assumption 7.1), Lem. A.1 in the Appendix yields im@P k ae IR m
and
(7.13), using @P (cf. Lem. 7.2). Hence if @L k (x)
so ri C f ae
ri C g i
implies (cf. [Roc70, Thm 23.8]) @ x L(x;
-). Finally, when C then for any ~ x 2 C f we may pick ~
with
u, since C f ae
(Assumption 7.1) and C
The exact multiplier method of [Eck93, Thm 7] takes x
assuming h is smooth, -
? and imrh oe IR m
? . Then (7.2) holds with
Our inexact method only requires that x k+1 ~
in the
sense that (7.2) holds for a given ffl k - 0. Thus we have derived the following
Algorithm 7.4. At iteration k ≥ 1, having λ^k ∈ IR^m_+ and γ^k ∈ ∂h(λ^k), find x^{k+1} and
λ^{k+1} s.t. (7.2) holds, choose γ^{k+1} satisfying (7.7) and set p^{k+1} = −g(x^{k+1}).
To find x^{k+1} as in [Ber82, §5.3], suppose f is strongly convex, i.e., for some ᾱ > 0
Adding subgradient inequalities of g i , using (7.14) yields for all x
7.3). Minimization in (7.16) yields
so (7.2) holds if
Thus, as in the multiplier methods of [Ber82, §5.3], one may use any algorithm for minimizing
L_k that generates a sequence {z^j} such that lim inf_{j→∞} L_k(z^j) = inf L_k, setting
x^{k+1} = z^j for the first j satisfying (7.18). (If ᾱ is unknown, it may be replaced in (7.18)
by any fixed ᾱ > 0; this only scales {ε_k}.) Of course, the strong convexity assumption is not necessary
if one can employ the direct criterion (7.2), i.e., L(z
(cf. (7.10)), where d(-) may be computed with an error that can be absorbed in ffl k .
Some examples are now in order.
Example 7.5. Suppose
are B-functions on IR with C h i
oe
2.4(d)). For each i, let -
so that (cf. [Eck93, Ex. 6]) h
Using (7.9) and "maximal" fl k+1 in (7.7), Alg. 7.4 may be written as
Remark 7.6. To justify (7.19c), note that if we had
would
not penalize constraint violations
An ordinary penalty method
(cf. [Ber82, p. 354]) would use (7.19a,b) with fixed shifts and c_k ↑ ∞. Thus (7.19) is a shifted
penalty method, in which the shifts γ^k should ensure convergence even for sup_k c_k < ∞,
thus avoiding the ill-conditioning of ordinary penalty methods.
Example 7.7. Suppose C @h "
, so that
from IR m
(cf. [Roc70, Thms 23.8 and 25.1]). Then we may use since the
maximal shift due to (7.9). Thus Alg. 7.4 becomes
ae
oe
In the separable case of Ex. 7.5, the formulae specialize to
Example 7.8. Let
a B-function on IR with Cr/ oe IR ? .
Using (7.7) and (7.9) as in Ex.
7.5, we may let fl k+1
m. Thus Alg. 7.4 becomes
Example 7.9. For
becomes
Even if f and all g_i are smooth, for β = 2 the objective of (7.21a) is, in general, only
once continuously differentiable. This is a well-known drawback of quadratic augmented
Lagrangians (cf. [Ber82, TsB93]). However, for β = 3 we obtain a cubic multiplier method
[Kiw96] with a twice continuously differentiable objective.
Example 7.10 ([Eck93, Ex. 7]). For reduces to
i.e., to an inexact exponential multiplier method (cf. [Ber82, §5.1.2], [TsB93]).
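For orientation, the classical exact version of this update (ε_k ≡ 0) takes a familiar closed form; the display below is our paraphrase of the standard method in [Ber82, §5.1.2], not a formula quoted from the text:

```latex
\lambda_i^{k+1} \;=\; \lambda_i^k \, e^{\,c_k g_i(x^{k+1})},
\qquad i = 1,\ldots,m,
```

so the multipliers stay positive automatically; the inexact variant above only requires x^{k+1} to minimize the augmented Lagrangian approximately.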
Example 7.11. For reduces to
i.e., to an inexact shifted logarithm barrier method (which was also derived heuristically
in [Cha94, Ex. 4.2]). This method is related, but not identical, to ones in [CGT92,
GMSW88]; cf. [CGT94].
Example 7.12. If reduces to
corresponds to a shifted Carroll barrier method.
8 Convergence of multiplier methods
In addition to Assumption 7.1, we make the following standing assumptions.
Assumption 8.1. (i) h+ is a B-function s.t. C h+ oe IR m
(e.g., so is h; cf. Lem. 2.4(a)).
is a sequence of positive numbers s.t. s
Remark 8.2. Under Assumption 8.1, q is closed proper convex, -
cl C
cl
q. Hence
for the BPM method applied to the dual problem sup with a B-function h+ we
may invoke the results of xx3-6 (replacing f , X and h by q, IR m and h+ respectively).
Theorem 8.3. If
Proof. This follows from Rem. 8.2 and Thm 5.2, since C h+ "Arg maxd ae Arg
Theorem 8.4. Let Crh oe IR m
and if inf k c k ? 0 then
lim sup
d(-) and lim sup
and every limit point of fx k g solves (7.1). If
Proof. Since C h oe Crh oe IR m
, the assertions about f- k g follow from Thm 8.3. Suppose
7.7), we have (cf. Lem. 7.2)
and
is continuous on IR m
Hence
(cf. (7.2)) means f(x
any x, in the limit
some x 1 and K ae are closed),
so by weak duality, f(x 1
Remark 8.5. Let C denote the optimal solution set for (7.1). If (7.1) is consistent (i.e.,
is nonempty and compact iff f and g i ,
direction of recession [Ber82, §5.3], in which case (8.1) implies that {x^k} is bounded, and
hence has limit points. In particular, if C in Thm 8.4.
Remark 8.6. Theorems 8.3-8.4 subsume [Eck93, Thm 7], which additionally requires
that ffl k j 0, imrh oe IR m
? and each g i is continuous on C f .
Theorem 8.7. Let (7.1) be s.t. \Gammad has a sharp minimum. Let Crh oe IR m
k. Then there exists k s.t.
Proof. Using the proof of Thm 6.1 with
the conclusion follows
from the proof of Thm 8.4.
Remark 8.8. Results on finite convergence of other multiplier methods are restricted to
only once continuously differentiable augmented Lagrangians [Ber82, §5.4], whereas Thm
8.7 covers Ex. 7.9 also with β > 2. Applications include polyhedral programs.
We shall need the following result, similar to ones in [Ber82, §5.3] and [TsB93].
Lemma 8.9. With u k+1 := g(x k+1 ), for each k, we have
Proof. As for (8.2), use (7.12), (7.3), (2.3) and convexity of h + to develop
yields (8.3), and (8.4) holds with
by the
convexity of h + . (8.5) follows from (8.2)-(8.4) and (7.2).
Theorem 8.4 only covers methods with Crh oe IR m
, such as Exs. 7.7 and 7.9. To handle
other examples in x9, we shall use the following abstraction of the ergodic framework of
[TsB93]. For each k, define the aggregate primal solution
Since g is convex and c j g(x j+1
Lemma 8.10. Suppose sup i;k fl k
Then
lim sup
If f-x k g has a limit point x 1 (e.g., C 6= ; is bounded; cf. Rem. 8.11), then x 1 solves
and each limit point of f- k g maximizes d.
Proof. By 1. By (8.6) and convexity of f ,
d 1 from (8.5), so
are closed). Hence by weak duality,
solves (7.1). Since d(- k and d is closed, each cluster of f- k g maximizes d.
Remark 8.11. If C 6= ; is bounded then (8.8) implies that f-x k g is bounded (cf. Rem.
8.5). In particular, if C
9 Classes of penalty functions
Examples 7.10-7.12 stem from B-functions of the form
B-function on IR s.t. Since may also be derived by
choosing suitable penalty functions OE on IR and letting 2.6). We now
define two classes of penalty functions and study their relations with B-functions.
Definition 9.1. We say OE closed proper convex essentially
smooth, -
t, t 0
strictly convexg and \Phi strictly convex on (t 0
\Gamma1g.
Remark 9.2. If OE 2 \Phi then OE is nondecreasing (imrOE ae
is increasing on (t 0
closed proper convex,
. (For the "if" part, note that rOE(t k
and rOE is nondecreasing.)
Lemma 9.3. If OE 2 \Phi then OE is a B-function with
lim
. If OE 2 \Phi s then OE is
essentially smooth, C @OE
Proof. By Def. 9.1 and Lem. 2.6, IR ? ae imrOE ae IR + and OE is a B-function with
ri C OE ae imrOE ae C OE , so
and rOE is nondecreasing, lim
Since OE is closed and proper, OE0
[Roc70, Thm 13.3] with oe C OE
cl C OE
and
cl C OE
and closedness of OE; otherwise lim t"t OE
[Roc70, Thm 8.5]. By [Roc70, Thm 26.1],
C OE . If OE 2 \Phi s then OE
is essentially smooth [Roc70, Thm 26.3], so @OE and CrOE
[Roc70, Thm 26.1]. If OE 2 \Phi 0 then @OE
is increasing on (t 0
is single-valued on IR
and hence @OE
Lemma 9.4. Let / be a B-function on IR s.t. C / oe IR ? . Then
Cr/ oe IR ? . If
then /+ is essentially smooth
there exists a B-function -
Proof.
is a B-function (Lem. 2.4(a)) and
2.6(a)). Also / + is nondecreasing and essentially smooth (Lem. 7.2), so imr/
By strict convexity of / (cf. Def. 2.1(a)), is increasing on IR ? , so r/
is increasing on (t strictly
convex on (t
essentially smooth [Roc70, Thm 26.3]. Otherwise, t
be a strictly convex quadratic function s.t. -
Corollary 9.5. If OE 2 \Phi 0 then the method of Ex. 7.8 with coincides with the
method of Ex. 7.7 with
/ is the smooth extension of / described
in Lem. 9.4, so that Crh oe IR m
and Thms 8.4 and 8.7 apply.
Proof. We have
OE
so
OE and / 0 (t;
Remark 9.6. In terms of OE 2 \Phi 0 , the method of Ex. 7.8 with
OE
where OE 0
In view of Cor. 9.5, we restrict attention to methods generated by OE 2 \Phi s .
Example 9.7. Choosing OE 2 \Phi s and in Ex. 7.8 yields the method
OE
with
and /
(rOE) \Gamma1 by Def. 9.1 and [Roc70, Thms 26.3 and 26.5].) Note that
The following results will ensure that
0, as required in Lem. 8.10.
Definition 9.8. We say OE 2 \Phi is forcing on [t 0
sequences ft 0
Lemma 9.9. If OE 2 \Phi s ,
then OE is forcing on [\Gamma1; t 00
Proof. Replace OE by OE \Gamma inf OE, so that inf positive and increasing
(cf. Rem. 9.2), so is OE. Let [OE
OE . If
\Gamma! 1.
\Gamma! 1.
Therefore,
Lemma 9.10. The following functions are forcing on [\Gamma1; t 00
Proof. Let
forcing. Invoke Lem. 9.9 for OE 1 and OE 3 .
Example 9.11. Let OE 2 \Phi s be s.t.
OE is not forcing on [\Gamma1; \Gamma1], although lim
Lemma 9.12. Consider Ex. 9.7 with OE 2 \Phi s , t
t and t
. Then
1). In general, t is bounded.
Proof. This follows from the facts - k
1 and strict monotonicity of rOE; cf. Rem. 9.2, Lem. 9.3 and Ex. 9.7.
Lemma 9.13. Suppose in Ex. 9.7 OE 2 \Phi s is forcing on (\Gamma1; t fl ] with t
Proof. Since rOE is nondecreasing and h
we deduce from (8.4) that
and [OE 0 (fl k
(cf. Ex. 9.7) yields sup i;k ffl k
so the preceding relation and the forcing
property of OE give - k
Theorem 9.14. Consider Ex. 9.7 with OE 2 \Phi s s.t. inf OE ? \Gamma1. Suppose Arg maxd 6= ;,
holds. If f-x k g has a limit point x 1 (e.g., C 6= ; is bounded; cf.
Rem. 8.11), then x 1 solves (7.1) and f(x 1
Proof. Let We have
, so the assertions about f- k g follow from Thm 8.3. Then t
by Lem. 9.12 (f- k g is bounded), so OE is forcing on [\Gamma1; t fl
9.13. The conclusion follows from Lem. 8.10.
Remark 9.15. For the exponential multiplier method (Ex. 7.10), Thms
8.3 and 9.14 subsume [TsB93, Prop. 3.1] (in which Arg max d ≠ ∅, C ≠ ∅ is bounded,
Theorem 9.16. Consider Ex. 9.7 with OE 2 \Phi s forcing on (\Gamma1; t OE ) 6= IR (e.g.,
holds. If f-x k g has a limit point x 1 (e.g., C 6= ; is bounded; cf. Rem. 8.11), then
and each limit point of f- k g maximizes d.
Proof. By Lem. 9.12, t
so OE is forcing on (\Gamma1; t fl ]. Since d(- k
9.13. Since t fl - t OE ! 1, the conclusion follows from Lem. 8.10.
Remark 9.17. Suppose
2.2.3]). If d duality. If
is bounded iff so is Arg maxd
This observation may be used in Lem. 8.10 and Thm 9.16.
Theorem 9.18. Consider Ex. 9.7 with OE 2 \Phi s s.t. inf OE ? \Gamma1. Suppose
some x 2 C f ,
holds.
If f-x k g has a limit point x 1 (e.g., C 6= ; is bounded; cf. Rem. 8.11), then x 1 solves
and each limit point of f- k g maximizes d. If d
Proof. Since and Arg maxd 6= ; are bounded (Rem.
9.17), we get, as in the proof of Thm 9.14, C
Hence
the first two assertions follow from Lem. 8.10, and the third one from Thm 8.3.
Theorem 9.19. Consider Ex. 9.7 with OE 2 \Phi s forcing on (\Gamma1; t 00
and holds. If f-x k g has a limit point x 1 (e.g., C 6= ; is bounded; cf. Rem. 8.11),
and each limit point of f- k g maximizes d. If
Proof. Use the proof of Thm 9.18, without asserting that C
.
Remark 9.20. It is easy to see that we may replace OE 2 \Phi s by OE 2 \Phi 0 and Ex. 9.7
by Ex. 7.8 with Lems. 9.9, 9.12, 9.13 and Thms 9.14, 9.16, 9.18, 9.19. (In
the proof of Lem. 9.9,
OE , since OE 0 and OE are positive and increasing on
proving Lem. 9.12, recall the proof of Cor. 9.5; in the proof of Lem. 9.13, use
Such results complement Thms 8.4 and 8.7; cf. Cor. 9.5.
10 Additional aspects of multiplier methods
Modified barrier functions can be extrapolated quadratically to facilitate their minimization;
cf. [BTYZ92, BrS93, BrS94, NPS94, PoT94]. We now extend such techniques to our
penalty functions, starting with a technical result.
Lemma 10.1. Let OE 1 ; OE 2 2 \Phi be s.t. for some t s 2 (t 0
is forcing on (\Gamma1; t s ] and OE 2 is forcing on
is forcing on (\Gamma1; t 00
Proof. Suppose
(other cases being
trivial). Since OE 0
1 and OE 0
are nondecreasing, so is OE 0 ; therefore, all terms in
are nonnegative and tend to zero. Thus OE 0
Hence t 0
yield the first assertion. For the second one, use Def. 9.1 and Rem. 9.2.
Examples 10.2. Using the notation of Lem. 10.1, we add the condition OE 00
to make OE twice continuously differentiable. In each example, OE 2 \Phi s [ \Phi 0 is forcing on
Lems. 9.9-9.10 and Rem. 9.20.
12ts
only grows as fast as OE 2 in Ex. 7.9 with but is smoother.
b. This OE does not grow as fast as e t in Ex. 7.10.
3 (log-quadratic).
This OE allows arbitrarily large infeasibilities, in contrast to OE 1 in Ex. 7.11.
Again, this OE has C in contrast to OE 1 in Ex. 7.12.
s
Remark 10.3. Other smooth penalty functions (e.g., cubic-log-quadratic) are easy to
derive. Such functions are covered by the various results of §9. Their properties, e.g.,
their growth rates, may also have practical significance; this should be verified experimentally.
The following result (inspired by [Ber82, Prop. 5.7]) shows that minimizing L k (cf.
in Alg. 7.4 is well posed under mild conditions (see the Appendix for its proof).
Lemma 10.4. Let
is a B-function with C / oe IR ? . Suppose
is nonempty and compact iff f and
have no common direction of recession, and if C 0 6= ; then this is equivalent to
having a nonempty and compact set of solutions.
We now consider a variant of condition (7.18), inspired by one in [Ber82, p. 328].
Lemma 10.5. Under the strong convexity assumption (7.15), consider (7.17) with
replacing (7.18), where
ff
Next, suppose
is bounded.
Proof. By (7.17) and (10.1), (10.2) holds with L(x
follows from (8.5). Similarly, L(x
ff yields L(x
ff
(Lem. 4.8(i)).
Remark 10.6. In view of Lem. 10.5, suppose in the strongly convex case of (7.15), (10.1)
is used with (cf. (10.3)), the results of xx8-9
may invoke, instead of Thm 5.2 with
The latter condition holds automatically if lim k!1 d(- k 1. Thus we
may drop the conditions:
Thms 8.3, 8.4, 9.14, ffl k ! 0 from Lem.
8.10 and Thm 9.16, and
Thms 9.18-9.19. Instead of
we may assume that fc k j k g is bounded in Thms 8.3, 8.4, 9.14 and 9.18-9.19.
Condition (10.1) can be implemented as in [Ber82, Prop. 5.7(b)].
Lemma 10.7. Suppose f is strongly convex, inf C f
is continuous on
C f . Consider iteration k of Ex. 7.5 with
is a B-function
s.t. Cr/ oe IR ? . If is not a Lagrange multiplier of (7.1), fz j g is a sequence
converging to -
satisfying the stopping criterion (10.1).
Proof. By Lemmas 9.3-9.4, Ex. 7.5 has -
u).
Then, as in (8.2),
Suppose (-x). By (10.5), (2.3) and convexity of h
m. Therefore, since OE is strictly convex on [t 0
with
OE (Def. 9.1), and fl k
OE , for each i, either fl k
h-
Combining this with
-) (Lem. 7.3), we see (cf. [Roc70, Thm 28.3]) that - k is a Lagrange
multiplier, a contradiction. Therefore, we must have strict inequality in (10.5). Since
the stopping criterion will be satisfied for sufficiently large j.
A Appendix
Proof of Lemma 7.2. IR m
(cf. [Roc70, Thm 23.8]),
so
and h+ is essentially strictly convex (cf. [Roc70, p. 253]). Hence (cf.
is closed proper essentially smooth, so @h
by [Roc70, Thm 26.1] and rh + is continuous on -
by [Roc70,
Thm 25.5]. By [Roc70, Thm 23.5], @h
nondecreasing,
as the union of open sets. That
and ~
(-) then
and
and ~
Hence OE is inf-compact and
We need the following slightly sharpened version of [GoT89, Thm 1.5.4].
Lemma A.1 (subdifferential chain rule). Let f be proper convex functions on
ri C f i 6= ;. Let
OE be a
proper convex nondecreasing function on IR m s.t.
y for some ~
, and for each -
f
Proof. For any x 1
and hence
is convex. Since /(x) ? \Gamma1 for all x, / is proper. Let
. For any x, f(x) - f(-x)
yields
To prove the opposite inclusion, let - fl 2 @/(-x). Consider the convex program
By the monotonicity of OE and the definition of subdifferential, (-x; -
solves (A.2), which
satisfies Slater's condition (cf. f(~x) ! ~
y), so (cf. [Roc70, Cor. 28.2.1]) it has a Kuhn-Tucker
point -
xi 8x yields -
ri C f i
Thm 23.8]). Thus @/(-x) ae Q, i.e., To see that im@OE ae IR m
, note that if
Proof of Lemma 10.4. Let OE i
m. Each OE i is closed: for any ff 2 IR,
is closed nondecreasing and lim t"t /
is closed (so is g i ). We have
and OE i closed proper and L k 6j 1, so L k is closed and L k
Thm 9.3]. Suppose Lem. 9.4 and Def.
9.1) and g i is closed, there is x 2 ri C
Hence
(cf. Lemmas 9.3-9.4) yield
ff
t. Then
from
. Thus OE
Therefore, otherwise. The
proof may be finished as in [Ber82, §5.3].
References
Numerical methods for nondifferentiable convex optimization
New York
Partial proximal minimization algorithms for convex program- ming
The relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming
Computational experience with modified log-barrier methods for nonlinear programming
Penalty/barrier multiplier methods for minimax and constrained smooth convex programs
Weak sharp minima in mathematical programming
Optimization of Burg's entropy over linear constraints
An iterative row action method for interval convex programming
Proximal minimization algorithm with D-functions
A globally convergent Lagrangian barrier algorithm for optimization with general inequality constraints and simple bounds
Nouvelles méthodes séquentielles et parallèles pour l'optimisation de réseaux à coûts linéaires et convexes
Convergence analysis of a proximal-like minimization algorithm using Bregman functions
Convergence of some algorithms for convex minimization
On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators
Nonlinear proximal point algorithms using Bregman functions
Multiplicative iterative algorithms for convex programming
Finite termination of the proximal point algorithm
Sequential Unconstrained Minimization Techniques
Equilibrium programming using proximal-like algorithms
Shifted barrier methods for linear programming
Theory and Optimization Methods
On the convergence of the proximal point algorithm for convex minimization
On some properties of generalized proximal point methods for quadratic and linear programming
On the convergence rate of entropic proximal optimization algorithms
The convergence of a modified barrier method for convex programming
The proximal algorithm
Régularisation d'inéquations variationnelles par approximations successives
Massively parallel algorithms for singly constrained convex programs
A numerical comparison of barrier and modified barrier methods for large-scale bound-constrained optimization
English transl.
Nonlinear rescaling and proximal-like methods in convex opti- mization
Convex Analysis
Entropic proximal mappings with applications to nonlinear programming
Relaxation methods for problems with strictly convex costs and linear constraints
Dual ascent methods for problems with strictly convex costs and linear constraints: A unified approach
A Cartesian Grid Projection Method for the Incompressible Euler Equations in Complex Geometries

Many problems in fluid dynamics require the representation of complicated internal or external boundaries of the flow. Here we present a method for calculating time-dependent incompressible inviscid flow which combines a projection method with a "Cartesian grid" approach for representing geometry. In this approach, the body is represented as an interface embedded in a regular Cartesian mesh. The advection step is based on a Cartesian grid algorithm for compressible flow, in which the discretization of the body near the flow uses a volume-of-fluid representation. A redistribution procedure is used to eliminate time-step restrictions due to small cells where the boundary intersects the mesh. The projection step uses an approximate projection based on a Cartesian grid method for potential flow. The method incorporates knowledge of the body through volume and area fractions along with certain other integrals over the mixed cells. Convergence results are given for the projection itself and for the time-dependent algorithm in two dimensions. The method is also demonstrated on flow past a half-cylinder with vortex shedding.

1 Introduction
In this paper, we present a numerical method for solving the unsteady incompressible Euler
equations in domains with irregular boundaries. The underlying discretization method is
a projection method [22, 5]. Discretizations of the nonlinear convective terms and lagged
pressure gradient are first used to construct an approximate update to the velocity field;
the divergence constraint is subsequently imposed to define the velocity and pressure at
the new time. The irregular boundary is represented using the Cartesian mesh approach
[58], i.e. by intersecting the boundary with a uniform Cartesian grid, with irregular cells
appearing only adjacent to the boundary. The extension of the basic projection methodology
to the Cartesian grid setting exploits the separation of hyperbolic and elliptic terms of the
method in [5] to allow us to use previous work on discretization of hyperbolic and elliptic
PDE's on Cartesian grids. The treatment of the hyperbolic terms is based on algorithms
developed for gas dynamics, and closely follows the algorithm of Pember et al. [56, 57]. The
Cartesian grid projection uses the techniques developed by Young et al [72] for full potential
transonic flow to discretize the elliptic equation that is used to enforce the incompressibility
constraint. The overall design goals of the method are to be able to use the high-resolution
finite difference methods based on higher-order Godunov methods for the advective terms
in the presence of irregular boundaries that effectively add a potential flow component to
the solution.
Cartesian grid methods were first used by Purvis and Burkhalter [58] for solving the
equations of transonic potential flow; see also [71, 40, 62, 72]. Clarke et al. [23] extended
the methodology to steady compressible flow; see also [30, 29, 28, 52]. Zeeuw and Powell
[74], Coirier and Powell [24], Coirier [25], Melton et al. [51], and Aftosmis et al. [1] have
developed adaptive methods for the steady Euler and Navier-Stokes equations.
For time-dependent hyperbolic problems, the primary difficulty in using the Cartesian
grid approach lies in the treatment of the cells created by the intersection of the irregular
boundary with the uniform mesh. There are no restrictions on how the boundary intersects
the Cartesian grid (unlike the "stair step" approach which defines the body as aligned
with cell edges), and as a result cells with arbitrarily small volumes can be created. A
standard finite-volume approach using conservative differencing requires division by the
volume of each cell; this is unstable unless the time step is reduced proportionally to the
volume. Although in the projection method convective differencing is used for the hyperbolic
terms in the momentum equation, we base our methodology for incompressible flow on the
experience gained for compressible flow in the handling of small cells. (In addition, we will
wish to update other quantities conservatively.) The major issues, then, in designing such
a method are how to maintain accuracy, stability, and conservation in the irregular cells at
the fluid-body interface while using a time step restricted by CFL considerations using the
regular cell size alone. We refer to this as the "small cell problem."
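The scaling behind this restriction can be seen in a one-line model of a cut cell: a conservative donor-cell update divides the flux difference by the cell volume, so the largest stable time step shrinks in proportion to the volume fraction. In the sketch below, `vol_frac`, `h`, and `u` (volume fraction, mesh spacing, advection speed) are our own notation, not the paper's:

```python
def max_stable_dt(u, h, vol_frac):
    """Largest dt for which the donor-cell update of a cut cell,
    q_new = q - (dt / (vol_frac * h)) * u * (q - q_upwind),
    keeps the update coefficient dt * u / (vol_frac * h) <= 1."""
    return vol_frac * h / u

full = max_stable_dt(u=1.0, h=0.01, vol_frac=1.0)   # a regular cell
cut = max_stable_dt(u=1.0, h=0.01, vol_frac=1e-3)   # a tiny cut cell
# the cut cell alone would force a time step 1000x smaller
```

This is exactly the restriction that cell merging, rotated differences, and (here) redistribution are designed to remove.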
Noh [54] did early work in this area in which he used both cell merging techniques
and redistribution. LeVeque [42, 43] and Berger and LeVeque [12] have developed explicit
methods which use the large time step approach developed by LeVeque [41] to overcome
the small cell problem. Berger and LeVeque [13, 14] have also studied approaches in which
the small cell problem is avoided by the use of a rotated difference scheme in the cells cut
by the fluid-body interface. Both the large time step and the rotated difference schemes
are globally second-order and better than first-order but less than second-order at the fluid-
body interface. Cell merging techniques were also used by Chiang et al. [20], Bayyuk et al.
[4], and Quirk [60, 59] to overcome the small cell problem. Results for this method suggest
it is globally second order accurate and first order at the boundary. Two other approaches
for unsteady compressible flow are based on flux-vector splitting (Choi and Grossman [21])
and state-vector splitting (Öksüzoğlu [55], Gooch and Öksüzoğlu [31]), but neither of these
approaches avoids the small cell problem.
The advection step of the method presented here uses a different approach for handling
irregular cells based on ideas previously developed for shock tracking by Chern and Colella
[17] and Bell, Colella, and Welcome [8], and extended by Pember et al. [56, 57]. In this approach
the boundary is viewed as a "tracked front" in a regular Cartesian grid with the fluid
dynamics at the boundary governed by the boundary conditions of a stationary reflecting
wall. The basic integration scheme for hyperbolic problems consists of two steps. In the
first step, a reference state is computed using fluxes generated by a higher-order Godunov
method in which the fluid-body boundary is essentially "ignored". In the second step, a
correction is computed to the state in each irregular cell. A stable, but nonconservative,
portion of this correction is applied to the irregular cell. Conservation is then maintained by
a variation of the algebraic redistribution algorithm in [17] which distributes the remainder
of the correction to those regular and irregular cells that are immediate neighbors of the
irregular cell. This redistribution procedure allows the scheme to use time steps computed
from CFL considerations on the uniform grid alone. We adapt this scheme for the advection
step of the projection method.
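The flavor of the redistribution step can be sketched as follows. This is a simplification of the algebraic redistribution of [17, 56]: the function and variable names are ours, and the actual algorithm forms the defect from reference and corrected states rather than taking it as given. Each cut cell's conservation defect is spread over its neighbors in proportion to their volumes, so total mass is accounted for exactly:

```python
def redistribute(q, vol, defects):
    """q: cell values; vol: fluid volume of each cell; defects: map from a
    cut cell to the mass defect dM its stable update left out.  Each dM is
    shared among the (up to eight) neighboring cells with nonzero volume,
    weighted by volume, so sum(vol[c] * q[c]) increases by exactly dM."""
    for (i, j), dM in defects.items():
        nbrs = [(i + di, j + dj)
                for di in (-1, 0, 1) for dj in (-1, 0, 1)
                if (di, dj) != (0, 0)
                and vol.get((i + di, j + dj), 0.0) > 0.0]
        wtot = sum(vol[n] for n in nbrs)
        for n in nbrs:
            # neighbor n receives mass dM * vol[n] / wtot; dividing that mass
            # by vol[n] turns it into a uniform increment of the cell value
            q[n] += dM / wtot
    return q

# 3x3 patch of unit cells around a cut cell of volume 0.1
vol = {(i, j): 1.0 for i in range(3) for j in range(3)}
vol[(1, 1)] = 0.1
q = {c: 1.0 for c in vol}
mass_before = sum(vol[c] * q[c] for c in vol)
redistribute(q, vol, {(1, 1): 0.05})
mass_after = sum(vol[c] * q[c] for c in vol)
```

Because the increment to each neighbor's value is dM/wtot regardless of its size, large neighbors absorb most of the mass while small cut cells are perturbed only mildly, which is what keeps the correction stable.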
The projection step requires solution of an elliptic equation on an irregular grid. The
finite-element-based projection developed by Almgren et al. [3] for a regular grid is extended
here to accommodate embedded boundaries using the techniques developed by Young et
al. [72]. The approach of Young et al. defines the approximating space using standard
bilinear elements on an enlarged domain that includes nodes of mixed cells lying outside
the domain. Quadratures in the weak form are restricted to the actual domain to define
the discretization. The resulting linear system can be readily solved with a straightforward
extension of multigrid.
A second-order Cartesian grid projection method has been developed recently by Tau
[67] for the incompressible Navier-Stokes equations. In this formulation velocities are defined
on a staggered grid (which is incompatible with the cell-centered form of the Godunov
method), and the tangential velocity must vanish at the boundaries (i.e. the formulation
is not extensible to the Euler equations). Comments and results in [67] indicate that the
accuracy of the method again is in general first-order at the boundary but globally second-
order. No mention is made of the small cell problem for advecting quantities other than
momentum.
In addition to the approach taken here, there are two other basic approaches to the
treatment of irregular domains: locally body-fitted grids obtained from mesh mappings, and unstructured grids. For body-fitted structured or block-structured grids for
compressible flow, the literature is quite extensive. Typical examples include [37, 35, 16,
11, 27, 7, 65, 68, 9, 61, 73]. The primary objection to mapped grids is the difficulty in
generating them for complex geometries, particularly in three dimensions. There have been
a number of approaches to making mapped grids more flexible, such as using combinations
of unstructured and structured grids [69, 70, 49, 64, 53, 36, 48], multiblock-structured
grids [2, 66, 63] and overlapping composite grids [10, 19, 34]. On the topic of unstructured
grids, we refer the reader to [39, 38, 45, 46, 50] for compressible flow. Löhner et al. [44] have
developed an unstructured finite element algorithm for high-Reynolds number and inviscid
incompressible flows.
In comparison with the other approaches discussed above, the advantages of the Cartesian
grid approach include not requiring special grid generation techniques to fit arbitrary
boundaries, the ability to use a standard time-stepping algorithm at all cells with no additional
work other than at cells at or near the boundary, and the ability to incorporate
efficient solvers for the elliptic equation. The most serious disadvantage is a loss of accuracy
at the boundary (numerical results show a reduction to first-order accuracy at the
boundary). The loss of second-order accuracy at the boundary indicates that the Cartesian
grid approach is not ideal for calculations designed to resolve viscous boundary layers, but
should be satisfactory for flows where the primary effect of the boundary is to introduce a
potential flow component to the velocity corresponding to the effect of the no-normal flow
condition at the boundary. Problems where this is the case include flows in large containers,
such as utility burners and boilers, and atmospheric flows over irregular orography.
In the next section we review the basic fractional step algorithm, and introduce the
notation of the Cartesian grid method. The subsequent two sections contain descriptions of
the advection step and the projection step, respectively, for flows with embedded boundaries.
In the final two sections we present numerical results and conclusions. All results and
detailed discussion will be for two spatial dimensions; the extension to three dimensions is
straightforward and will be presented in later work.
2 Basic Algorithm
2.1 Overview of Fractional Step Formulation
The incompressible Euler equations written in convective form are

    U_t + (U \cdot \nabla)U = -\nabla p,                    (2.1)

and

    \nabla \cdot U = 0.                                     (2.2)

Alternatively, (2.1) could be written in conservative form as

    U_t + \nabla \cdot (UU) = -\nabla p.                    (2.3)
The projection method is a fractional step scheme for solving these equations, composed
of an advection step followed by a projection. In the advection step for cells entirely in the
flow domain we solve the discretization of (2.1),
    (U^* - U^n)/\Delta t + [(U \cdot \nabla)U]^{n+1/2} = -\nabla p^{n-1/2},    (2.4)

for the intermediate velocity U^*. For small cells adjoining the body we modify the velocity
update using the conservative formulation of the nonlinear terms (see section 3). The
pressure gradient at t^{n-1/2} was computed in the previous time step and is treated as a source
term in (2.4). The advection terms in (2.4), namely [(U \cdot \nabla)U]^{n+1/2}, are approximated at
time t^{n+1/2} to second-order in space and time using an explicit predictor-corrector scheme;
their construction is described in section 3.
The velocity field U^* is not, in general, divergence-free. The projection step of the
algorithm decomposes the result of the first step into a discrete gradient of a scalar potential
and an approximately divergence-free vector field. They correspond to the update to the
pressure gradient and the update to the velocity, respectively. In particular, if P represents
the projection operator then

    (U^{n+1} - U^n)/\Delta t = P( (U^* - U^n)/\Delta t + Gp^{n-1/2} ),
    Gp^{n+1/2} = (I - P)( (U^* - U^n)/\Delta t + Gp^{n-1/2} ).
Note that the pressure gradient is defined at the same time as the time derivative of velocity,
and therefore at half-time levels.
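The decomposition performed by P can be illustrated on its own, away from the embedded geometry. The sketch below is not this paper's node-based approximate projection; it is a minimal exact MAC projection on a doubly periodic staggered grid (grid size, sweep count, test field, and names such as `wrap` are all our choices): it solves the discrete Poisson equation D G phi = D V by Gauss-Seidel and subtracts G phi, after which the discrete divergence vanishes to solver tolerance.

```python
import math

# Doubly periodic N x N MAC grid: u[i][j] lives on the left edge of cell
# (i, j), v[i][j] on its bottom edge.
N, h = 8, 1.0 / 8
wrap = lambda k: k % N

u = [[math.sin(2 * math.pi * i / N) * math.cos(2 * math.pi * j / N) + 0.3
      for j in range(N)] for i in range(N)]
v = [[math.cos(4 * math.pi * (i + j) / N) for j in range(N)] for i in range(N)]

def div(u, v):
    """Discrete MAC divergence D V in each cell."""
    return [[(u[wrap(i + 1)][j] - u[i][j] + v[i][wrap(j + 1)] - v[i][j]) / h
             for j in range(N)] for i in range(N)]

rhs = div(u, v)
div_before = max(abs(d) for row in rhs for d in row)

# Solve D G phi = D V (the standard 5-point Laplacian) by Gauss-Seidel.
phi = [[0.0] * N for _ in range(N)]
for _ in range(4000):
    for i in range(N):
        for j in range(N):
            phi[i][j] = (phi[wrap(i + 1)][j] + phi[wrap(i - 1)][j]
                         + phi[i][wrap(j + 1)] + phi[i][wrap(j - 1)]
                         - h * h * rhs[i][j]) / 4.0

# Subtract the discrete gradient G phi; the remainder is the projected field.
for i in range(N):
    for j in range(N):
        u[i][j] -= (phi[i][j] - phi[wrap(i - 1)][j]) / h
        v[i][j] -= (phi[i][j] - phi[i][wrap(j - 1)]) / h

div_after = max(abs(d) for row in div(u, v) for d in row)
```

The subtracted gradient plays the role of the pressure-gradient update, and the divergence-free remainder the role of the velocity update, exactly as in the decomposition above.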
2.2 Notation
We first introduce the notation used to describe how the body intersects the computational
domain. The volume fraction \Lambda_{i,j} for each cell is defined as the fraction of the computational
cell B_{i,j} that is inside the flow domain. The area fractions a_{i+1/2,j} and a_{i,j+1/2} specify the
fractions of the (i+1/2, j) and (i, j+1/2) edges, respectively, that lie inside the flow domain.

We label a cell entirely within the fluid (\Lambda_{i,j} = 1) as a full cell or fluid cell, a cell entirely
outside of the flow domain (\Lambda_{i,j} = 0) as a body cell, and a cell partially in the fluid (0 < \Lambda_{i,j} < 1)
as a mixed cell. A small cell is a mixed cell with a small volume fraction. We note that it
is possible for the area fraction of an edge of a fluid cell to be less than one if the fluid cell
abuts a body or mixed cell.
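For illustration, the volume fractions and the full/mixed/body classification can be built by any geometric means; the sketch below simply subsamples a circular body on a coarse grid (the sampling approach, the resolution, and all names are ours, not the paper's):

```python
def vol_frac(i, j, h, in_fluid, ns=20):
    """Estimate the volume (area) fraction of cell (i, j), whose lower-left
    corner is (i*h, j*h), by an ns x ns midpoint subsample of the fluid
    indicator function in_fluid(x, y)."""
    hits = sum(in_fluid(i * h + (a + 0.5) * h / ns,
                        j * h + (b + 0.5) * h / ns)
               for a in range(ns) for b in range(ns))
    return hits / (ns * ns)

def classify(lam):
    if lam == 1.0:
        return "full"    # entirely in the fluid
    if lam == 0.0:
        return "body"    # entirely inside the body
    return "mixed"       # cut by the boundary

# Fluid region: the unit square minus a circular body of radius 0.3.
in_fluid = lambda x, y: (x - 0.5) ** 2 + (y - 0.5) ** 2 > 0.3 ** 2
N, h = 16, 1.0 / 16
cells = {(i, j): classify(vol_frac(i, j, h, in_fluid))
         for i in range(N) for j in range(N)}
```

A production code would compute the fractions (and the other boundary integrals the paper mentions) exactly from the geometry; sampling is used here only to keep the example short.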
In the method presented here, the state of the fluid at time t^n is defined by U^n_{i,j} = (u^n_{i,j}, v^n_{i,j}),
the velocity field in cell B_{i,j} at time t^n, and p^{n-1/2}_{i-1/2,j-1/2}, the pressure at node
(i-1/2, j-1/2) at time t^{n-1/2}. Gp^{n-1/2}_{i,j} is the pressure gradient in cell B_{i,j} at time t^{n-1/2}. For the construction
of the nonlinear advective terms at t^{n+1/2}, velocities are also defined on all edges of full and
mixed cells at t^{n+1/2}. This process requires values of the velocity and pressure gradients in the
cells on either side of an edge at t^n. We must therefore define, at each time step, extended
states in the body cells adjoining mixed or full cells. We do this in a volume-weighted
fashion:

    U^{ext}_{i,j} = ( \sum_{B_{k,l} \in nbhd(B_{i,j})} \Lambda_{k,l} U_{k,l} ) / ( \sum_{B_{k,l} \in nbhd(B_{i,j})} \Lambda_{k,l} ).

The pressure gradient is extended in the same manner. Here nbhd(B_{i,j}), defined as the
neighborhood of B_{i,j}, refers to the eight cells (in two dimensions) that share an edge or
corner with B_{i,j}.
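A sketch of this volume-weighted extension into body cells; the names `extend_states`, `Lam` (cell volume fractions), and `U` are ours, and `U` is shown as a scalar component for brevity:

```python
def extend_states(U, Lam):
    """Volume-weighted extension of a cell field U into body cells: a body
    cell (Lam == 0) bordering fluid receives the Lam-weighted average of U
    over the eight cells sharing an edge or corner with it."""
    ext = dict(U)
    for (i, j), lam in Lam.items():
        if lam > 0.0:
            continue                  # fluid and mixed cells keep their state
        nbrs = [(i + di, j + dj)
                for di in (-1, 0, 1) for dj in (-1, 0, 1)
                if (di, dj) != (0, 0)
                and Lam.get((i + di, j + dj), 0.0) > 0.0]
        w = sum(Lam[n] for n in nbrs)
        if w > 0.0:                   # leave isolated body cells untouched
            ext[(i, j)] = sum(Lam[n] * U[n] for n in nbrs) / w
    return ext

# 3x3 patch with a body cell at the center; U grows linearly in x.
Lam = {(i, j): 1.0 for i in range(3) for j in range(3)}
Lam[(1, 1)] = 0.0
U = {(i, j): float(i) for i in range(3) for j in range(3)}
U_ext = extend_states(U, Lam)
```

On the linear test field the extension reproduces the value at the body-cell center, which is the behavior one wants from a first-order extrapolation.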
3 Advection step
3.1 Momentum
The algorithm is a predictor-corrector method, similar to that used in [5], but with some
modifications as discussed in [6]. The details of the current version without geometry are
given in [3]. For simplicity we will assume that the normal velocity on the embedded
boundary is zero; the treatment of a more general Dirichlet boundary condition such as
inflow is straightforward.
In the predictor we extrapolate the velocity to the cell edges at t^{n+1/2} using a second-order
Taylor series expansion in space and time. The time derivative is replaced using (2.1).
For edge (i+1/2, j) this gives a left state U^{n+1/2,L}_{i+1/2,j}
extrapolated from B_{i,j}, and a right state U^{n+1/2,R}_{i+1/2,j}
extrapolated from B_{i+1,j}.
Analogous formulae are used to predict values at each of the other edges of the cell. In
evaluating these terms the first-order derivatives normal to the edge (in this case U^{n,lim}_x) are
evaluated using a monotonicity-limited fourth-order slope approximation [26]. The limiting
is done on the components of the velocity individually, with modifications as discussed below
in cells near the body.
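The second-order monotonicity-limited slope that the scheme falls back to near the body can be sketched as follows (the fourth-order formula of [26] is a straightforward extension and is omitted here; the function name is hypothetical):

```python
def limited_slope(qm, q0, qp):
    """Second-order monotonicity-limited central slope for one component,
    given the left (qm), center (q0), and right (qp) cell values."""
    dc = 0.5 * (qp - qm)     # central difference
    dl = 2.0 * (q0 - qm)     # doubled one-sided differences
    dr = 2.0 * (qp - q0)
    if dl * dr <= 0.0:       # local extremum: slope limited to zero
        return 0.0
    mag = min(abs(dc), abs(dl), abs(dr))
    return mag if dc >= 0.0 else -mag
```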
The transverse derivative terms (\widehat{vU}_y in this case) are evaluated as in [6], by first extrapolating
from above and below to construct edge states, using normal derivatives only,
and then choosing between these states using the upwinding procedure defined below. In
particular, we define
U^B = U^n_{i,j} + (Δy/2) U^{n,lim}_{y,i,j},   U^T = U^n_{i,j+1} − (Δy/2) U^{n,lim}_{y,i,j+1},
where U^{n,lim}_y are limited slopes in the y direction, with similar formulae for the lower edge
of B_{i,j}. In this upwinding procedure we first define the normal advective velocity on the
edge:
\hat{v}^adv_{i,j+1/2} = (1/2)(v^B + v^T).
(We suppress the
spatial indices on bottom and top states here and in the next
equation.) We now upwind \hat{U} based on \hat{v}^adv:
\hat{U}_{i,j+1/2} = U^B if \hat{v}^adv > 0;  (1/2)(U^B + U^T) if \hat{v}^adv = 0;  U^T if \hat{v}^adv < 0.
After constructing \hat{v}^adv_{i,j−1/2} in a similar manner, we use these upwind values to form the
transverse derivative in (3.1):
(\widehat{vU}_y)_{i,j} = (1/(2Δy)) (\hat{v}^adv_{i,j+1/2} + \hat{v}^adv_{i,j−1/2}) (\hat{U}_{i,j+1/2} − \hat{U}_{i,j−1/2}).
We use a similar upwinding procedure to choose the appropriate states U_{i+1/2,j}, given the left
and right states U^{n+1/2,L}_{i+1/2,j} and U^{n+1/2,R}_{i+1/2,j}.
We follow a similar procedure to construct U_{i−1/2,j}.
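The upwind choice between two extrapolated edge states can be sketched as (hypothetical names):

```python
def upwind(u_left, u_right, v_adv):
    """Select the edge state from the two extrapolated states based on the
    sign of the advective velocity at the edge; average when it vanishes."""
    if v_adv > 0.0:
        return u_left
    if v_adv < 0.0:
        return u_right
    return 0.5 * (u_left + u_right)
```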
In general the normal velocities at the edges are not divergence-free. In order to make
these velocities divergence-free we apply a MAC projection [6] before the construction of
the convective derivatives. The equation D^MAC G^MAC φ = D^MAC U^{n+1/2} is
solved for φ in all full or mixed cells, with
(D^MAC U^{n+1/2})_{i,j} = (a_{i+1/2,j} u^{n+1/2}_{i+1/2,j} − a_{i−1/2,j} u^{n+1/2}_{i−1/2,j}) / Δx
+ (a_{i,j+1/2} v^{n+1/2}_{i,j+1/2} − a_{i,j−1/2} v^{n+1/2}_{i,j−1/2}) / Δy.
Note that in regions of full cells the D^MAC G^MAC stencil is simply a standard five-point
cell-centered stencil for the Laplacian. For edges within the body that are needed for the
corrector step below, the MAC gradient is linearly extrapolated from edges that lie partially
or fully in the fluid. For example, if edge (i+1/2, j) lies completely within the body, but
edges (i−1/2, j) and (i−3/2, j) lie at least partially within the fluid, then we define
(G^MAC φ)_{i+1/2,j} = 2 (G^MAC φ)_{i−1/2,j} − (G^MAC φ)_{i−3/2,j}.
While it is possible this could lead to instabilities, none have yet been observed in numerical
testing. We solve this linear system using standard multigrid methods (see [15]), specifically,
V-cycles with Jacobi relaxation.
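The area-fraction-weighted MAC divergence in a single cell can be sketched as (argument names are hypothetical; subscripts e/w/n/s denote the east, west, north, and south edges):

```python
def mac_divergence(u_e, u_w, v_n, v_s, a_e, a_w, a_n, a_s, dx, dy):
    """Discrete MAC divergence in one cell, with each edge velocity
    weighted by the fraction of that edge lying inside the fluid."""
    return (a_e * u_e - a_w * u_w) / dx + (a_n * v_n - a_s * v_s) / dy
```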
In the corrector step, we form an approximation to the convective derivatives in (2.1):
[(U·∇)U]^{n+1/2}_{i,j} = (1/(2Δx)) (u^MAC_{i+1/2,j} + u^MAC_{i−1/2,j}) (U_{i+1/2,j} − U_{i−1/2,j})
+ (1/(2Δy)) (v^MAC_{i,j+1/2} + v^MAC_{i,j−1/2}) (U_{i,j+1/2} − U_{i,j−1/2}),
where u^MAC and v^MAC are the MAC-projected edge velocities.
The intermediate velocity U* at time n+1 is then defined on all full and mixed cells.
The upwind method is an explicit difference scheme and, as such, requires a time-step
restriction. When all cells are full cells, a linear, constant-coefficient analysis shows that for
stability we require
Δt max( max|u| / Δx, max|v| / Δy ) ≤ σ,
where σ is the CFL number. The time-step restriction of the upwind method is used to set
the time step for the overall algorithm.
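A sketch of the resulting time-step computation (the diverging-channel example in Section 5 uses a CFL number of 0.9; names are hypothetical):

```python
def cfl_timestep(u, v, dx, dy, sigma=0.9):
    """Advective CFL time step from the full-cell constraint
    dt * max(|u|/dx, |v|/dy) <= sigma, with sigma <= 1 the CFL number."""
    umax = max(abs(x) for x in u)
    vmax = max(abs(x) for x in v)
    return sigma * min(dx / umax, dy / vmax)
```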
As mentioned earlier, for incompressible flow the two forms of the momentum equation,
and (2.3), are analytically equivalent. For flows without geometry we have successfully
used the convective difference form of the equations, (2.4). By contrast, we could use the
conservative form of the equations,
(U^{n+1} − U^n)/Δt + ∇ · F^{n+1/2} = 0,
with flux F = UU evaluated at t^{n+1/2}, and evaluate the conservative update as an integral
of the flux over the portion of the cell boundary lying in the fluid, divided by the fluid
volume of the cell.
In the presence of embedded boundaries, the convective update form of the equations is stable
even for very small cells, but this update is calculated as if the cell were a full cell, i.e. it
doesn't ``see'' the body other than through the MAC-projected normal advection velocities
(and the modification of the limited slopes as discussed at the end of this section). To
compute the conservative update, however, one ignores entirely the portion of the cell not
in the fluid, and integrates the flux only along the parts of the edges of the cell that lie
within the fluid. Thus the presence of the body is correctly accounted for, but using the
conservative form requires that the time step approach zero as the cell volume goes to zero,
which is the small-cell time step restriction which we seek to avoid.
A solution in this case is to use a weighted average of the convective and conservative
updates, effectively allowing as much momentum to pass into a small cell in a time step
as will keep the scheme stable. The momentum that does not pass into the small cell is
redistributed to the neighboring cells, to maintain conservation in the advection step. This
approach is modeled on the algorithm of [57, 56], and is based on the algebraic redistribution
scheme of Chern and Colella [18]. The algorithm is as follows:
(1) First, in all cells construct the reference state, \tilde{U}, defined as
\tilde{U}_{i,j} = U^n_{i,j} − Δt [(U·∇)U]^{n+1/2}_{i,j},
using the advective algorithm described for full cells.
(2) On mixed cells only construct an alternative solution, \tilde{\tilde{U}},
using a conservative update:
Λ_{i,j} (\tilde{\tilde{U}}_{i,j} − U^n_{i,j}) / Δt =
− (a_{i+1/2,j} F^x_{i+1/2,j} − a_{i−1/2,j} F^x_{i−1/2,j}) / Δx
− (a_{i,j+1/2} F^y_{i,j+1/2} − a_{i,j−1/2} F^y_{i,j−1/2}) / Δy,
where the fluxes are defined as F^x = u^MAC U and F^y = v^MAC U on the corresponding edges.
This solution enforces no-flow across the boundary of the body, but is not necessarily stable.
Define the difference
δM_{i,j} = Λ_{i,j} ( \tilde{\tilde{U}}_{i,j} − \tilde{U}_{i,j} ).
(3) The conservative solution can be written \tilde{\tilde{U}}_{i,j} = \tilde{U}_{i,j} + δM_{i,j} / Λ_{i,j}; however, this solution is
not stable for Λ_{i,j} ≪ 1. Define instead \hat{U}_{i,j} = \tilde{U}_{i,j} + δM_{i,j} on mixed cells, and \hat{U}_{i,j} = \tilde{U}_{i,j} on full
cells. This allows the mixed cell state to keep the fraction Λ_{i,j} of the full conservative
correction δM_{i,j} / Λ_{i,j}, which will keep the scheme
stable given that the time step is set by the full-cell CFL constraint.
(4) Now redistribute the remaining fraction of δM from each mixed cell, the amount
(1 − Λ_{i,j}) δM_{i,j}, to the fluid and mixed cells among the eight neighbors of B_{i,j} in a volume-weighted
fashion. Since we redistribute the extensive rather than intensive quantity (e.g.
momentum rather than momentum density), the resulting redistribution has the form
\hat{\hat{U}}_{k,l} = \hat{U}_{k,l} + w_{k,l} (1 − Λ_{i,j}) δM_{i,j} / Λ_{k,l} for B_{k,l} ∈ nbhd(B_{i,j}),
where w_{k,l} = Λ_{k,l} / Σ_{B_{m,n} ∈ nbhd(B_{i,j})} Λ_{m,n}.
(5) Subtract the pressure gradient term from the solution for all full and mixed cells, treating
it as a source term:
U*_{i,j} = \hat{\hat{U}}_{i,j} − Δt Gp^{n−1/2}_{i,j}.
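Steps (1) to (3) of the algorithm, forming the defect δM between the conservative and convective updates and keeping only the stable part in the mixed cell, can be sketched with scalars (a 1-D sketch with hypothetical names; the paper works in 2-D):

```python
def hybrid_update(u_tilde, u_cons, vol_frac):
    """Stable mixed-cell update: the conservative correction delta_m/Lambda
    is replaced by delta_m (division by one, not by the small volume
    fraction); the leftover (1 - Lambda)*delta_m is redistributed."""
    delta_m = vol_frac * (u_cons - u_tilde)   # extensive momentum defect
    u_hat = u_tilde + delta_m                 # fraction kept in the cell
    leftover = (1.0 - vol_frac) * delta_m     # to the eight neighbors
    return u_hat, leftover
```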
A second modification to the algorithm for cells at or near the boundary is to the slope
calculations; this is based on the principle that no state information from cells entirely
within the body should be used in calculating slopes. Despite the use of extended states in
the creation of edge states, the slope calculation uses state information only from full and
mixed cells. In a mixed or full cell for which the fourth-order stencil would require using
an extended state, the slope calculation reduces to the second-order monotonicity-limited
formula. If the second-order formula would require an extended state, then the slope is set
to zero. This has the effect of adding dissipation to the scheme; if the slopes throughout
the domain were all set to zero the method would reduce to first-order.
Two comments about the algorithm are in order here. First, we note that while the use
of extended states seems to be a low-order approach to representing the body, numerical
results show that it is adequate to maintain first-order accuracy at the boundary. An
alternative to extended states is the so-called "thin wall approximation" of [57]; with this
approximation extended states are not defined, rather upwinding always chooses the values
coming from the fluid side (rather than the body side) of the fluid-body interface. In our
numerical testing, however, we have found that the extended states version of the algorithm
performs slightly better.
Second, we are unable to provide a proof that the redistribution algorithm is stable,
but in extensive numerical testing of the redistribution algorithm for compressible flow (see
[57, 32, 47]) no stability problems have been observed, and none have been observed in our
incompressible flow calculations. It is clear that the algorithm removes the original small-cell
stability problem by replacing division by arbitrarily small volume fractions with division
by one in step (3). The redistribution algorithm is such that the amount redistributed into
a cell is proportional to the volume of that cell; this feature combined with the observation
in our calculations that the values of ffiM are small relative to the values of U in the mixed
cells gives a heuristic argument as to why redistribution would be stable.
3.2 Passive Scalars
The algorithm for advecting momentum extends naturally to linear scalar advection. In
the third numerical example shown in this paper, a passive scalar enters the domain at
the inflow boundary and is advected around the body. We now briefly describe the scalar
advection routine, assuming the scalar s is a conserved quantity, i.e. s_t + ∇·(sU) = 0.
First, at each time step extended states are defined for the passive scalar just as for
velocity. In the predictor we extrapolate the scalar to the cell edges at t^{n+1/2} using a second-order
Taylor series expansion in space and time. For edge (i+1/2, j) this gives a left state s^{n+1/2,L}_{i+1/2,j}
extrapolated from B_{i,j}, and a right state s^{n+1/2,R}_{i+1/2,j}
extrapolated from B_{i+1,j}.
Analogous formulae are used to predict values at each of the other edges of the cell.
As with velocity, derivatives normal to the edge (in this case s^{n,lim}_x) are evaluated using
the monotonicity-limited fourth-order slope approximation [26], with modifications due to
geometry.
The transverse derivative terms (\widehat{vs}_y in this case) are defined using edge states \hat{s}
chosen by the same upwinding procedure as for velocity.
We use a similar procedure to choose s_{i+1/2,j} given s^{n+1/2,L}_{i+1/2,j} and s^{n+1/2,R}_{i+1/2,j}:
s_{i+1/2,j} = s^{n+1/2,L}_{i+1/2,j} if u^MAC_{i+1/2,j} > 0,
s_{i+1/2,j} = (1/2)(s^{n+1/2,L}_{i+1/2,j} + s^{n+1/2,R}_{i+1/2,j}) if u^MAC_{i+1/2,j} = 0,
s_{i+1/2,j} = s^{n+1/2,R}_{i+1/2,j} if u^MAC_{i+1/2,j} < 0.
In the corrector step for full cells, we update s conservatively:
s^{n+1}_{i,j} = s^n_{i,j} − (Δt/Δx) (u^MAC_{i+1/2,j} s_{i+1/2,j} − u^MAC_{i−1/2,j} s_{i−1/2,j})
− (Δt/Δy) (v^MAC_{i,j+1/2} s_{i,j+1/2} − v^MAC_{i,j−1/2} s_{i,j−1/2}).
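The full-cell conservative corrector can be sketched per cell as (hypothetical names; subscripts e/w/n/s denote the four edges):

```python
def scalar_update_full(s, dt, dx, dy, u_e, u_w, v_n, v_s, s_e, s_w, s_n, s_s):
    """Flux-difference update of a conserved passive scalar in a full cell,
    using MAC edge velocities and upwinded edge scalar values."""
    return (s
            - dt / dx * (u_e * s_e - u_w * s_w)
            - dt / dy * (v_n * s_n - v_s * s_s))
```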
In the corrector step for mixed cells, we follow the three steps below:
(1) Construct the reference state \tilde{s} using the convective formulation:
\tilde{s}_{i,j} = s^n_{i,j} − (Δt/(2Δx)) (u^MAC_{i+1/2,j} + u^MAC_{i−1/2,j}) (s_{i+1/2,j} − s_{i−1/2,j})
− (Δt/(2Δy)) (v^MAC_{i,j+1/2} + v^MAC_{i,j−1/2}) (s_{i,j+1/2} − s_{i,j−1/2}).
(2) Construct \tilde{\tilde{s}} using the conservative formulation:
Λ_{i,j} (\tilde{\tilde{s}}_{i,j} − s^n_{i,j}) / Δt =
− (a_{i+1/2,j} u^MAC_{i+1/2,j} s_{i+1/2,j} − a_{i−1/2,j} u^MAC_{i−1/2,j} s_{i−1/2,j}) / Δx
− (a_{i,j+1/2} v^MAC_{i,j+1/2} s_{i,j+1/2} − a_{i,j−1/2} v^MAC_{i,j−1/2} s_{i,j−1/2}) / Δy.
(3) Define the difference δs_{i,j} = Λ_{i,j} (\tilde{\tilde{s}}_{i,j} − \tilde{s}_{i,j}), and construct \hat{s}_{i,j} = \tilde{s}_{i,j} + δs_{i,j}.
(4) Finally, redistribute the remaining fraction of δs from each mixed cell, the amount
(1 − Λ_{i,j}) δs_{i,j}, to the full and mixed cells among the eight neighbors of B_{i,j} in a volume-weighted
fashion, exactly as for the momentum.
In full cells not adjacent to any mixed cells, set s^{n+1}_{i,j} to the conservative update given above.
4 Projection Step
Since U* is defined on cell centers, the projection used to enforce incompressibility at time
t^{n+1} must include a divergence operator that acts on cell-centered quantities, unlike the
MAC projection. The projection we use here is approximate, i.e. P² ≠ P. The operator, as
well as the motivation for using an approximate rather than exact projection, is described
in detail in [3]. Here we outline the algorithm for the case of no-flow physical boundaries
with no geometry, and for the case of embedded boundaries. The basic approach for the
embedded boundaries utilizes the same discretization as was used by Young et al [72] for
full potential transonic flow. The necessary modifications for inflow-outflow conditions are
described in [3] and their use is demonstrated in the numerical examples.
This projection is based on a finite element formulation. In particular, we consider the
scalar pressure field to be a C^0 function that is bilinear over each cell; i.e., the pressure is
in S^h, the tensor product of spaces Q^1_0 in each direction, where Q^t_s(x) is the space of
polynomials of degree t in the x direction on each cell with C^s
continuity at x-edges. For the velocity space we define V^h so that u is a discontinuous linear function
in x and a discontinuous linear function of y in each cell, with a similar form for v. For
mixed cells, we think of the representations as only being defined on the portion of each cell
within the flow
domain, Ω, although the spaces implicitly define an extension of the solution
over the entire cell. Note that in the integral used to define the projection (see (4.3) below)
the domain of integration is limited to the actual flow domain; we do not integrate over the
portion of mixed cells outside the domain.
For use in the predictor and corrector, the velocity and pressure gradient are considered
to be average values over each cell. The vector space, V h , contains additional functions that
represent the linear variation within each cell. These additional degrees of freedom make
V^h large enough to contain ∇φ for φ ∈ S^h. We establish a correspondence between these
two representations by introducing an orthogonal decomposition of V h . In particular, for
each V ∈ V^h we define a piecewise constant component \bar{V} and the variation V^⊥ = V − \bar{V},
so that for each cell B_{i,j},
∫_{B_{i,j}} V^⊥ dx = 0. By construction these two components are
orthogonal in L^2 so they can be used to define a decomposition of V^h into two subspaces
that represent the cell averages and the orthogonal linear variation, respectively.
The decomposition of V^h induces a decomposition of ∇φ for all φ ∈ S^h, namely
∇φ = \overline{∇φ} + (∇φ)^⊥.
We now define a weak form of the projection on V h , based on a weak divergence on V h .
In particular, we define a vector field V^d in V^h to be divergence-free in the domain Ω if
∫_Ω V^d · ∇ψ dx = 0 for all ψ ∈ S^h. (4.2)
Using the definition (4.2) we can then project any vector field V into a gradient ∇φ and a
weakly divergence-free field V^d (with vanishing normal velocities on boundaries) by solving
∫_Ω ∇φ · ∇ψ dx = ∫_Ω V · ∇ψ dx for all ψ ∈ S^h, (4.3)
for φ ∈ S^h, and setting V^d = V − ∇φ. Here we define the ψ's to be
the standard basis functions for S^h, namely ψ_{i+1/2,j+1/2}(x) is the piecewise bilinear function
having node values ψ_{i+1/2,j+1/2}(x_{k+1/2}, y_{l+1/2}) = δ_{ik} δ_{jl}.
For the purposes of the fractional step scheme we wish to decompose the vector field
V ≡ (U* − U^n)/Δt + Gp^{n−1/2}
into its approximately divergence-free part
(U^{n+1} − U^n)/Δt
and the update to the pressure, Gp^{n+1/2}.
Since the finite-difference advection scheme is designed to handle cell-based quantities
that are considered to be average values over each cell, the quantity (Gφ)^⊥ is discarded at
the end of the projection step. This makes the projection approximate: the discrete divergence
of the retained field need not vanish exactly. In practice, we solve the system defined by (4.3) (again
using standard multigrid techniques with V-cycles and Jacobi relaxation), and set
(1/(Δx Δy)) ∫_{B_{i,j}} (V − ∇φ) dx
as the approximation to (U^{n+1} − U^n)/Δt in (2.5). (See [3] for more details.)
The left-hand side of equation (4.3) is, in discrete form, a nine-point stencil approximating
the Laplacian of φ, and the right-hand side, for cell-averaged V, is a standard four-point
divergence stencil. Without embedded boundaries, the method reduces to constant-coefficient
difference stencils for divergence, gradient, and the Laplacian operator. These stencils
are, for the divergence,
(DV)_{i+1/2,j+1/2} = (u_{i+1,j} + u_{i+1,j+1} − u_{i,j} − u_{i,j+1}) / (2Δx)
+ (v_{i,j+1} + v_{i+1,j+1} − v_{i,j} − v_{i+1,j}) / (2Δy),
and, letting G denote the corresponding gradient, (DGφ) is the standard nine-point
Laplacian stencil arising from bilinear elements.
Body cells contribute nothing to the integrals in (4.3); in mixed cells the integrals are
computed only over the portion of each cell that lies in the fluid. This calculation can
be optimized by pre-computing the following integrals in each mixed cell; these integrands
result from the products of the gradients of the basis functions in (4.3). We define
I^a_{i,j} = ∫_{Ω ∩ B_{i,j}} 1 dx,   I^b_{i,j} = ∫_{Ω ∩ B_{i,j}} x̃ dx,   I^c_{i,j} = ∫_{Ω ∩ B_{i,j}} x̃ ỹ dx,
where x̃ and ỹ are functions defined on each cell such that x̃ = 0 at the center of
B_{i,j}, x̃ = ±1/2 at the left and right edges of B_{i,j}, and ỹ = ±1/2 at the top and bottom
edges of B_{i,j} (with the analogous integrals defined for ỹ).
Linear combinations of these integrals form the coefficients used in the divergence, gradient
and Laplacian operators. For example, the divergence of a barred vector becomes
(D\bar{V})_{i+1/2,j+1/2} = (I^a_{i+1,j} \bar{u}_{i+1,j} + I^a_{i+1,j+1} \bar{u}_{i+1,j+1} − I^a_{i,j} \bar{u}_{i,j} − I^a_{i,j+1} \bar{u}_{i,j+1}) / (2Δx)
+ (I^a_{i,j+1} \bar{v}_{i,j+1} + I^a_{i+1,j+1} \bar{v}_{i+1,j+1} − I^a_{i,j} \bar{v}_{i,j} − I^a_{i+1,j} \bar{v}_{i+1,j}) / (2Δy).
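The moment integrals over the fluid portion of a cell can be approximated by midpoint quadrature against an indicator function of the fluid region (a sketch with hypothetical names; in practice they would be precomputed from the intersection geometry):

```python
def cell_moments(inside, n=200):
    """Approximate I^a = int 1, I^b = int x, I^c = int x*y over the fluid
    part of a unit cell with local coordinates x, y in [-1/2, 1/2].
    `inside(x, y)` returns True for points in the fluid."""
    h = 1.0 / n
    Ia = Ib = Ic = 0.0
    for i in range(n):
        x = -0.5 + (i + 0.5) * h
        for j in range(n):
            y = -0.5 + (j + 0.5) * h
            if inside(x, y):
                w = h * h
                Ia += w
                Ib += w * x
                Ic += w * x * y
    return Ia, Ib, Ic
```

For a full cell this recovers I^a equal to the cell volume and I^b = I^c = 0.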
If we let (Gφ)^full be (Gφ) as it would be defined if the cell were a full cell, then in a
mixed cell the average of the gradient over the fluid portion of the cell is (Gφ)^full plus
correction terms weighted by the moment ratios I^b_{i,j} / I^a_{i,j}.
The full Laplacian operator requires the third integral I^c because of the quadratic terms
which result from the inner products of ∇ψ_{i+1/2,j+1/2} and ∇ψ_{k+1/2,l+1/2}; we leave the derivation
to the reader. Note that for a full cell I^b = I^c = 0 (and I^a is the full cell volume), so the
stencils above for mixed cells reduce to those for full cells.
Note that solving (4.3) defines φ_{i+1/2,j+1/2} at each node (i+1/2, j+1/2) for which the support
of ψ_{i+1/2,j+1/2} intersects the fluid region. Thus, the pressure is defined even at nodes
contained in the body region, as long as they are within one cell of the fluid region.
5 Numerical Results
In this section we present results of calculations done using the Cartesian grid representation
of bodies and/or boundaries of the domain. The first two sets of results are convergence
studies; the first tests the accuracy of the projection alone, the second tests the accuracy of
the full method for the Euler equations. In both cases, the results are presented in the form
of tables which show the norms of the errors, as well as the calculated rates of convergence.
The error for a calculation on a grid of spacing h is defined as the difference between the
solution on that grid and the averaged solution from the same calculation on a grid of
spacing h=2: In the first convergence study, the column 128-256 refers to the errors of the
solution on the 128 2 grid as calculated by comparing the solution on the 128 2 grid to the
solution on the 256 2 grid; similarly for 256-512. Once the errors are computed pointwise for
each calculation, the L^1, L^2, and L^∞ norms of the errors are calculated. The convergence
rates of the method can be estimated by taking the log_2 of the ratio of these norms. This
provides a heuristic estimate of the convergence rate assuming that the method is operating
in its asymptotic range. We present two separate measures of the error. In the first we
compute the norms over the entire domain ("All cells"). In the second we examine the
error in an interior subdomain in order to measure errors away from the boundary. For
the subdomain we have selected the region covered by full cells in the 128 2 grid ("Full 128
cells").
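The rate estimate described here is the base-2 logarithm of the ratio of successive error norms:

```python
import math

def convergence_rate(err_h, err_h2):
    """Estimated order of accuracy from error norms on grids of spacing
    h and h/2 (asymptotic-range assumption, as noted in the text)."""
    return math.log2(err_h / err_h2)
```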
The first convergence study tests the accuracy of the projection alone. The domain is
the unit square, and three bodies are placed in the interior. A circle is centered at (0.75, 0.75)
and has radius 0.1; ellipses are centered at (0.25, 0.625) and (0.5, 0.25) and have axes (0.15, 0.1) and
respectively. The boundary conditions are inflow at
and no-flow boundaries at
In this study, the initial data inside the numerical domain is uniform flow; this data is
then projected to define a velocity field which is approximately divergence-free in the region
of the domain not covered by the bodies. The pressure gradient which is used to correct
the initial data represents the deviation of the potential flow solution from uniform flow.
In Tables 1 and 2 we present the results of this convergence study. The almost-second-
order convergence in the L^1 norm, first-order convergence in the L^∞ norm, and approximately
h^{1.5} convergence in the L^2 norm, are what we would expect of a solution which is
second-order in most of the domain and first-order at boundaries. This is consistent with
the observation that the maximum absolute error on the mixed cells is an order of magnitude
higher than the maximum error on the full cells for the finer grids. Figure 1 shows a
contour plot of the magnitude of the error of the 256 2 calculation.
The second convergence study is of flow through a diverging channel. In this case we
evaluate the velocity field at time in order to demonstrate the order of the complete
algorithm for flow that is smooth in and near the mixed cells. The problem domain is 4 x
1, and the fluid is restricted to flow between the curves y_top and y_bot, defined piecewise on
0 ≤ x ≤ 3 and 3 ≤ x ≤ 4. The calculation is run at CFL number 0.9, and the flow is
initialized with the potential flow field corresponding to inflow velocity 1.0 at the left
edge, as computed by the initial projection. Tables 3 and 4 show the errors and convergence
rates for the velocity components; here 128-256 refers to the errors of the solution on the
128x32 grid as calculated by comparing the solution on the 128x32 grid to the solution
on the 256x64 grid; similarly for 256-512. As before, we compute the errors both on the
full domain and on a subdomain defined as the region covered by full cells on the 128x32
grid. (In this case, the norms are scaled, as appropriate, by the area of the domain.) Again
we see rates corresponding to global second-order accuracy but first-order accuracy near
the boundaries. Figures 2a-b are contour plots of the log_10 of the error in the velocity
components. The error is clearly concentrated along the fluid-body boundaries; for the sake
of the figures we have defined all errors less than 1% of the maximum error in each (including
the values in the body cells) to be equal to 1% of the maximum error. There are ten contour
intervals in each figure spanning these two orders of magnitude. In time the error advects
further along but does not contaminate the interior flow. We note
that for this particular flow the velocity field near the boundary is essentially parallel to the
boundary, and in a more general case we might expect to see more contamination of the
interior flow. Note, however, that the maximum error is less than :3% of the magnitude of
the velocity.
The third example we present is that of flow past a half-cylinder. There is an extensive
experimental and computational literature on the subject of flow past a cylinder in an
infinite domain at low to moderate Reynolds number; see, for example [33] and [61] for recent
experimental and computational results, respectively. Since the methodology presented here
is for inviscid flow, we compute flow past a half-cylinder rather than a full cylinder so as to
force the separation point to occur at the trailing edge. However, we present this calculation
to demonstrate the type of application for which the Cartesian grid methodology would be
most useful: a calculation in which one might be interested most in the flow features away
from the body (i.e. the shed vortices downstream from the cylinder), but which requires
the presence of the body in order to generate or modify those features.
The resolution of our calculation is 256 x 64, the domain is 4 x 1, and the diameter of the
half-cylinder is .25. The inflow velocity at the left edge is uniform; the boundary conditions are
outflow at the right edge, no-flow boundaries at the top and bottom. The initial conditions
are the potential flow with uniform inflow as calculated from the initial projection, combined
with a small vortical perturbation to break the symmetry of the problem. Shown here are
"snapshots" of the vorticity (Figure 3a) and a passively advected scalar (Figure 3b) at late
enough time that the flow is periodic, and that the perturbation has been advected through
the domain. The scalar was advected in from the center of the inflow edge. The Strouhal
number is calculated to be approximately D=(U1T
is the observed period of vortex shedding, is the cylinder diameter, and
is the free-stream (i.e. inflow) velocity. One would expect a value between .2 and .4 (see
[33] and the references cited there), so this seems a reasonable value given the limitations
of the comparison.
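The quoted number is the plain ratio St = D/(U∞ T) (a sketch; names are hypothetical):

```python
def strouhal(diameter, period, u_inf):
    """Strouhal number from the cylinder diameter, the observed vortex
    shedding period, and the free-stream (inflow) velocity."""
    return diameter / (u_inf * period)
```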
All cells Full 128 cells
Table 1: Errors and convergence rates for the x-component of the pressure gradient.
All cells Full 128 cells
Table 2: Errors and convergence rates for the y-component of the pressure gradient.
All cells Full 128 cells
L1 2.36e-3 .80 1.35e-3 1.28e-3 1.08 6.04e-4
Table 3: Errors and convergence rates for the x-component of the velocity.
All cells Full 128 cells
Table 4: Errors and convergence rates for the y-component of the velocity.
6 Conclusion
We have presented a method for calculation of time-dependent incompressible inviscid flow
in a domain with embedded boundaries. This approach combines the basic projection
method, using an approximate projection, with the Cartesian grid representation of ge-
ometry. In this approach, the body is represented as an interface embedded in a regular
Cartesian mesh. The adaptation of the higher-order upwind method to include geometry
is modeled on the Cartesian grid method for compressible flow. The discretization of the
body near the flow uses a volume-of-fluid representation with a redistribution procedure.
The approximate projection incorporates knowledge of the body through volume and area
fractions, and certain integrals over the mixed cells. Convergence results indicate that the
method is first-order at and near the body, and globally second-order away from the body.
The method is demonstrated on flow past a half-cylinder with vortex shedding, and is shown
to give a reasonable Strouhal number.
The method here is presented in two dimensions; the extension to r–z and three
dimensions and to variable density flows, and the inclusion of this representation with
an adaptive mesh refinement algorithm for incompressible flow are being developed. The
techniques developed here are also being modified for use in more general low Mach number
models. In particular, we are using this methodology to model low Mach number combustion
in realistic industrial burner geometries and to represent terrain in an anelastic model of
the atmosphere.
Acknowledgements
We would like to thank Rick Pember for the many conversations on the subtleties of the
Cartesian grid method.
--R
Adaptation and surface modeling for cartesian mesh methods
Techniques in multiblock domain decomposition and surface grid generation
A numerical method for the incompressible Navier-Stokes equations based on an approximate projection
A simulation technique for 2-d unsteady inviscid flows around arbitrarily moving and deforming bodies of arbitrary geometry
A second-order projection method for the incompressible Navier-Stokes equations
An efficient second-order projection method for viscous incompressible flow
Adaptive mesh refinement on moving quadrilateral grids
Conservative front-tracking for inviscid compressible flow
A projection method for viscous incompressible flow on quadrilateral grids
A flexible grid embedding technique with application to the Euler equations
Automatic adaptive refinement for the Euler equations
An adaptive Cartesian mesh algorithm for the Euler equations in arbit rary geometries
Progress in finite-volume calculations for wing-fuselage combinations
A conservative front tracking method for hyperbolic conservation laws
A conservative front tracking method for hyperbolic conservation laws
Composite overlapping meshes for the solution of partial differential equations
Simulation of unsteady inviscid flow on an adaptively refined Cartesian grid
Numerical solution of the Navier-Stokes equations
Euler calculations for multielement airfoils using Cartesian grids
An accuracy assessment of Cartesian-mesh approaches for the Euler equations
A direct Eulerian MUSCL scheme for gas dynamics
Cartesian Euler methods for arbitrary aircraft configurations
An adaptive multigrid applied to supersonic blunt body flow
Euler calculations for wings using Cartesian grids.
An adaptive multifluid interface-capturing method for compressible flow in complex geometries
An experimental study of the parallel and oblique vortex shedding from circular cylinders
A fourth-order accurate method for the incompressible Navier-Stokes equations on overlapping grids
Inviscid and viscous solutions for airfoil/cascade flows using a locally implicit algorithm on adaptive meshes
Transonic potential flow calculations using conservative form
Improvements to the aircraft Euler method.
Calculation of inviscid transonic flow over a complete aircraft.
A large time step generalization of Godunov's method for systems of conservation laws
Multidimensional numerical simulation of a pulse combustor
A hybrid structured-unstructured grid method for unsteady turbomachinery flow computations
A solution-adaptive hybrid-grid method for the unsteady analysis of turbomachinery
Accurate multigrid solution of the Euler equations on unstructured and adaptive meshes
3D applications of a Cartesian grid Euler method
A finite difference solution of the Euler equations on non-body fitted grids
Cel: A time-dependent
Prediction of critical Mach number for store config- urations
A Cartesian grid approach with hierarchical refinement for compressible flows
A fractional step solution method for the unsteady incompressible Navier-Stokes equations in generalized coordinate systems
A zonal implicit procedure for hybrid structured-unstructured grids
Computations of unsteady viscous compressible flows using adaptive mesh refinement in curvilinear body-fitted grid systems
A general decomposition algorithm applied to multi-element airfoil grids
A second-order projection method for the incompressible Navier-Stokes equations in arbitrary domains
Boundary fitted coordinate systems for numerical solution of partial differential equations - A review
A three-dimensional struc- tured/unstructured hybrid Navier-Stokes method for turbine blade rows
Mixed structured-unstructured meshes for aerodynamic flow sim- ulation
A method for solving the transonic full-potential equations for general configurations
A locally refined rectangular grid finite element method: Application to computational fluid dynamics and computational physics
fractional step method for time-dependent incompressible Navier-Stokes equations in curvilinear coor- dinates
An adaptively refined Cartesian mesh solver for the Euler equations
--TR
--CTR
Caroline Gatti-Bono , Phillip Colella, An anelastic allspeed projection method for gravitationally stratified flows, Journal of Computational Physics, v.216 August 2006
Yu-Heng Tseng , Joel H. Ferziger, A ghost-cell immersed boundary method for flow in complex geometry, Journal of Computational Physics, v.192 December
Jiun-Der Yu , Shinri Sakai , James Sethian, A coupled quadrilateral grid level set projection method applied to ink jet simulation, Journal of Computational Physics, v.206 n.1, p.227-251, 10 June 2005
Xiaolin Zhong, A new high-order immersed interface method for solving elliptic equations with imbedded interface of discontinuity, Journal of Computational Physics, v.225 n.1, p.1066-1099, July, 2007
R. Ghias , R. Mittal , H. Dong, A sharp interface immersed boundary method for compressible viscous flows, Journal of Computational Physics, v.225 n.1, p.528-553, July, 2007
Stéphane Popinet, Gerris: a tree-based adaptive solver for the incompressible Euler equations in complex geometries, Journal of Computational Physics, v.190 n.2, p.572-600, 20 September
Anvar Gilmanov , Fotis Sotiropoulos, A hybrid Cartesian/immersed boundary method for simulating flows with 3D, geometrically complex, moving bodies, Journal of Computational Physics, v.207 August 2005
M. P. Kirkpatrick , S. W. Armfield , J. H. Kent, A representation of curved boundaries for the solution of the Navier-Stokes equations on a staggered three-dimensional Cartesian grid, Journal of Computational Physics, v.184 n.1, p.1-36,
S. Marella , S. Krishnan , H. Liu , H. S. Udaykumar, Sharp interface Cartesian grid method I: an easily implemented technique for 3D moving boundary computations, Journal of Computational Physics, v.210 n.1, p.1-31, 20 November 2005 | cartesian grid;incompressible Euler equations;projection method |
271621 | The Spectral Decomposition of Nonsymmetric Matrices on Distributed Memory Parallel Computers. | The implementation and performance of a class of divide-and-conquer algorithms for computing the spectral decomposition of nonsymmetric matrices on distributed memory parallel computers are studied in this paper. After presenting a general framework, we focus on a spectral divide-and-conquer (SDC) algorithm with Newton iteration. Although the algorithm requires several times as many floating point operations as the best serial QR algorithm, it can be simply constructed from a small set of highly parallelizable matrix building blocks within Level 3 basic linear algebra subroutines (BLAS). Efficient implementations of these building blocks are available on a wide range of machines. In some ill-conditioned cases, the algorithm may lose numerical stability, but this can easily be detected and compensated for.The algorithm reached 31% efficiency with respect to the underlying PUMMA matrix multiplication and 82% efficiency with respect to the underlying ScaLAPACK matrix inversion on a 256 processor Intel Touchstone Delta system, and 41% efficiency with respect to the matrix multiplication in CMSSL on a Thinking Machines CM-5 with vector units. Our performance model predicts the performance reasonably accurately.To take advantage of the geometric nature of SDC algorithms, we have designed a graphical user interface to let the user choose the spectral decomposition according to specified regions in the complex plane. | Introduction
A standard technique in parallel computing is to build new algorithms from existing high
performance building blocks. For example, the LAPACK linear algebra library [1] is writ-
Department of Mathematics, University of Kentucky, Lexington, KY 40506.
y Computer Science Division and Mathematics Department, University of California, Berkeley, CA 94720.
z Department of Computer Science, University of Tennessee, Knoxville, TN 37996 and Mathematical
Sciences Section, Oak Ridge National Laboratory, Oak Ridge, TN 37831.
x Department of Computer Science, University of Tennessee, Knoxville, TN 37996.
- Department of Mathematics, University of California, Berkeley, CA 94720.
Computer Science Division, University of California, Berkeley, CA 94720.
ten in terms of the Basic Linear Algebra Subroutines (BLAS)[38, 23, 22], for which efficient
implementations are available on many workstations, vector processors, and shared memory
parallel machines. The recently released ScaLAPACK 1.0(beta) linear algebra library [26]
is written in terms of the Parallel Block BLAS (PB-BLAS) [15], Basic Linear Algebra Communication
Subroutines (BLACS) [25], BLAS and LAPACK. ScaLAPACK includes routines
for LU, QR and Cholesky factorizations, and matrix inversion, and has been ported to the
Intel Gamma, Delta and Paragon, Thinking Machines CM-5, and PVM clusters. The Connection
Machine Scientific Software Library (CMSSL)[54] provides analogous functionality
and high performance for the CM-5.
In this work, we use these high performance kernels to build two new algorithms for
finding eigenvalues and invariant subspaces of nonsymmetric matrices on distributed memory
parallel computers. These algorithms perform spectral divide and conquer, i.e. they
recursively divide the matrix into smaller submatrices, each of which has a subset of the
original eigenvalues as its own. One algorithm uses the matrix sign function evaluated with
Newton iteration [8, 42, 6, 4]. The other algorithm avoids the matrix inverse required by
Newton iteration, and so is called the inverse free algorithm [30, 10, 44, 7]. Both algorithms
are simply constructed from a small set of highly parallelizable building blocks, including
matrix multiplication, QR decomposition and matrix inversion, as we describe in section 2.
By using existing high performance kernels in ScaLAPACK and CMSSL, we have
achieved high efficiency. On a 256 processor Intel Touchstone Delta system, the sign function
algorithm reached 31% efficiency with respect to the underlying matrix multiplication
(PUMMA [16]) for 4000-by-4000 matrices, and 82% efficiency with respect to the underlying
ScaLAPACK 1.0 matrix inversion. On a Thinking Machines CM-5 with
vector units, the hybrid Newton-Schultz sign function algorithm obtained 41% efficiency
with respect to matrix multiplication from CMSSL 3.2 for 2048-by-2048 matrices.
The nonsymmetric spectral decomposition problem has until recently resisted attempts
at parallelization. The conventional method is to use the Hessenberg QR algorithm. One
first reduces the matrix to Schur form, and then swaps the desired eigenvalues along the
diagonal to group them together in order to form the desired invariant subspace [1]. The
algorithm had appeared to required fine grain parallelism and be difficult to parallelize
[5, 27, 57], but recently Henry and van de Geijn[32] have shown that the Hessenburg QR
algorithm phase can be effectively parallelized for distributed memory parallel computers
with up to 100 processors. Although parallel QR does not appear to be as scalable as
the algorithms presented in this paper, it may be faster on a wide range of distributed
memory parallel computers. Our algorithms perform several times as many floating point
operations as QR, but they are nearly all within Level 3 BLAS, whereas implementations
of QR performing the fewest floating point operations use less efficient Level 1 and 2 BLAS.
A thorough comparison of these algorithms will be the subject of a future paper.
Other parallel eigenproblem algorithms which have been developed include earlier par-
allelizations of the QR algorithm [29, 50, 56, 55], Hessenberg divide and conquer algorithm
using either Newton's method [24] or homotopies [17, 39, 40], and Jacobi's method
[28, 47, 48, 49]. All these methods suffer from the use of fine-grain parallelism, instability,
slow or misconvergence in the presence of clustered eigenvalues of the original problem or
some constructed subproblems [20], or all three.
The methods in this paper may be less stable than QR algorithm, and may fail to
converge in a number of circumstances. Fortunately, it is easy to detect and compensate
for this loss of stability, by choosing to divide the spectrum in a slightly different location.
Compared with the other approaches mentioned above, we believe the algorithms discussed
in this paper offer an effective tradeoff between parallelizability and stability.
The other algorithms most closely related to the approaches used here may be found
in [3, 9, 36], where symmetric matrices, or more generally matrices with real spectra, are
treated.
Another advantage of the algorithms described in this paper is that they can compute
just those eigenvalues (and the corresponding invariant subspace) in a user-specified region
of the complex plane. To help the user specify this region, we will describe a graphical user
interface for the algorithms.
The rest of this paper is organized as follows. In section 2, we present our two algorithms
for spectral divide and conquer in a single framework, show how to divide the spectrum
along arbitrary circles and lines in the complex plane, and discuss implementation details.
In section 3, we discuss the performance of our algorithms on the Intel Delta and CM-5. In
section 4, we present a model for the performance of our algorithms, and demonstrate that
it can predict the execution time reasonably accurately. Section 5 describes the design of
an X-window user interface. Section 6 draws conclusions and outlines our future work.
2 Parallel Spectral Divide and Conquer Algorithms
Both spectral divide and conquer (SDC) algorithms discussed in this paper can be presented
in the following framework. Let
$$A = X \begin{pmatrix} J_+ & 0 \\ 0 & J_- \end{pmatrix} X^{-1} \qquad (2.1)$$
be the Jordan canonical form of $A$, where the eigenvalues of the $l \times l$ submatrix $J_+$ are the
eigenvalues of $A$ inside a selected region $D$ in the complex plane, and the eigenvalues of the
$(n-l) \times (n-l)$ submatrix $J_-$ are the eigenvalues of $A$ outside $D$. We assume that there
are no eigenvalues of $A$ on the boundary of $D$; otherwise we reselect or move the region $D$
slightly. The invariant subspace of the matrix $A$ corresponding to the eigenvalues inside $D$
is spanned by the first $l$ columns of $X$. The matrix
$$P_+ = X \begin{pmatrix} I & 0 \\ 0 & 0 \end{pmatrix} X^{-1} \qquad (2.2)$$
is the corresponding spectral projector. Let $QR\Pi$ be the rank revealing QR decomposition
of the matrix $P_+$, where $Q$ is unitary, $R$ is upper triangular, and $\Pi$ is a permutation
matrix chosen so that the leading $l$ columns of $Q$ span the range space of $P_+$. Then $Q$ yields
the desired spectral decomposition:
$$Q^H A Q = \begin{pmatrix} A_{11} & A_{12} \\ 0 & A_{22} \end{pmatrix} \qquad (2.3)$$
where the eigenvalues of $A_{11}$ are the eigenvalues of $A$ inside $D$, and the eigenvalues of $A_{22}$
are the eigenvalues of $A$ outside $D$. By substituting the complementary projector $I - P_+$
for $P_+$ in (2.2), $A_{11}$ will have the eigenvalues outside $D$ and $A_{22}$ will have the eigenvalues
inside $D$.
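As a concrete (and purely illustrative) instance of this framework, the following numpy/scipy sketch builds a matrix with a known spectrum, forms the exact spectral projector P+ from the similarity transformation (a shortcut for demonstration only — the algorithms below compute P+ without eigenvectors), and uses scipy's column-pivoted QR to recover the block triangular form (2.3). The test matrix and tolerances are our own choices, not from the paper.

```python
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(0)
n, l = 6, 3
# Test matrix with three eigenvalues inside D (the right half plane)
# and three outside.
evals = np.array([2.0, 3.0, 4.0, -1.0, -2.0, -3.0])
S = rng.standard_normal((n, n))
A = S @ np.diag(evals) @ np.linalg.inv(S)

# Exact spectral projector P+ onto the right-half-plane eigenvalues
# (illustrative shortcut: the SDC algorithms never form eigenvectors).
Sinv = np.linalg.inv(S)
P = S[:, :l] @ Sinv[:l, :]

# Rank revealing QR with column pivoting: the leading l columns of Q
# span the range of P+, i.e. the invariant subspace.
Q, R, piv = qr(P, pivoting=True)

# Q block-triangularizes A as in (2.3): Q^H A Q = [A11 A12; ~0 A22].
T = Q.T @ A @ Q
backward_err = np.linalg.norm(T[l:, :l], 1) / np.linalg.norm(A, 1)
assert backward_err < 1e-8

# A11 carries exactly the eigenvalues of A inside D.
w11 = np.sort(np.linalg.eigvals(T[:l, :l]).real)
assert np.allclose(w11, [2.0, 3.0, 4.0])
```

The (2,1) block of $Q^H A Q$ is zero up to roundoff precisely because the leading columns of $Q$ span an invariant subspace of $A$.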
The crux of a parallel SDC algorithm is to efficiently compute the desired spectral
projector P+ without computing the Jordan canonical form.
2.1 The SDC algorithm with Newton iteration
The first SDC algorithm uses the matrix sign function, which was introduced by Roberts
[46] for solving the algebraic Riccati equation. However, it was soon extended to solving
the spectral decomposition problem [8]. More recent studies may be found in [11, 42, 6].
The matrix sign function, $\operatorname{sign}(A)$, of a matrix $A$ with no eigenvalues on the imaginary
axis can be defined via the Jordan canonical form of $A$ (2.1), where the eigenvalues of $J_+$
are in the open right half plane $D$, and the eigenvalues of $J_-$ are in the open left half plane
$\bar D$. Then $\operatorname{sign}(A)$ is
$$\operatorname{sign}(A) = X \begin{pmatrix} I & 0 \\ 0 & -I \end{pmatrix} X^{-1}.$$
It is easy to see that the matrix
$$P_+ = \tfrac{1}{2}(\operatorname{sign}(A) + I)$$
is the spectral projector onto the invariant subspace corresponding to the eigenvalues of $A$
in $D$, and $l = \operatorname{rank}(P_+)$ is the number of the eigenvalues of $A$ in $D$. The matrix
$I - P_+ = \tfrac{1}{2}(I - \operatorname{sign}(A))$ is the spectral projector corresponding to the eigenvalues of $A$ in $\bar D$.

Now let $QR\Pi$ be the rank revealing QR decomposition of the projector $P_+$. Then
$Q$ yields the desired spectral decomposition (2.3), where the eigenvalues of $A_{11}$ are the
eigenvalues of $A$ in $D$, and the eigenvalues of $A_{22}$ are the eigenvalues of $A$ in $\bar D$.
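These definitions are easy to sanity-check numerically. The toy sketch below (a diagonalizable 4 × 4 example of our own) builds sign(A) from a similarity transformation and verifies that P+ = (sign(A) + I)/2 is a projector whose trace counts the eigenvalues in D, and that sign(A)^2 = I.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 4
w = np.array([1.0, 3.0, -2.0, -4.0])    # two eigenvalues in each half plane
S = rng.standard_normal((n, n))
Sinv = np.linalg.inv(S)
A = S @ np.diag(w) @ Sinv
signA = S @ np.diag(np.sign(w)) @ Sinv  # sign(A) via the eigenstructure

P = 0.5 * (signA + np.eye(n))           # P+ = (sign(A) + I)/2
assert np.allclose(P @ P, P)            # P+ is a projector
assert round(np.trace(P)) == 2          # trace(P+) = rank(P+) = l
assert np.allclose(signA @ signA, np.eye(n))   # sign(A)^2 = I
```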
Since the matrix sign function, $\operatorname{sign}(A)$, satisfies the matrix equation
$$\operatorname{sign}(A)^2 = I,$$
we can use Newton's method to solve this matrix equation and obtain the following simple
iteration:
$$A_{j+1} = \tfrac{1}{2}\left(A_j + A_j^{-1}\right), \qquad A_0 = A. \qquad (2.5)$$
The iteration is globally and ultimately quadratically convergent with $\lim_{j\to\infty} A_j = \operatorname{sign}(A)$,
provided $A$ has no pure imaginary eigenvalues [46, 35]. The iteration fails otherwise, and
in finite precision, the iteration could converge slowly or not at all if $A$ is "close" to having
pure imaginary eigenvalues.
There are many ways to improve the accuracy and convergence rate of this basic iteration
[12, 33, 37]. For example, one may use the so-called Newton-Schulz iteration
$$A_{j+1} = \tfrac{1}{2} A_j \left(3I - A_j^2\right) \qquad (2.6)$$
to avoid the use of the matrix inverse. Although it requires twice as many flops, it is more
efficient whenever matrix multiply is at least twice as efficient as matrix inversion. The
Newton-Schulz iteration is also quadratically convergent provided that $\|I - A_j^2\| < 1$. A
hybrid iteration might begin with the Newton iteration until $\|A_j^2 - I\|$ is small enough, and
then switch to the Newton-Schulz iteration (we discuss the performance of one such hybrid later).
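Sketched below is one such hybrid in numpy — a toy serial version. The switching threshold of 0.5 on ||A_j^2 − I||_1, the tolerances, and the test matrix are illustrative choices of ours, not the tuned values from the experiments reported later.

```python
import numpy as np

def matrix_sign_hybrid(A, tol=1e-12, switch=0.5, maxit=100):
    """Newton steps (2.5) until ||A_j^2 - I||_1 < switch, so that the
    Newton-Schulz iteration (2.6) is safely convergent, then
    inverse-free Newton-Schulz steps until stagnation."""
    n = A.shape[0]
    I = np.eye(n)
    X = A.copy()
    for _ in range(maxit):
        if np.linalg.norm(X @ X - I, 1) < switch:
            break
        X = 0.5 * (X + np.linalg.inv(X))       # Newton (2.5)
    for _ in range(maxit):
        X_new = 0.5 * X @ (3.0 * I - X @ X)    # Newton-Schulz (2.6)
        if np.linalg.norm(X_new - X, 1) < tol * np.linalg.norm(X, 1):
            return X_new
        X = X_new
    return X

rng = np.random.default_rng(1)
n = 5
evals = np.array([1.0, 2.0, 5.0, -3.0, -4.0])
S = rng.standard_normal((n, n))
A = S @ np.diag(evals) @ np.linalg.inv(S)

signA = matrix_sign_hybrid(A)
expected = S @ np.diag(np.sign(evals)) @ np.linalg.inv(S)
assert np.allclose(signA, expected, atol=1e-6)
```

Only the first phase performs matrix inversions; once the iterate is close enough to $\operatorname{sign}(A)$, all remaining work is matrix multiplication.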
Hence, we have the following algorithm which divides the spectrum along the pure
imaginary axis.
Algorithm 1 (The SDC Algorithm with Newton Iteration)
    Let A_0 = A
    For j = 0, 1, 2, ... until convergence or j > j_max do
        A_{j+1} = (A_j + A_j^{-1}) / 2
        if ||A_{j+1} - A_j||_1 <= tau ||A_j||_1, exit
    End
    Compute P_+ = (A_{j+1} + I) / 2
    Compute the rank revealing QR decomposition  P_+ = Q R Pi,  l = rank(P_+)
    Compute
        Q^H A Q = ( A_11  A_12 )
                  ( E_21  A_22 )
    Compute e = ||E_21||_1 / ||A||_1

Here $\tau$ is the stopping criterion for the Newton iteration (a modest multiple of the
machine precision $\varepsilon$), and $j_{\max}$ limits the maximum number of iterations. On
return, the generally nonzero quantity $e = \|E_{21}\|_1/\|A\|_1$ measures the backward stability of the
computed decomposition, since by setting $E_{21}$ to zero and so decoupling the problem into
$A_{11}$ and $A_{22}$, a backward error of $e\,\|A\|_1$ is introduced.
For simplicity, we just use the QR decomposition with column pivoting to reveal rank,
although more sophisticated rank-revealing schemes exist [14, 31, 34, 51].
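A serial numpy/scipy sketch of Algorithm 1 follows. It is illustrative only: it uses a dense explicit inverse, a coarse convergence test, and determines l from trace(P+) — exploiting the fact that a projector's trace equals its rank — rather than from the diagonal of R.

```python
import numpy as np
from scipy.linalg import qr

def sdc_newton(A, tau=1e-12, jmax=60):
    """Algorithm 1 sketch: Newton iteration for sign(A), then pivoted
    QR of P+ = (sign(A) + I)/2 to block-triangularize A. Returns the
    orthogonal Q, T = Q^T A Q, the split size l, and the stability
    measure e = ||E21||_1 / ||A||_1."""
    n = A.shape[0]
    X = A.copy()
    for _ in range(jmax):
        X_new = 0.5 * (X + np.linalg.inv(X))
        done = np.linalg.norm(X_new - X, 1) <= tau * np.linalg.norm(X, 1)
        X = X_new
        if done:
            break
    P = 0.5 * (X + np.eye(n))         # spectral projector P+
    l = int(round(np.trace(P)))       # trace of a projector = its rank
    Q, R, piv = qr(P, pivoting=True)  # rank revealing QR
    T = Q.T @ A @ Q
    e = np.linalg.norm(T[l:, :l], 1) / np.linalg.norm(A, 1)
    return Q, T, l, e

rng = np.random.default_rng(2)
S = rng.standard_normal((6, 6))
evals = np.array([1.0, 2.0, 3.0, -1.0, -2.0, -3.0])
A = S @ np.diag(evals) @ np.linalg.inv(S)

Q, T, l, e = sdc_newton(A)
assert l == 3 and e < 1e-8
assert np.allclose(np.sort(np.linalg.eigvals(T[:l, :l]).real), [1.0, 2.0, 3.0])
```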
All the variations of the Newton iteration with global convergence still need to compute
the inverse of a matrix explicitly in one form or another. Dealing with ill-conditioned
matrices and instability in the Newton iteration for computing the matrix sign function
and the subsequent spectral decomposition is discussed in [11, 6, 4] and the references
therein.
2.2 The SDC algorithm with inverse free iteration
The above algorithm needs an explicit matrix inverse. This could cause numerical instability
when the matrix is ill-conditioned. The following algorithm, originally due to Godunov,
Bulgakov and Malyshev [30, 10, 44] and modified by Bai, Demmel and Gu [7], eliminates
the need for the matrix inverse, and divides the spectrum along the unit circle instead of
the imaginary axis. We first describe the algorithm, and then briefly explain why it works.
Algorithm 2 (The SDC Algorithm with Inverse Free Iteration)
    Let A_0 = A, B_0 = I
    For j = 0, 1, 2, ... until convergence or j > j_max do
        Compute the QR decomposition
            (  B_j )   ( Q_11  Q_12 ) ( R_j )
            ( -A_j ) = ( Q_21  Q_22 ) (  0  )
        A_{j+1} = Q_12^H A_j
        B_{j+1} = Q_22^H B_j
    End
    Compute P_+ = (A_{j+1} + B_{j+1})^{-1} B_{j+1}
    Compute the rank revealing QR decomposition  P_+ = Q R Pi,  l = rank(P_+)
    Compute
        Q^H A Q = ( A_11  A_12 )
                  ( E_21  A_22 )
    Compute e = ||E_21||_1 / ||A||_1

As in Algorithm 1, we need to choose a stopping criterion $\tau$ in the inner loop, as well as
a limit $j_{\max}$ on the maximum number of iterations. On convergence, the eigenvalues of
$A_{11}$ are the eigenvalues of $A$ inside the unit disk $D$, and the eigenvalues of $A_{22}$ are the
eigenvalues of $A$ outside $D$. It is assumed that no eigenvalues of $A$ are on the unit circle.
As with Algorithm 1, the quantity $e = \|E_{21}\|_1/\|A\|_1$ measures the backward stability.
To illustrate how the algorithm works, we will assume that all matrices we want to invert
are invertible. From the inner loop of the algorithm, we see that $Q_{11}R_j = B_j$ and
$Q_{21}R_j = -A_j$, so $Q_{21}Q_{11}^{-1} = -A_j B_j^{-1}$. Since $Q$ is unitary,
$Q_{12}^H Q_{11} + Q_{22}^H Q_{21} = 0$, and therefore
$Q_{22}^{-H} Q_{12}^H = -Q_{21}Q_{11}^{-1} = A_j B_j^{-1}$. Hence
$$B_{j+1}^{-1} A_{j+1} = B_j^{-1} Q_{22}^{-H} Q_{12}^H A_j = \left(B_j^{-1} A_j\right)^2,$$
so the algorithm is simply repeatedly squaring the eigenvalues, driving the ones inside the
unit circle to 0 and those outside to $\infty$. Repeated squaring yields quadratic convergence.
This is analogous to the sign function iteration, where computing $(A + A^{-1})/2$ is equivalent
to taking the Cayley transform, squaring, and taking the inverse
Cayley transform. Further explanation of how the algorithm works can be found in [7].
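The squaring identity can be checked numerically. The sketch below (with an illustrative diagonalizable test matrix of our own) implements the inner step by explicitly forming the full 2n × 2n orthogonal factor — which, as noted below, a real implementation would avoid.

```python
import numpy as np
from scipy.linalg import qr

def inverse_free_step(A, B):
    """One inner step of Algorithm 2: QR-factor the stacked 2n x n
    matrix [B; -A] and update with the (1,2) and (2,2) blocks of the
    full 2n x 2n Q (formed explicitly here only for clarity)."""
    n = A.shape[0]
    Q, _ = qr(np.vstack([B, -A]))
    return Q[:n, n:].T @ A, Q[n:, n:].T @ B

rng = np.random.default_rng(3)
n = 5
evals = np.array([0.3, 0.5, 2.0, 3.0, 4.0])   # two inside the unit circle
S = rng.standard_normal((n, n))
A0 = S @ np.diag(evals) @ np.linalg.inv(S)
A, B = A0.copy(), np.eye(n)

# One step squares the generalized eigenvalues: B1^{-1} A1 = (B^{-1} A)^2.
A1, B1 = inverse_free_step(A, B)
assert np.allclose(np.linalg.solve(B1, A1), A0 @ A0)

# After a few steps, (A_j + B_j)^{-1} B_j converges to the spectral
# projector onto the eigenvalues inside the unit disk.
for _ in range(6):
    A, B = inverse_free_step(A, B)
P = np.linalg.solve(A + B, B)
assert np.allclose(P @ P, P, atol=1e-8)
assert round(np.trace(P)) == 2
```

Note that although the implicit matrix $B_j^{-1}A_j = A^{2^j}$ blows up for eigenvalues outside the unit circle, the stored iterates $A_j$ and $B_j$ stay bounded — this is the point of the inverse free formulation.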
An attraction of this algorithm is that it can equally well deal with the generalized
nonsymmetric eigenproblem $A - \lambda B$, provided the problem is regular, i.e. $\det(A - \lambda B)$ is
not identically zero. One simply has to start the algorithm with $B_0 = B$ instead of $B_0 = I$.

Regarding the QR decomposition in the inner loop, there is no need to form the entire
$2n \times 2n$ unitary matrix $Q$ in order to get the submatrices $Q_{12}$ and $Q_{22}$. Instead, we can
compute the QR decomposition of the $2n \times n$ matrix $(B_j^H, -A_j^H)^H$, storing $Q$
implicitly as Householder vectors in the lower triangular part of the matrix and another $n$-dimensional
array. We can then apply $Q$ - without computing it - to the $2n \times n$ matrix
$(0, I)^T$ to obtain the desired matrices $Q_{12}$ and $Q_{22}$.
We now show how to compute $Q$ in the rank revealing QR decomposition of $(A_j + B_j)^{-1} B_j$ without
computing the explicit inverse or the subsequent products. This
will yield the ultimate inverse free algorithm. Recall that for our purposes, we only need the
unitary factor $Q$ and the rank of $(A_j + B_j)^{-1} B_j$. It turns out that by using the generalized
QR decomposition technique developed in [45, 2], we can get the desired information without
computing $(A_j + B_j)^{-1} B_j$. In fact, in order to compute the QR decomposition with pivoting
of $(A_j + B_j)^{-1} B_j$, we first compute the QR decomposition with pivoting of the matrix $B_j$,
$$B_j \Pi = \tilde Q R_1, \qquad (2.7)$$
and then we compute the RQ factorization of the matrix $\tilde Q^H (A_j + B_j)$,
$$\tilde Q^H (A_j + B_j) = R_2 Q. \qquad (2.8)$$
From (2.7) and (2.8), we have $(A_j + B_j)^{-1} B_j \Pi = Q^H \left(R_2^{-1} R_1\right)$, where
$R_2^{-1} R_1$ is upper triangular. The $Q^H$ is the desired unitary
factor. The rank of $R_1$ is also the rank of the matrix $(A_j + B_j)^{-1} B_j$.
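The following scipy sketch (our own construction, with random A_j and B_j) verifies this generalized QR factorization; no explicit inverse of A_j + B_j is ever formed on the factored path.

```python
import numpy as np
from scipy.linalg import qr, rq, solve_triangular

rng = np.random.default_rng(4)
n = 5
Aj = rng.standard_normal((n, n))
Bj = rng.standard_normal((n, n))

# Step 1 (2.7): QR with column pivoting of B_j:  B_j @ Pi = Qt @ R1.
Qt, R1, piv = qr(Bj, pivoting=True)
Pi = np.eye(n)[:, piv]

# Step 2 (2.8): RQ factorization:  Qt^T (A_j + B_j) = R2 @ Z.
R2, Z = rq(Qt.T @ (Aj + Bj))

# Then (A_j+B_j)^{-1} B_j Pi = Z^T (R2^{-1} R1): unitary factor Z^T
# times an upper triangular factor, with no explicit inverse needed.
T = solve_triangular(R2, R1)
M = np.linalg.solve(Aj + Bj, Bj)      # explicit form, for checking only
assert np.allclose(M @ Pi, Z.T @ T)
assert np.allclose(np.triu(T), T)     # R2^{-1} R1 is upper triangular
```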
2.3 Spectral Transformation Techniques
Although Algorithms 1 and 2 only divide the spectrum along the pure imaginary axis and
the unit circle, respectively, we can use Möbius and other simple transformations of the
input matrix A to divide along other more general curves. As a result, we can compute
the eigenvalues (and corresponding invariant subspace) inside any region defined as the
intersection of regions defined by these curves. This is a major attraction of this kind of
algorithm.
Let us show how to use Möbius transformations to divide the spectrum along arbitrary
lines and circles. Transform the eigenproblem $Az = \lambda z$ to
$$(\alpha A + \beta I)z = \frac{\alpha\lambda + \beta}{\gamma\lambda + \delta}\,(\gamma A + \delta I)z.$$
Then if we apply Algorithm 1 to $A_0 = (\alpha A + \beta I)(\gamma A + \delta I)^{-1}$, we can split the spectrum
with respect to the region
$$\operatorname{Re}\!\left(\frac{\alpha\lambda + \beta}{\gamma\lambda + \delta}\right) > 0.$$
If we apply Algorithm 2 to $(A_0, B_0) = (\alpha A + \beta I,\ \gamma A + \delta I)$, we can split along the curve
$$\left|\frac{\alpha\lambda + \beta}{\gamma\lambda + \delta}\right| = 1.$$
For example, by computing the matrix sign function of
$$A_0 = \left(A - (\alpha + r)I\right)\left(A - (\alpha - r)I\right)^{-1},$$
Algorithm 1 will split the spectrum of $A$ along a circle centered at $\alpha$ with radius $r$. If
$A$ is real, and we choose $\alpha$ to be real, then all arithmetic will be real.

If $A_0 = A - \alpha I$ and $B_0 = rI$, Algorithm 2 will split the spectrum of $A$ along a circle
centered at $\alpha$ with radius $r$. If $A$ is real, and we choose $\alpha$ to be real, then all arithmetic in
the algorithm will be real.
Figure 1: Different Geometric Regions for the Spectral Decomposition
Other more general regions can be obtained by taking $A_0$ as a polynomial function of
$A$. For example, by computing the matrix sign function of $A_0 = (A - \alpha I)^2$, we can divide the
spectrum within a "bowtie" shaped region centered at $\alpha$. Figure 1 illustrates the regions
which the algorithms can deal with, assuming that $A$ is real and the algorithms use only
real arithmetic.
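As an illustration of the circle split, the sketch below runs the inverse free iteration on A_0 = A − αI, B_0 = rI. The test matrix, the region |λ − 5| < 2, and the fixed iteration count are our own illustrative choices.

```python
import numpy as np
from scipy.linalg import qr

def inverse_free_step(A, B):
    # one inner step of Algorithm 2 (see section 2.2)
    n = A.shape[0]
    Q, _ = qr(np.vstack([B, -A]))
    return Q[:n, n:].T @ A, Q[n:, n:].T @ B

rng = np.random.default_rng(5)
n = 6
evals = np.array([4.5, 5.0, 5.5, 1.0, 9.0, 10.0])  # three inside |z - 5| = 2
S = rng.standard_normal((n, n))
A = S @ np.diag(evals) @ np.linalg.inv(S)
alpha, r = 5.0, 2.0

# Split along the circle |lambda - alpha| = r.
Aj, Bj = A - alpha * np.eye(n), r * np.eye(n)
for _ in range(8):
    Aj, Bj = inverse_free_step(Aj, Bj)
P = np.linalg.solve(Aj + Bj, Bj)     # projector onto |lambda - alpha| < r
l = int(round(np.trace(P)))
assert l == 3

Q, R, piv = qr(P, pivoting=True)
T = Q.T @ A @ Q
inside = np.sort(np.linalg.eigvals(T[:l, :l]).real)
assert np.allclose(inside, [4.5, 5.0, 5.5])
```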
2.4 Tradeoffs
Algorithm 1 computes an explicit inverse, which could cause numerical instability if the
matrix is ill-conditioned. The inverse free algorithm provides an alternative approach for
achieving better numerical stability. There are some very difficult problems where Algorithm
2 gives a more accurate answer than Algorithm 1. Numerical examples can be found in [7].
However, neither algorithm avoids all accuracy and convergence difficulties associated with
eigenvalues very close to the boundary of the selected region.
The stability advantage of the inverse free approach is obtained at the cost of more
storage and arithmetic. Algorithm 2 needs $4n^2$ more storage space than Algorithm 1. This
will certainly limit the problem size we will be able to solve. Furthermore, one step of
Algorithm 2 does about 6 to 7 times more arithmetic than one step of Algorithm 1.
QR decomposition, the major component of Algorithm 2, and matrix inversion, the main
component of Algorithm 1, require comparable amounts of communication per flop. (See
table 4 for details.) Therefore, Algorithm 2 can be expected to run significantly slower than
Algorithm 1.
Algorithm 1 is faster but somewhat less stable than Algorithm 2, and since testing
stability is easy (compute $e = \|E_{21}\|_1/\|A\|_1$), we use the following 3 step algorithm:
1. Try to use Algorithm 1 to split the spectrum. If it succeeds, stop.
2. Otherwise, try to split the spectrum using Algorithm 2. If it succeeds, stop.
3. Otherwise, use the QR algorithm.
This 3-step approach works by trying the fastest but least stable method first, falling back
to slower but more stable methods only if necessary. The same paradigm is also used in
other parallel algorithms [19].
If a fast parallel version of the QR algorithm[32] becomes available, it would probably
be faster than the inverse free algorithm and hence would obviate the need for the second
step listed above. Algorithm 2 would still be of interest if only a subset of the spectrum is
desired (the QR algorithm necessarily computes the entire spectrum), or for the generalized
eigenproblem of a matrix pencil A \Gamma -B.
3 Implementation and Performance
We started with a Fortran 77 implementation of Algorithm 1. This code is built using the
BLAS and LAPACK for the basic matrix operations, such as LU decomposition, triangular
inversion, QR decomposition and so on. Initially, we tested our software on SUN and
IBM RS6000 workstations, and then the CRAY. Some preliminary performance data of
the matrix sign function based algorithm have been reported in [6]. In this report, we will
focus on the implementation and performance evaluation of the algorithms on distributed
memory parallel machines, namely the Intel Delta and the CM-5.
We have implemented Algorithm 1, and collected a large set of data for the performance
of the primitive matrix operation subroutines on our target machines. More performance
evaluation and comparison of these two algorithms and their applications are in progress.
3.1 Implementation and Performance on the Intel Touchstone Delta

The Intel Touchstone Delta computer system is a 16 × 32 mesh of i860 processors with a
wormhole routing interconnection network [41], located at the California Institute of Technology
on behalf of the Concurrent Supercomputing Consortium. The Delta's communication
characteristics are described in [43].
In order to implement Algorithm 1, it was natural to rely on the ScaLAPACK 1.0
library (beta version) [26]. This choice requires us to exploit two key design features of this
package. First, the ScaLAPACK library relies on the Parallel Block BLAS (PB-BLAS)[15],
which hides much of the interprocessor communication. This hiding of communication
makes it possible to express most algorithms using only the PB-BLAS, thus avoiding explicit
calls to communication routines. The PB-BLAS are implemented on top of calls to the
BLAS and to the Basic Linear Algebra Communication Subroutines (BLACS)[25]. Second,
ScaLAPACK assumes that the data is distributed according to the square block cyclic
decomposition scheme, which allows the routines to achieve well balanced computations
and to minimize communication costs. ScaLAPACK includes subroutines for LU, QR and
Cholesky factorizations, which we use as building blocks for our implementation. The
PUMMA routines [16] provide the required matrix multiplication.
Figure 2: Performance (timing in seconds, and Mflops per node) of the ScaLAPACK 1.0
(beta version) GEMM, INV and QRP subroutines on 256 (16 × 16) PEs of the Intel
Touchstone Delta system.
The matrix inversion is done in two steps. After the LU factorization has been computed,
the upper triangular U matrix is inverted, and A \Gamma1 is obtained by substitution with L. Using
blocked operations leads to performance comparable to that obtained for LU factorization.
The implementation of the QR factorization with or without column pivoting is based
on the parallel algorithm presented by Coleman and Plassmann [18]. The QR factorization
with column pivoting has a much larger sequential component, processing one column at
a time, and needs to update the norms of the column vectors at each step. This makes
using blocked operations impossible and induces high synchronization overheads. However,
as we will see, the cost of this step remains negligible in comparison with the time spent in
the Newton iteration. Unlike QR factorization with pivoting, the QR factorization without
pivoting and the post- and pre-multiplication by an orthogonal matrix do use blocked
operations.
Figure
2 plots the timing results obtained by the PUMMA package using the BLACS
for the general matrix multiplication, and ScaLAPACK 1.0 (beta version) subroutines for
the matrix inversion, QR decomposition with and without column pivoting. Corresponding
tabular data can be found in the Appendix.
To measure the efficiency of Algorithm 1, we generated random matrices of different
sizes, all of whose entries are normally distributed with mean 0 and variance 1. All computations
were performed in real double precision arithmetic. Table 1 lists the measured
results of the backward error, the number of Newton iterations, the total CPU time and the
megaflops rate. In particular, the second column of the table contains the backward errors
and the number of the Newton iterations in parentheses. We note that the convergence
rate is problem-data dependent. From Table 1, we see that for a 4000-by-4000 matrix, the
algorithm reached 7.19/23.12=31% efficiency with respect to PUMMA matrix multiplica-
tion, and 7.19/8.70=82% efficiency with respect to the underlying ScaLAPACK 1.0 (beta)
matrix inversion subroutine. As our performance model shows, and tables 9, 10, 11, 12,
and 14 confirm, efficiency will continue to improve as the matrix size n increases. Our
Table 1: Backward accuracy (with the number of Newton iterations in parentheses), timing
in seconds, and megaflops (total, per node, GEMM per node, and INV per node) of
Algorithm 1 on a 256 node Intel Touchstone Delta system.
Figure 3: Performance (Gflops) of Algorithm 1 on the Intel Delta system as a function of
matrix size for different numbers of processors.
performance model is explained in section 4. Figure 3 shows the performance of Algorithm
1 on the Intel Delta system as a function of matrix size for different numbers of processors.
Table 2 gives details of the total CPU timing of the Newton iteration based algorithm
(summarized in Table 1). It is clear that the Newton iteration (sign function) is most
expensive, and takes about 90% of the total running time.
To compare with the standard sequential algorithm, we also ran the LAPACK driver
routine DGEES for computing the Schur decomposition (with reordering of eigenvalues) on
one i860 processor. It took 592 seconds for a matrix of order 600, or 9.1 megaflops/second.
Assuming that the time scales like $n^3$, we can predict that for a matrix of order 4000, if the
matrix were able to fit on a single node, then DGEES would take 175,000 seconds (48 hours)
to compute the desired spectral decomposition. In contrast, Algorithm 1 would only take
1,436 seconds (24 minutes). This is about 120 times faster! However, we should note that
DGEES actually computes a complete Schur decomposition with the necessary reordering
of the spectrum. Algorithm 1 only decomposes the spectrum along the pure imaginary axis.
In some applications, this may be what the users want. If the decomposition along a finer
region or a complete Schur decomposition is desired, then the cost of the Newton iteration
based algorithms will be increased, though it is likely that the first step just described will
Table 2: Performance profile on a 256 processor Intel Touchstone Delta system (time in
seconds, with percentage of the total in parentheses):

    n       Newton iteration    QRP           GEMM          Total
    1000    123.06 (91%)        6.87 (5%)     4.27 (5%)     134.22
    2000    413.95 (92%)        18.60 (4%)    16.13 (4%)    448.69
    3000    717.04 (90%)        36.76 (5%)    38.37 (5%)    792.18
take most of the time [13].
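The n^3 extrapolation above is easy to reproduce (592 and 1,436 seconds and the orders 600 and 4000 are the measured values quoted in the text):

```python
# Extrapolating the measured one-node DGEES time with an O(n^3) cost model.
t600 = 592.0                            # seconds for n = 600 on one i860
t4000 = t600 * (4000.0 / 600.0) ** 3    # predicted time for n = 4000
assert abs(t4000 - 175_000) < 1_000     # about 175,000 seconds
assert 48 < t4000 / 3600 < 49           # roughly 48 hours
assert 120 < t4000 / 1436 < 125         # about 120x Algorithm 1's 1,436 s
```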
3.2 Implementation and Performance on the CM-5
The Thinking Machines CM-5 was introduced in 1991. The tests in this section were run on
a 32 processor CM-5 at the University of California at Berkeley. Each CM-5 node contains a
Sparc processor with an FPU and 64 KB cache, four vector floating point units, and local
memory. The front end is a 33 MHz Sparc with 32 MB of memory. With the vector
units, the peak 64-bit floating point performance is 128 megaflops per node (32 megaflops
per vector unit). See [53] for more details.
Algorithm 1 was implemented in CM Fortran (CMF) version 2.1 - an implementation of
Fortran 77 supplemented with array-processing extensions from the ANSI and ISO (draft)
standard Fortran 90 [53]. CMF arrays come in two flavors. They can be distributed across
CM processor memory (in some user defined layout) or allocated in normal column major
fashion on the front end alone. When the front end computer executes a CM Fortran pro-
gram, it performs serial operations on scalar data stored in its own memory, but sends any
instructions for array operations to the CM. On receiving an instruction, each node executes
it on its own data. When necessary, CM processors can access each other's memory by any
of three communication mechanisms, but these are transparent to the CMF programmer
[52].
We also used CMSSL version 3.2, [54], TMC's library of numerical linear algebra rou-
tines. CMSSL provides data parallel implementations of many standard linear algebra
routines, and is designed to be used with CMF and to exploit the vector units.
CMSSL's QR factorization (available with or without pivoting) uses standard Householder
transformations. Column blocking can be performed at the user's discretion to
improve load balance and increase parallelism. Scaling is available to avoid situations when
a column norm is close to underflow or overflow, but this is an expensive "insurance
policy". Scaling is not used in our current CM-5 code, but should perhaps be made available
in our toolbox for the informed user. The QR with pivoting (QRP) factorization routine,
which we shall use to reveal rank, is about half as fast as QR without pivoting. This is
due in part to the elimination of blocking techniques when pivoting, as columns must be
processed sequentially.
Figure 4: Performance (timing in seconds, and Mflops per node) of the CMSSL 3.2 GEMM,
INV and QRP subroutines on 32 PEs with VUs CM-5.

Gaussian elimination with or without partial pivoting is available to compute LU factorizations
and perform back substitution to solve a system of equations. Matrix inversion is
performed by solving the system $AX = I$. The LU factors can be obtained separately - to
support Balzer's and Byers' scaling schemes to accelerate the convergence of Newton, and
which require a determinant computation - and there is a routine for estimating $\|A^{-1}\|_1$
from the LU factors to detect ill-conditioning of intermediate matrices in the Newton iter-
ation. Both the factorization and inversion routines balance load by permuting the matrix,
and blocking (as specified by the user) is used to improve performance.
The LU, QR and Matrix multiplication routines all have "out-of-core" counterparts to
support matrices/systems that are too large to fit in main memory. Our current CM5
implementation of the SDC algorithms does not use any of the out-of-core routines, but in
principle our algorithms will permit out-of-core solutions to be used.
Figure 4 summarizes the performance of the CMSSL routines underlying this implementation
of Algorithm 1.
We tested the Newton-Schulz iteration based algorithm for computing the spectral decomposition
along the pure imaginary axis, since matrix multiplication can be twice as
fast as matrix inversion; see Figure 4. The entries of random test matrices were uniformly
distributed on $[-1, 1]$. We used a threshold on $\|A_j^2 - I\|$, growing with $n$, as the switching criterion
from the Newton iteration (2.5) to the Newton-Schulz iteration (2.6); i.e., we relaxed the
theoretical convergence condition $\|I - A_j^2\| < 1$ for the Newton-Schulz iteration,
because this optimized performance over the test cases we ran.
Table 3 shows the measured results of the backward accuracy, total CPU time and
megaflops rate. The second column of the table is the backward error, the number of
Newton iterations and the number of the Newton-Schulz iterations, respectively. From
the table, we see that, compared to CMSSL 3.2 matrix multiplication performance, we
obtain 32% to 45% efficiency for matrix sizes from 512 to 2048 - even faster than the
CMSSL 3.2 matrix inverse subroutine.
We profiled the total CPU time on each phase of the algorithm, and found that about
83% of total time is spent on the Newton iteration, 9% on the QR decomposition with pivot-
Table 3: Backward accuracy (with the numbers of Newton and Newton-Schulz iterations
in parentheses), actual and predicted timing in seconds, and megaflops (total, per node,
GEMM per node, and inverse per node) of the SDC algorithm with Newton-Schulz iteration
on a 32 PE CM-5 with VUs.
ing, and 7.5% on the matrix multiplication for the Newton-Schulz iteration and orthogonal
transformations.
4 Performance Model
Our model is based on the actual operation counts of the ScaLAPACK implementation and
the following problem parameters and (measured) machine parameters:

    n         Matrix size
    p         Number of processors
    b         Block size (in the 2D block cyclic matrix data layout) [20]
    t_lat     Time required to send a zero length message from one processor to another
    t_band    Time required to send one double word from one processor to another
    t_DGEMM   Time required per BLAS3 floating point operation
Models for each of the building blocks are given in Table 4. Each model was created
by counting the actual operations in the critical path. The load imbalance cost represents
the discrepancy between the amount of work which the busiest processor must perform and
the amount of work which the other processors must perform. Each of the models for the
building blocks were validated against the performance data shown in the appendix. The
load imbalance increases as the block size increases.
Because it is based on operation counts, we can not only predict performance, but also
estimate the importance of various suggested modifications either to the algorithm, the
implementation or the hardware. In general, predicting performance is risky because there
are so many factors which control actual performance, including the compiler and various
library routines. However, since the majority of the time spent in Algorithm 1 is spent in
either the BLACS or the level 3 PB-BLAS[15] (which are in turn implemented as calls to the
BLACS[25] and the BLAS[38, 23, 22]), as long as the performance of the BLACS and the BLAS
Table 4: Models for each of the building blocks. For each task (TRI, matrix multiply,
Householder application), the model gives the computation cost, the communication cost
(split into latency and bandwidth^-1 terms), and the load imbalance cost (split into
computation and bandwidth^-1 terms).
Table 5: Model of Algorithm 1, with costs broken down into those due to matrix inversions,
those due to Householder applications, and the total. The latency cost (in units of n t_lat)
is 160 + 20 lg p for the inversions and 3 lg p for the applications, or 160 + 23 lg p in total;
the imbalanced bandwidth cost (in units of b n t_band) is 20 + 35 lg p. The remaining rows
give the computation cost (proportional to n^3 per processor), the bandwidth cost
(proportional to n^2), and the imbalanced computation cost (proportional to b n^2).
are well understood and the input matrix is not too small, we can predict the performance
of Algorithm 1 on any distributed memory parallel computer. In Table 5, the predicted
running time of each of the steps of Algorithm 1 is displayed. Summing the times in Table 5
yields equation (4.9).
Using the measured machine parameters given in Table 8 with equation (4.9) yields the predicted
times in Table 7 and Table 3. To get Table 4 and Table 5 and hence equation (4.9), we
have made a number of simplifying assumptions based on our empirical results. We assume
that 20 Newton iterations are required. We assume that the time required to send a single
message of d double words is τ_lat + d·τ_band, regardless of how many messages are being sent
in the system. Although there are many patterns of communication in the ScaLAPACK
implementation, the majority of the communication time is spent in collective communications,
i.e. broadcasts and reductions over rows or columns. We therefore choose τ_lat and
τ_band based on programs that measure the performance of collective communications. We
assume a perfectly square √p-by-√p processor grid. These assumptions allow us to keep
the model simple and understandable, but limit its accuracy somewhat.
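The structure of the resulting prediction can be sketched in a few lines of Python. The cost coefficients below are illustrative placeholders, not the counted constants of Table 5; the sketch only shows how a flops term, a latency term and a bandwidth term are combined from machine parameters of the kind listed above (time per BLAS3 flop, message latency, time per double word).

```python
import math

def predicted_time(n, p, t_flop, t_lat, t_band,
                   c_flop=30.0, c_lat=160.0, c_band=10.0):
    """Assemble a predicted run time from a linear cost model.

    c_flop, c_lat and c_band are hypothetical coefficients standing
    in for counted operations; t_flop, t_lat and t_band are the
    machine parameters (all times in the same unit).
    """
    lg_p = math.log2(p)
    t_comp = c_flop * n**3 / p * t_flop            # O(n^3/p) flops
    t_latency = c_lat * n * lg_p * t_lat           # message start-ups
    t_bw = c_band * n**2 / math.sqrt(p) * t_band   # data volume
    return t_comp + t_latency + t_bw
```

For large n the cubic computation term dominates, so the accuracy of such a model there rests mainly on the per-flop time.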
Table 6: Performance of the Newton iteration based algorithm (Algorithm 1) for the spectral decomposition along the pure imaginary axis, all backward errors (timings in seconds for n = 2000 and n = 3000).
Table 7: Predicted performance of the Newton iteration based algorithm (Algorithm 1) for the spectral decomposition along the pure imaginary axis (times in seconds).

    n      actual   predicted   actual   predicted   actual   predicted
  2000     502.57     444.3     448.69     362.3     336.34     310.8
  3000    1037.03     994.7     792.18     756.8     576.68     610.4
As Tables 6 and 7 show, our model underestimates the actual time on the Delta by no
more than 20% for the machine and problem sizes that we timed. Table 3 shows that our
model matches the performance on the CM5 to within 25% for all problem sizes except the
smallest.
The main sources of error in our model are:
1. uncounted operations, such as small BLAS1 and BLAS2 calls, data copying and norm
computations,
2. non-square processor configurations,
3. differing numbers of Newton iterations required
4. communications costs which do not fit our linear model,
5. matrix multiply costs which do not fit our constant cost/flop model, and
6. the higher cost of QR decomposition with pivoting.
We believe that uncounted operations account for the main error in our model for small
n. The actual number of Newton iterations varies, whereas our model assumes that
exactly 20 Newton iterations are needed. Non-square processor configurations are slightly
less efficient than square ones. Actual communication costs do not fit a linear model and
depend upon the details such as how many processors are sending data simultaneously and
to which processors they are sending. Actual matrix multiply costs depend upon the matrix
Table 8: Machine parameters (measured values, in microseconds)

  Model parameter   Description             Performance limited by    CM5     Delta
  τ_DGEMM           time per BLAS3 flop     peak flop rate            1/90    1/34
  τ_lat             message latency         comm. software            150     157
sizes involved, the leading dimensions and the actual starting locations of the matrices. The
cost of any individual call to the BLACS or to the BLAS may differ from the model by 20%
or more. However, these differences tend to average out over the entire execution.
Data layout, i.e. the number of processor rows and processor columns and the block
size, is critical to the performance of this algorithm. We assume an efficient data layout.
Specifically, that means a roughly square processor configuration and a fairly large block
size (say 16 to 32). The cost of redistributing the data on input to this routine would be
tiny, O((n²/p) τ_band), compared to the total cost of the algorithm.
The optimal data layout for LU decomposition is different from the optimal data layout
for computing U . The former prefers slightly more processor rows than columns while
the latter prefers slightly more processor columns than rows. In addition, LU decomposition
works best with a small block size, 6 on the Delta for example, whereas computing
U is best done with a large block size, 30 on the Delta for example. The difference
is significant enough that we believe a slight overall performance gain, maybe 5% to 10%,
could be achieved by redistributing the data between these two phases, even though this
redistribution would have to be done twice for each Newton step.
Table 3 shows that, except for n < 512, our model estimates the performance of Algorithm 1 based on CMSSL reasonably well. Note that this table is for a Newton-Schulz iteration
scheme which is slightly more efficient on the CM5 than the Newton based iteration. This
introduces another small error. The fact that our model matches the performance of the
CMSSL based routine, whose internals we have not examined, indicates to us that the implementation
of matrix inversion on the CM5 probably requires roughly the same operation
counts as the ScaLAPACK implementation.
The performance figures in Table 8 are all measured by an independent program, except
for the CM5 BLAS3 performance. The communication performance figures for the Delta in
Table
8 are from a report by Littlefield 1 [43]. The communication performance figures for the
CM5 are as measured by Whaley 2 [58]. The computation performance for the Delta is from
the Linpack benchmark[21] for a 1 processor Delta. There is no entry for a 1 processor CM5
in the Linpack benchmark, so τ_DGEMM in Table 8 above is chosen from our own experience.
Graphical User Interface to SDC
To take advantage of the graphical nature of the spectral decomposition process, a graphical
user interface (GUI) has been implemented for SDC. Written in C and based on X11R5's
¹ The BLACS use protocol 2, and the communication pattern most closely resembles the "shift" timings. ² τ_lat is from Table 8 in [58] and τ_band is from Table 5.
Figure 5: The X11 Interface (XI) and SDC (the user interacts with XI, which drives the parallel SDC code through an interface of 7 routines)
standard Xlib library, the Xt toolkit and MIT's Athena widget set, it has been nicknamed
XI for "X11 Interface". When XI is paired with code implementing SDC we call the union
XSDC.
The programmer's interface to XI consists of seven subroutines designed independently
of any specific SDC implementation. Thus XI can be attached to any SDC code. At present,
it is in use with the CM-5 CMF/CMSSL implementation and the Fortran 77 version of our
algorithm (both of which use real arithmetic only). Figure 5 shows the coupling of the SDC
code and the XI library of subroutines.
Basically, the SDC code calls an XI routine which handles all interaction with the user
and returns only when it has the next request for a parallel computation. The SDC code
processes this request on the parallel engine, and if necessary calls another XI routine to
inform the user of the computational results. If the user has chosen to split the spectrum,
then at this point the size of the highlighted region and the error bound on the computation
(along with some performance information) are reported, and the user is given the choice of
confirming or refusing the split. Appropriate action is taken depending on the choice. This
process is repeated until the user decides to terminate the program.
All data structures pertaining to the matrix decomposition process are managed by
XI. A binary tree records the size and status (solved/not solved) of each diagonal block
corresponding to a spectral region, the error bounds of each split, and other information.
Having the X11 interface manage the decomposition data frees the SDC programmer of
these responsibilities and encapsulates the decomposition process. The SDC programmer
obtains any useful information via the interface subroutines.
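The bookkeeping just described can be sketched as a small binary tree. The class and field names below are hypothetical, not those of the actual XI code; the sketch only mirrors the information the text says is recorded (block size, solved status, error bounds of each split).

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Block:
    """One node of the decomposition tree: a diagonal block
    corresponding to a spectral region (field names hypothetical)."""
    size: int                      # order of the diagonal block
    solved: bool = False           # eigenvalues computed yet?
    error_bound: float = 0.0       # max error bound along this path
    left: Optional["Block"] = None
    right: Optional["Block"] = None

    def split(self, k: int, err: float):
        """Record a split into a k-by-k block and the remainder,
        propagating the maximum error bound down the tree."""
        bound = max(self.error_bound, err)
        self.left = Block(size=k, error_bound=bound)
        self.right = Block(size=self.size - k, error_bound=bound)
        return self.left, self.right
```

Walking such a tree is enough to answer the queries the interface subroutines expose: block sizes, solved status and the largest error bound on the path from the root.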
Figure 6 pictures a sample session of xsdc on the CM-5 with a 500 × 500 matrix. The
large, central window (called the "spectrum window") represents the region of the complex
plane indicated by the axes. Its title, "xsdc :: Eigenvalues and Schur Vectors", indicates
that the task is to compute eigenvalues and Schur vectors for the matrix under analysis.
Figure 6: A sample xsdc session
The lines on the spectrum window (other than the axes) are the result of spectral divide
and conquer, while the shading indicates that the "bowtie" region of the complex plane is
currently selected for further analysis. The other windows (which can be raised/lowered at
the user's request) show the details of the process and will be described later.
The buttons at the top control I/O, the appearance of the spectrum window, and algorithmic
choices:
• File lets one save the matrix one is working on, start on a new matrix, or quit.
• Zoom lets one navigate around the complex plane by zooming in or out on part of the
spectrum window.
• Toggle turns on or off the features of the spectrum window (for example the axes,
Gershgorin disks, eigenvalues).
• Function lets one modify the algorithm, or display more or less detail about the
progress being made.
The buttons at the bottom are used in splitting the spectrum. For example, clicking
on Right halfplane and then clicking at any point on the spectrum window will split the
spectrum into two halfplanes at that point, with the right halfplane selected for further
division. This would signal the SDC code to decompose the matrix A into the block upper
triangular form

    [ A_11  A_12 ]
    [  0    A_22 ] ,

where the k eigenvalues of the k-by-k block A_11 are the eigenvalues of A in the right halfplane, and the
eigenvalues of A_22 are the eigenvalues of A in the left halfplane. The button Left Halfplane
works similarly, except that the left halfplane would then be selected for further processing
and the roles of A 11 and A 22 would be reversed. In the same manner, Inside Circle and
Outside Circle divide the complex plane at the boundary of a circle, while East-West
Crosslines and North-South Crosslines split the spectrum with lines at 45 degrees to
the real axis (described below).
The Split Information window in the lower right corner of Figure 6 keeps track of
the matrix splitting process. It reports the two splits performed to arrive at this current
(shaded) spectral region. The first, an East-West Crossline split at the point 1.5 on the
real axis, divided the entire complex plane into four sectors by drawing two lines at ±45
degrees through the point 1.5 on the real axis. SDC decomposed the starting matrix into

    [ A_11  A_12 ]
    [  0    A_22 ] ,

with A_11 of order 260, where the East and West sectors correspond to the A_11 block while the North and South
sectors correspond to the A_22 block.
Continuing in the East-West sectors as indicated by the previous split, that region is
divided into two sub-regions separated by the boundary of the circle of radius 4 centered
at the origin. The circle is drawn, making sure that its boundary only intersects the East
and West sectors, and the matrix is reduced to the 3 × 3 block upper triangular form

    [ A_11  A_12  A_13 ]
    [  0    A_22  A_23 ]
    [  0     0    A_33 ] ,

with diagonal blocks A_11, A_22 and A_33 of orders 106, 154 and 240, respectively.
The shading indicates that the "bowtie" region (corresponding to the interior of the circle,
and the A 11 block) is currently selected for further analysis.
In the upper right corner of Figure 6 the Matrix Information window displays the
status of the matrix decomposition process. Each of the three entries corresponds to a
spectral region and a square diagonal block of the 3 × 3 block upper triangular matrix,
and informs us of the block's size, whether its eigenvalues (eigenvectors, Schur vectors)
have been computed or not, and the maximum error bound encountered along this path
of the decomposition process. The highlighted entry corresponds to the shaded region and
reports that the A_11 block contains 106 eigenvalues, has been solved, and is in error by up
to 1.44 × 10⁻¹³. The eigenvalues, listed in the window overlapping the Matrix Information
window, can be plotted on the spectrum at the user's request.
The user may select any region of the complex plane (and hence any sub-matrix on the
diagonal) for further decomposition by clicking the pointer in the desired region. A click at
the point 10 on the imaginary axis for example, would unhighlight the current region and
shade the North and South sectors. Since this region corresponds to the A 33 block, the third
entry in the Matrix-Information window would be highlighted. The Split-Information
window would also be updated to detail the single split performed in arriving at this region
of the spectrum.
Once a block is small enough, the user may choose to solve it (via the Function button
at the top of the spectrum window). In this case the eigenvalues and Schur vectors for that
block would be computed using QR (as per the user's request) and the eigenvalues plotted
on the spectrum.
The current XI code supports real SDC only. It will be extended to handle the complex
case as implementations of complex SDC become available.
6 Conclusions and Future work
We have written codes that solve one of the hardest problems of numerical linear algebra:
spectral decomposition of nonsymmetric matrices. Our implementation uses only highly
efficient matrix computation kernels, which are available in the public domain and from
distributed memory parallel computer vendors. The performance attained is encouraging.
This approach merits consideration for other numerical algorithms.
The object oriented user interface XI developed in this paper provides a paradigm for
designing more user-friendly interfaces for the massively parallel computing environment
in the future.
We note that all the approaches discussed here can be extended to compute both the
right and left deflating subspaces of a regular matrix pencil A − λB. See [4, 7] for more
details.
As the spectrum is repeatedly partitioned in a divide-and-conquer fashion, there is
obviously task parallelism available because of the independent submatrices that arise, as
well as the data parallel-like matrix operations considered in this paper. Analysis in [13]
indicates that this task parallelism can contribute at most a small constant factor speedup,
since most of the work is at the root of the divide-and-conquer tree. Exploiting only the
data parallelism therefore keeps the implementation simple.
Our future work will include the implementation and performance evaluation of the
based algorithm, comparison with parallel QR, the extension of the
algorithms to the generalized spectral decomposition problem, and the integration of the
3-step approach (see section 2.3) into an object oriented user interface.
Acknowledgements
Bai and Demmel were supported in part by the ARPA grant DM28E04120 via a subcontract
from Argonne National Laboratory. Demmel and Petitet were supported in part by NSF
grant ASC-9005933, Demmel, Dongarra and Robinson were supported in part by ARPA
contract DAAL03-91-C-0047 administered by the Army Research Office. Ken Stanley was
supported by an NSF graduate student fellowship. Dongarra was also supported in part by
the Office of Scientific Computing, U.S. Department of Energy, under Contract DE-AC05-
84OR21400.
This work was performed in part using the Intel Touchstone Delta System operated
by the California Institute of Technology on behalf of the Concurrent Supercomputing
Consortium. Access to this facility was provided through the Center for Research on Parallel
Computing.
References
Generalized QR factorization and its applictions.
on parallelizable eigensolvers.
Design of a parallel nonsymmetric eigenroutine toolbox
on a block implementation of Hessenberg multishift QR it- eration
Design of a parallel nonsymmetric eigenroutine toolbox
Inverse free parallel spectral divide and conquer algorithms for nonsymmetric eigenproblems.
A computational method for eigenvalue and eigen-vectors of a matrix with real eigenvalues
A divide and conquer method for tridiagonalizing symmetric matrices with repeated eigenvalues.
Circular dichotomy of the spectrum of a matrix.
Numerical stability and instability in matrix sign function based algorithms.
Solving the algebraic Riccati equation with the matrix sign function.
on the benefit of mixed parallelism.
Rank revealing QR factorizations.
PB-BLAS: A set of Parallel Block Basic Linear Algebra Subprograms.
PUMMA: Parallel universal matrix multiplication algorithms on distributed memory concurrent computers.
A note on the homotopy method for linear algebraic eigenvalue problems.
A parallel nonlinear least-squares solver: Theoretical analysis and numerical results
Trading off parallelism and numerical stability.
Parallel numerical linear algebra.
Performance of various computers using standard linear equations soft- ware
A set of Level 3 Basic Linear Algebra Subprograms.
An Extended Set of FORTRAN Basic Linear Algebra Subroutines.
A parallel algorithm for the non-symmetric eigenvalue problem
A Users' Guide to the BLACS.
The design of linear algebra libraries for high performance computers.
The multishift QR algorithm: is it worth the trouble?
on the Schur decomposition of a matrix for parallel computation.
Finding eigenvalues and eigenvectors of unsymmetric matrices using a distributed memory multiprocessor.
Problem of the dichotomy of the spectrum of a matrix.
An efficient algorithm for computing a rank-revealing QR de- composition
Parallelizing the QR algorithm for the unsymmetric algebraic eigenvalue problem: myths and reality.
Computing the polar decomposition - with applications
The rank revealing QR and SVD.
The sign matrix and the separation of matrix eigenvalues.
A parallel implementation of the invariant subspace decomposition algorithm for dense symmetric matrices.
Rational iteration methods for the matrix sign function.
Basic Linear Algebra Subprograms for Fortran usage.
Solving eigenvalue problems of nonsymmetric matrices with real homotopies.
The Touchstone
A parallel algorithm for computing the eigenvalues of an unsymmetric matrix on an SIMD mesh of processors.
Characterizing and tuning communication performance on the Touchstone DELTA and iPSC/860.
Parallel algorithm for solving some spectral problems of linear algebra.
Some aspects of generalized QR factorization.
Linear model reduction and solution of the algebraic Riccati equation.
on Jacobi and Jacobi-like algorithms for a parallel computer
A parallel algorithm for the eigenvalues and eigenvectors of a general complex matrix.
A Jacobi-like algorithm for computing the Schur decomposition of a non-Hermitian matrix
A parallel implementation of the QR algorithm.
Updating a rank-revealing ULV decomposition
CM Fortran Reference Manual
The Connection Machine CM-5 Technical Summary
CMSSL for CM Fortran: CM-5 Edition
Implementing the QR Algorithm on an Array of Processors.
Efficient parallel implementation of the nonsymmetric QR algorithm.
Shifting strategies for the parallel QR algorithm.
Basic linear algebra communication subroutines: Analysis and implementation across multiple parallel architectures.
The Influence of Interface Conditions on Convergence of Krylov-Schwarz Domain Decomposition for the Advection-Diffusion Equation

Abstract. Several variants of Schwarz domain decomposition, which differ in the choice of interface conditions, are studied in a finite volume context. Krylov subspace acceleration, GMRES in this paper, is used to accelerate convergence. Using a detailed investigation of the systems involved, we can minimize the memory requirements of GMRES acceleration. It is shown how Krylov subspace acceleration can be easily built on top of an already implemented Schwarz domain decomposition iteration, which makes Krylov-Schwarz algorithms easy to use in practice. The convergence rate is investigated both theoretically and experimentally. It is observed that the Krylov subspace accelerated algorithm is quite insensitive to the type of interface conditions employed.

1 Introduction
We consider domain decomposition for the two-dimensional advection-diffusion equation with
application to a boundary conforming finite volume incompressible Navier-Stokes solver in mind,
see [27, 6, 20]. Therefore, our interests are more practical than theoretical. An advantage of the
boundary conforming approach is that the structures of the systems of equations that arise are
known beforehand. This enables us to develop efficient iterative solvers that can be vectorized
with relative ease. A disadvantage is that the boundary conforming approach is not suitable for
domains not topologically rectangular. Domain decomposition is used to overcome this problem.
The name Krylov-Schwarz refers to methods in which a Schwarz type domain decomposition
iteration is accelerated by a Krylov subspace method (such as CG or GMRES), see the preface
of [17]. Equivalently, Krylov-Schwarz means that Schwarz domain decomposition is used as preconditioner
for a Krylov subspace method. In this paper we use the GMRES method because of
non-symmetry of the discretized advection-diffusion equation.
* Applied Analysis Group, Delft University of Technology, Mekelweg 4, 2628 CD Delft, The Netherlands
(e.brakkee@math.tudelft.nl)
† Applied Analysis Group, Delft University of Technology, Mekelweg 4, 2628 CD Delft, The Netherlands
(p.wilders@math.tudelft.nl)
Schwarz domain decomposition considered here uses a minimal overlap and no coarse grid
correction. Algebraically, this algorithm is formulated as a block iteration applied to a (possibly
augmented) matrix, see [14, page 372] and [25]. Theory [28] and experiments [9] show that both
a constant overlap in physical space and a coarse grid correction are needed to obtain constant
iteration count when the mesh is refined and the number of subdomains increases. Examples
of constant overlap in physical space can be found in [15, 22, 30]. However, although larger
overlap gives better convergence rates, minimal overlap typically leads to lower computing times,
see [13, 11, 9], even for large and ill-conditioned problems. The reason is that with minimal
overlap, there is less duplication of work in overlap regions. Methods with small overlap are also
much easier to implement for practical complicated problems.
A coarse grid correction [18, 8, 1, 2] can be quite effective for improving convergence of domain
decomposition. However, as noted in [23, pp. 188], for small numbers of subdomains the
additional cost in forming and solving the coarse grid problem outweighs the reduction of the
number of iterations it gives. Moreover, in large codes used for engineering computations coarse
grid correction is very difficult to implement.
Similar to [12] and [25, 24], we study the influence of interface conditions on convergence rate.
A unified treatment of nonoverlapping Neumann-Dirichlet and Schwarz methods with minimal
overlap is not straightforward. An example at the analytic level can be found in [23, page 207].
Examples at the algebraic level are in [25, 24], where complicated augmentations of the discretization
matrix are needed. This paper unifies Neumann-Dirichlet and Schwarz methods in a simpler
way: by premultiplying the discretization matrix with a so-called influence matrix, which has a
very simple structure.
Our acceleration method differs from [12] in that we do not use relaxation but GMRES ac-
celeration. In [25, 24] good convergence was obtained with the unaccelerated Schwarz algorithm
by using optimized interface conditions. Instead, we do not use optimization, but investigate the
effect of some simple strategies for choosing interface conditions on the convergence rate of the
Krylov-Schwarz domain decomposition algorithm. We find that the accelerated method is quite
insensitive to the types of interface conditions used.
It is well-known that GMRES requires much storage if the vector length is large. In standard
Krylov-Schwarz algorithms, the full vector length is used. We will show, that, if subdomain
problems are solved accurately, one can reduce the vector length in GMRES. In fact, GMRES
then solves a reduced system, which consists of equations with only unknowns on or near the
interfaces.
2 Discretization
We consider the 2-D advection-diffusion equation written in general coordinates:

    ∑_{i=1}^{2} [ ∂(v_i u)/∂x_i − ∂/∂x_i ( a_i ∂u/∂x_i ) ] = f,    (1)

with advection coefficients v_i and diffusion coefficients a_i.
Equation (1) is obtained after a boundary-fitted coordinate transformation to a rectangular
domain Ω. We are interested in (1) because
it is a realistic model for the momentum equations occurring in computational fluid dynamics for
incompressible flows.
We discretize (1) using either cell-centered or vertex-centered finite volumes with a central
approximation of the advection terms on a uniform grid with mesh sizes h_1 and h_2, which leads
to a 9-point discretization molecule. The resulting system of discretized equations is denoted by

    A u = f.    (2)
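As a one-dimensional illustration of such a discretization (the scheme above is a 9-point molecule in 2-D; this sketch is only the constant-coefficient 1-D analogue, with hypothetical argument names), central differencing of advection-diffusion yields rows controlled by the mesh Péclet number P = v·h/a:

```python
import numpy as np

def advection_diffusion_1d(n, v, a, h):
    """Central discretization of v*u' - (a*u')' on a uniform grid
    with n interior points.  The mesh Peclet number P = v*h/a sets
    the size of the advective contribution; note that the downwind
    off-diagonal entry changes sign once |P| exceeds 2."""
    P = v * h / a
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = 2.0
        if i > 0:
            A[i, i - 1] = -1.0 - P / 2.0
        if i < n - 1:
            A[i, i + 1] = -1.0 + P / 2.0
    return (a / h**2) * A
```

With v = 0 the familiar diffusion stencil (-1, 2, -1) is recovered; increasing v makes the matrix progressively more non-symmetric, which is the reason GMRES is used later.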
3 Domain decomposition
In practice, complicated flow domains are used and one needs complete freedom in decomposing
the domain into subdomains. However, for the present model study we investigate only a rectangular
domain Ω and decompose it into a rectangular array of (nonoverlapping)
subdomains, see Figure 1. The difference between cell-centered and vertex-centered finite volumes
is that in the latter case there are nodal points on the interfaces between subdomains. Following
[25], the unknowns at these nodal points are copied into each subdomain, see Figure 2.b. This
means that most nodal points on the interface are doubled into a left and right unknown. At cross
points, four copies are present. The respective equation is repeated for each copied unknown,
adding zeroes in the discretization matrix at positions associated with copies from other domains.
This is in contrast to [25] where the coefficients of the discretization are also adapted. The latter
mechanism leads to subdomain problems with the desired types of boundary conditions on the
interfaces. As opposed to [25], we use premultiplication with the influence matrix, see further on.
For the analysis and description of the algorithm we restrict ourselves to two non-overlapping
subdomains Ω_1 and Ω_2. Some details on the multi-domain case
will be given.
Figure 1: Decomposition of a domain Ω into 3 × 3 subdomains of 6 × 6 cells each. The global grid is uniform.
We find the following way to describe domain decomposition convenient. Replace (2) by

    Ā u = f̄,  with  Ā = M A  and  f̄ = M f.    (3)

The matrix M in (3) is called the influence matrix and is used to obtain various types of coupling
conditions on the interfaces. Section 3.1 describes M in detail. The use of the matrix M leads to
a 12-point molecule for the first layer of points near the interface, see Figure 2.
Define the disjoint index sets I_j such that i ∈ I_j if u_i is an unknown belonging to
subdomain Ω_j. This definition of I_j is also valid
for the vertex-centered case because the unknowns at the interface are doubled in that case. Both
Ā and f̄ are partitioned into blocks according to these index sets I_1 and I_2. Eq. (3) then becomes
    [ Ā_11  Ā_12 ] [ u_1 ]   [ f̄_1 ]
    [ Ā_21  Ā_22 ] [ u_2 ] = [ f̄_2 ] .    (4)

Schwarz domain decomposition is a block Gauss-Seidel or Jacobi iteration applied to (4):

    u^{k+1} = u^k + N⁻¹ ( f̄ − Ā u^k ),    (5)

where taking

    N = [ Ā_11   0   ]        or        N = [ Ā_11   0   ]
        [ Ā_21  Ā_22 ]                      [  0    Ā_22 ]    (6)

gives block Gauss-Seidel and Jacobi, respectively. The algorithm (5) is a
Schwarz domain decomposition algorithm with a minimal (O(h)) overlap. Following [14, p. 370],
we classify the algorithm as non-overlapping. The Gauss-Seidel variant corresponds to multiplicative
Schwarz and the Jacobi variant, which is suitable for parallelization, corresponds to additive
Schwarz. Section 3.3 describes Krylov subspace acceleration of this Schwarz algorithm.
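For two subdomains, one sweep of iteration (5) can be sketched as follows (a dense NumPy sketch of our own, assuming exact subdomain solves; `Abar` stands for the premultiplied matrix and the flag selects the Gauss-Seidel or Jacobi form of N):

```python
import numpy as np

def schwarz_step(Abar, fbar, u, idx1, idx2, multiplicative=True):
    """One step of u <- u + N^{-1} (fbar - Abar u), eq. (5)."""
    r = fbar - Abar @ u
    du = np.zeros_like(u)
    # Subdomain 1 solve: A11 du1 = r1.
    du[idx1] = np.linalg.solve(Abar[np.ix_(idx1, idx1)], r[idx1])
    if multiplicative:
        # Gauss-Seidel (multiplicative): recompute the residual with
        # the updated subdomain-1 values before solving on subdomain 2.
        r = fbar - Abar @ (u + du)
    du[idx2] = np.linalg.solve(Abar[np.ix_(idx2, idx2)], r[idx2])
    return u + du
```

In the Jacobi (additive) form both subdomain solves use the same residual and can run in parallel; the Gauss-Seidel (multiplicative) form refreshes the residual in between.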
3.1 The influence matrix
The purpose of the influence matrix M , introduced in (3), is to give a unified framework to
treat both classical Schwarz and Neumann-Dirichlet algorithms. The different types of interface
conditions are obtained by varying parameters in M . This study only uses a one-parameter
coupling, but extensions to more parameters are also possible.
Figure 2: Interface variables: cell-centered (a) and vertex-centered (b)
Let I denote the index set of all variables and let J be the subset of I containing the indices of
two rows of variables near the interface, see Figure 2. These variables are called interface variables
and play an important role in the next sections. Let K denote the subset of J containing the
indices of variables in the first layer on either side of the interface \Gamma. The influence matrix M
takes linear combinations of discretization molecules associated with unknowns in K, and thus
influences the discretization at points in K. The influence matrix is defined as follows:
1. M_ij = δ_ij for i ∉ K;
2. for i ∈ K, M_ij may be nonzero only if j = i or if j ∈ K and i and j do not correspond to the same subdomain.
Furthermore, we restrict M by allowing nonzero M ij only at points i and j directly opposite each
other with respect to the interface. This means that we omit any tangential dependencies in M
and consider only a single normal dependency. This leads to a 1-parameter coupling.
For the situation of two subdomains in Figure 2, the matrix M has at most two non-zero
entries at each row i ∈ K: one at M_ii = 1 and the other at M_ij = λ_ij. The subscripts show
that λ may vary along the interface. The parameter λ_ij depends on the coupling strategy used.
Of course, we also have M_ji = λ_ji, so that invertibility of the influence matrix M is ensured by the condition

    λ_ij λ_ji ≠ 1.    (7)
In the general multi-domain case, cross-points can occur. At cross-points (corner points), the
influence matrix M has at most three non-zero entries: M_ii = 1, M_{ij_1} = λ_{ij_1}
at one interface and M_{ij_2} = λ_{ij_2}
at the other.
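The structure of M for two subdomains can be sketched directly. The function below is our own illustration (the argument `pairs` lists opposite first-layer points i, j together with their coupling parameters λ_ij and λ_ji); each coupled 2-by-2 block [[1, λ_ij], [λ_ji, 1]] is invertible precisely when λ_ij·λ_ji ≠ 1, which is the invertibility condition mentioned above.

```python
import numpy as np

def influence_matrix(n, pairs):
    """Assemble the influence matrix M: an identity matrix plus one
    lambda entry per first-layer interface point, coupling it to the
    point directly opposite (no tangential couplings).

    pairs: iterable of (i, j, lam_ij, lam_ji).
    """
    M = np.eye(n)
    for i, j, lam_ij, lam_ji in pairs:
        if lam_ij * lam_ji == 1.0:
            raise ValueError("coupled 2x2 block is singular")
        M[i, j] = lam_ij
        M[j, i] = lam_ji
    return M
```

With all λ set to zero, M reduces to the identity and the basic Schwarz coupling is recovered.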
Some interesting choices for λ are the Neumann coupling λ^N_ij, the Dirichlet coupling λ^D_ij
and the Schwarz coupling λ^S_ij = 0,    (9)
all expressed in terms of the normal mesh Péclet number P_i defined by

    P_i = v_i h_i / a_i,    (10)

with x_i the coordinate direction corresponding to the normal direction. For the present
model study we have, for vertical interfaces, P_1 = ± v_1 h_1 / a_1,
with the sign depending on which subdomain i corresponds to. This means that the normal
mesh Péclet numbers as seen from the different blocks always have opposite signs.
In [7], the choices listed in (9) are worked out in detail for the constant coefficients case.
In that case, the discretized subdomain problems are identical to the ones obtained
by imposing Neumann (λ^N_ij) or Dirichlet (λ^D_ij) conditions at the interface and by discretizing using
the finite volume method.
When the diffusion vanishes, the use of a Dirichlet condition is required to obtain well-posed subdomain
problems. However, the corresponding λ^D_ij is singular for P_i = −2. Therefore, we have omitted
a further study of this choice and use λ^S_ij instead. In the vertex-centered case, we can avoid this
problem by a different choice of λ_ij,
which leads to a relation between the first layer of left and right unknowns only and is equivalent
to a direct Dirichlet update. In this way, we get a vertex-centered finite volume version of the
Neumann-Dirichlet algorithm studied in [19].
In general one wants to vary the type of interface condition depending on the local flow parameters.
We choose λ_ij to depend on the local mesh Péclet number through a threshold
parameter p that can vary along the interface (13). The basic Schwarz iteration is obtained
with

    λ_ij = λ^S_ij = 0.    (14)

The choice (14) is referred to as the Schwarz-Schwarz (S-S) algorithm. Neumann-Schwarz (N-S)
is obtained by switching discontinuously between λ^N_ij and λ^S_ij, depending on the local mesh Péclet number relative to p (15).
The Robin-Schwarz (R-S) method uses a smooth
transition between λ^N_ij and λ^S_ij (16).
This paper takes p = 2. Note that the three strategies for domain decomposition (14), (15)
and (16) all reduce to Schwarz iteration when applied to the Poisson equation.
3.2 The interface equations
The Krylov-Schwarz algorithm corresponds to a Krylov subspace method for solving the following
preconditioned system

    N^{-1} A u = N^{-1} f,   (17)

where N is the block lower triangular or block diagonal matrix from (6). We will show that we can reduce the system (17) to a smaller system involving only the interface unknowns; see Figure 2. Let v denote the vector of interface variables and let w denote the remaining variables. Define the following trivial injection operators: P, injecting w into u (18), and Q, injecting v into u (19).
The following theorem provides useful information about the structure of the preconditioned matrix.

Theorem 1. If the matrix N - A has nonzero entries only at positions (i, j) in I × J, where J is the index set of the interface unknowns, (20) then after ordering u such that u = (v, w), the system (17) takes a block form (21) in which the block acting on the non-interface unknowns w is the identity.

Proof: Write the system as N^{-1} A u = N^{-1} f. Premultiplying A u = f by N^{-1} and applying the restrictions associated with P and Q, the sparsity condition (20) implies that all columns of N^{-1} A outside J coincide with the corresponding columns of the identity; ordering the components of u as prescribed gives (21). 2

Since the influence matrix has nonzero entries only for the first layer of unknowns near the interface, and the discretization is a 9-point molecule, it can be shown that the matrix N - A can only have nonzero elements at positions (i, j) in I × J; the proof of this is omitted. Therefore, the matrices N and A of this paper satisfy the conditions of Theorem 1.
The block form of (21) shows that v can, in principle, be solved for independently of w, by solving the system of interface equations (24). A numerical example of (24) can be found in [16] for a small one-dimensional Poisson problem. Note that we assume accurate subdomain solution, that is, exact solves on each subdomain. With inaccurate subdomain solution, we can get nonzero entries of N - A outside I × J, which violates the assumption of Theorem 1.
From (21) we see that the matrix N^{-1} A in (17) has an eigenvalue λ = 1 with multiplicity equal to the number of non-interface variables, and that all the other eigenvalues are shared with the interface equations (24). This property means that the interface equations (24) have the same spectrum as the preconditioned system (17), apart from λ = 1. Hence, (24) does not need to be preconditioned further.
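This spectral property is easy to check numerically. The following sketch (illustrative code, not from the paper) builds a random N whose difference from A is nonzero only in the interface columns J, and verifies that the spectrum of N^{-1}A consists of the eigenvalues of the small interface block plus the eigenvalue 1:

```python
import numpy as np

rng = np.random.default_rng(0)
n, J = 8, [5, 6, 7]                  # 8 unknowns; the last 3 act as "interface" unknowns
notJ = [i for i in range(n) if i not in J]

A = np.eye(n) * 4 + 0.1 * rng.standard_normal((n, n))   # well-conditioned A
E = np.zeros((n, n))
E[:, J] = 0.3 * rng.standard_normal((n, len(J)))        # N - A nonzero only in interface columns
N = A + E

M = np.linalg.solve(N, A)            # preconditioned matrix N^{-1} A
# Columns outside J equal identity columns, so each e_j (j not in J) is an
# eigenvector for eigenvalue 1.
assert np.allclose(M[:, notJ], np.eye(n)[:, notJ])

# The remaining eigenvalues coincide with those of the interface block M[J, J].
eig_full = np.sort_complex(np.linalg.eigvals(M))
eig_small = np.sort_complex(np.concatenate(
    [np.linalg.eigvals(M[np.ix_(J, J)]), np.ones(n - len(J))]))
assert np.allclose(eig_full, eig_small)
```

The assertion about the identity columns is exactly the mechanism behind Theorem 1: N^{-1}A = I - N^{-1}(N - A), and the second term has nonzero columns only in J.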
The approach is strongly related to the Schur complement method, typical of finite element
methods, in which subdomain problems are solved exactly and domain decomposition also amounts
to solving an interface problem. In [4, 10], it is shown that the multiplicative overlapping Schwarz
method is equivalent to a Schur complement method with appropriate block preconditioner.
In [29], a different proof of this is given for the nonoverlapping method of the present paper.
The interface problem is somewhat different from that arising in Schur complement methods because
in our terminology, the interface unknowns do not (all) reside on the interface. Furthermore,
our method is more general because it can be applied to finite volume/difference methods as well.
3.3 Krylov acceleration of the Schwarz method
Since domain decomposition methods in general tend to converge slowly if at all, an acceleration
technique is needed. We use the GMRES [21] Krylov subspace method to solve the
interface equations (24). To solve (24), all that is required is a method to compute the interface matrix-vector product Q^T N^{-1} A Q v. As in methods based on the Schur complement, see for instance [3], it is not necessary to form the matrix of the interface equations explicitly.
It turns out that a single iteration of the unaccelerated algorithm (5) can be used to obtain the interface matrix-vector product for Q^T N^{-1} A Q. This enables a step-wise implementation of
AQ. This enables a step-wise implementation of
accelerated domain decomposition, which is of major importance for complicated CFD codes.
Also, the required vector length in GMRES is quite small because only a small system of interface
equations must be solved. This makes the approach practical for large problems.
Given the implementation of unaccelerated domain decomposition (5), we can compute N^{-1} A u and N^{-1} f whenever u is given. Because the matrix N - A has nonzero entries only in the interface positions, applying one Schwarz sweep to u = Q v yields Q^T N^{-1} A Q v. The injection Q and restriction Q^T are easily implemented as subroutines, so that the computation of the interface product for a given v is easy. The problem to be solved is rewritten as the interface system (24). Given the initial guess v_0, GMRES acceleration proceeds as follows:
1. Compute the right-hand side Q^T N^{-1} f.
2. Solve the interface problem using GMRES with initial guess v_0; each matrix-vector product Q^T N^{-1} A Q v is computed from a single Schwarz iteration.
3. The final inner subdomain solutions collected in the vector u are computed by doing a last domain decomposition iteration with the computed interface solution v.
4 Theoretical background
Some theoretical analysis is possible under simplified conditions. In equation (1) we assume a constant velocity field with a_1, a_2 >= 0. The boundary conditions are periodic in the y-direction. On the left boundary (inflow) we prescribe a Dirichlet condition and on the right boundary (outflow) a homogeneous Neumann condition. We take a uniform mesh width h and we split the domain in two parts with a vertical interface in the middle.
To obtain convergence estimates for GMRES accelerated domain decomposition, we use Theorem 5 of [21], from which it follows that for GMRES without restart the bound (26) holds for some K > 0, where ν is the number of eigenvalues of B with non-positive real parts and the other eigenvalues are enclosed in a circle centered at C > 0 with radius R < C. In practice, formula (26) may give only a crude upper bound on convergence, especially if the spectrum is not evenly distributed but consists of a few clusters of eigenvalues, see [26]. However, it turns out that for our problems, (26) is a good estimate.
Using Fourier analysis in the y-direction we can obtain the eigenvalues of the iteration matrix E of the multiplicative (block Gauss-Seidel) algorithm. Some straightforward but tedious calculations give the eigenvalues of E in closed, but difficult to analyze, form; see [7, Appendix A]. For brevity we omit the details. The eigenvalues of the interface matrix Q^T N^{-1} A Q coincide with those of N^{-1} A (apart from the multiple eigenvalue λ = 1), leading to an estimate of ρ in formula (26).
To compare the S-S, N-S, R-S and N-D (only vertex-centered) algorithms, we compute the theoretical convergence rates for different ranges of flow magnitudes and flow angles. The flow magnitude is given by the dimensionless mesh Péclet number p_max = |a| h / k, with |a| the magnitude of the velocity and k the diffusion coefficient. The flow angle is given by α, so that the normal and tangential mesh Péclet numbers are p_max cos α and p_max sin α, (27) with α = 0 corresponding to flow normal to the interface and α = 90° to flow tangential to the interface.
The average theoretical convergence rates¹ over several Péclet ranges and all flow angles are listed in Tables 1 and 2. The cell-centered results from Table 1 show that
[Table 1: Average theoretical convergence rates and corresponding numbers of iterations (in brackets) for the multiplicative algorithm to reach a relative accuracy of 10^-4, for different mesh Péclet ranges averaged over all flow angles; cell-centered discretization. Columns: Péclet range, S-S, N-S, R-S.]
for small flow magnitudes, the Neumann-Schwarz algorithm is approximately twice as fast as the Schwarz-Schwarz and Robin-Schwarz algorithms. The advantage of the Neumann-Schwarz algorithm is much smaller for larger flow magnitudes. The results indicate that Neumann-Schwarz is the best choice for the Poisson equation. However, because of the symmetry of that equation, it is not certain at which sides of the interfaces Neumann conditions must be imposed. This problem becomes important for small flow magnitudes when p_{ij} changes sign along the interface.
[Table 2: Average theoretical convergence rates and corresponding numbers of iterations (in brackets) for the multiplicative algorithm to reach a relative accuracy of 10^-4, for different mesh Péclet ranges averaged over all flow angles; vertex-centered discretization. Columns: Péclet range, S-S, N-S, R-S, N-D.]
The vertex-centered results from Table 2 show much smaller differences between the methods than the cell-centered results from Table 1. With vertex-centered discretization, the differences are small even for small flow magnitudes. The Neumann-Dirichlet method from [19] has similar convergence to the Neumann-Schwarz method at low flow magnitudes. However, at larger flow magnitudes, the Neumann-Dirichlet method has a worse convergence rate.
The next section compares the above theoretical results with experiments.
5 Numerical experiments
This section presents some numerical experiments and compares the influence of interface conditions
(Schwarz-Schwarz, Neumann-Schwarz, Robin-Schwarz, Neumann-Dirichlet) on convergence
of Krylov subspace accelerated Schwarz domain decomposition.
1 The N-S algorithm was modified so that also for flow tangential to the interface, Neumann and Schwarz
conditions were used instead of the S-S algorithm
We use a relative stopping criterion on the residual, with r_m the residual after m iterations. The experimental convergence rate ρ(m) is computed from ρ(m) = (‖r_m‖ / ‖r_0‖)^{1/m}.
Some care must be taken when interpreting convergence rates. Large differences in convergence rates do not necessarily indicate large differences in the number of iterations. For instance, to reach a relative accuracy of 10^-4 with ρ = 0.1 we need 4 iterations, while with ρ = 0.2 we need only 2 iterations more. On the other hand, with ρ close to 1, small differences are very important: the difference between ρ = 0.98 and ρ = 0.99 is a factor of two in iteration count.
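The iteration counts quoted here follow from m = ceil(log tol / log ρ); a quick check:

```python
import math

def iters(rho, tol=1e-4):
    # Smallest m with rho**m <= tol; the small offset guards against
    # floating-point noise pushing an exact ratio just above an integer.
    return math.ceil(math.log(tol) / math.log(rho) - 1e-12)

assert iters(0.10) == 4            # rho = 0.1: four iterations for 1e-4
assert iters(0.20) == 6            # rho = 0.2: only two iterations more
ratio = iters(0.99) / iters(0.98)  # near 1, a 0.01 change roughly doubles the work
assert 1.9 < ratio < 2.1
```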
All experiments take the initial guess v_0 = 0. We use the Sparse² solver for the subdomain problems. All results in this section are for the multiplicative method, which, in our experience, converges about twice as fast as the additive method. Restarted GMRES(20) was used for Krylov subspace acceleration of Schwarz domain decomposition.
5.1 Convergence as a function of flow magnitude and angle
Results are given for a square domain Ω divided into two blocks by a vertical interface in the middle. A uniform mesh of 40 × 40 cells on Ω is used. In the numerical results, a Dirichlet condition is enforced on the left and lower boundaries of Ω and a Neumann condition is enforced on the right and upper boundaries. The right-hand side is f = 1. All coefficients in (1) are assumed to be constant. As in Section 4, we compute the (experimental) convergence rate as a function of flow magnitude p_max and angle α. Similar to Section 4, we have modified the Neumann-Schwarz method so that Neumann and Schwarz conditions are always used for blocks 1 and 2, respectively, even for flow tangential to the interface. This is different from the description of Neumann-Schwarz (15) but enables a comparison of the effect of different coupling conditions on convergence.
To demonstrate the quality of the theoretical prediction of convergence rates, Figure 3 shows, as an example, a comparison of experimental and theoretical convergence rates for the multiplicative Neumann-Schwarz algorithm. The theoretical convergence rates agree well with the experimental ones. Note that the convergence rate is zero along the curve p_max cos α = 2. This is not a property of the domain decomposition algorithm but of the discretization: since a central discretization is used for the advective terms, the discretization reduces to a first order upwind discretization for mesh Péclet numbers equal to 2.
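The mechanism can be seen from the 1-D central stencil for -k u'' + a u' (a simplified illustration of the two-dimensional 9-point case): at mesh Péclet number P = a h / k = 2, the downstream coefficient vanishes and a one-sided, upwind-like stencil remains.

```python
def central_coeffs(a, k, h):
    """Coefficients of u_{i-1}, u_i, u_{i+1} for -k u'' + a u' (central differences)."""
    return (-k / h**2 - a / (2 * h),    # upstream
            2 * k / h**2,               # center
            -k / h**2 + a / (2 * h))    # downstream

k, h = 1.0, 0.125                       # h chosen so all quantities are exact in binary
a = 2 * k / h                           # mesh Peclet number P = a*h/k = 2
up, ce, down = central_coeffs(a, k, h)
assert down == 0.0                      # downstream coupling disappears at P = 2
assert (up, ce) == (-2 * k / h**2, 2 * k / h**2)   # first-order upwind stencil remains
```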
To compare the S-S, N-S, R-S and N-D methods again, we use the same averaging procedure
as described in Section 4. The average convergence rate is computed over some ranges of flow
magnitude and over all flow angles. Tables 3 and 4 show the results. The results are similar to
the theoretical results of Section 4. Again, the Neumann-Schwarz method performs best at low flow magnitudes in comparison with the Schwarz-Schwarz and Robin-Schwarz methods, for both cell-centered and vertex-centered discretizations. For larger flow magnitudes, the differences are almost negligible. Note that the differences between the methods are even smaller than in the theoretical analysis. This is because the weak periodic boundary conditions in the theoretical analysis were replaced by
[Figure 3: Experimental and theoretical convergence factors R/C for the GMRES accelerated Neumann-Schwarz algorithm, as a function of flow angle.]
[Table 3: Average experimental convergence rates and corresponding numbers of iterations (in brackets) to reach a relative accuracy of 10^-4, for different mesh Péclet ranges averaged over all flow angles; cell-centered discretization. Columns: Péclet range, S-S, N-S, R-S.]
stronger boundary conditions in the experiments. Similar to the theoretical results of Section 4, the Neumann-Dirichlet method of [19] (vertex-centered) shows an almost identical convergence rate to the Neumann-Schwarz method for low flow magnitudes and a slightly worse convergence behavior for larger flow magnitudes.
5.2 Recirculating flow
As an example, we investigate a recirculating flow problem, for which domain decomposition in general converges more slowly than for simple uniform flow problems. The problem is defined by constant diffusion coefficients and a recirculating velocity field (a_1(x, y), a_2(x, y)) on Ω, in which a parameter controls the angle of the flow across the interfaces. Two decompositions of the domain are considered: the first into only two blocks with a vertical interface, and the second into 2 × 2 blocks. A uniform grid on Ω is used, combined with a cell-centered discretization.

2 Sparse is a public domain direct solver available from netlib@ornl.gov

[Table 4: Average experimental convergence rates and corresponding numbers of iterations (in brackets) to reach a relative accuracy of 10^-4, for different mesh Péclet ranges averaged over all flow angles; vertex-centered discretization. Columns: Péclet range, S-S, N-S, R-S, N-D.]
Table 5 lists the results for the accelerated algorithm. Good convergence factors are obtained, and the algorithm is quite insensitive to the direction of the flow on the interface and to the type of coupling condition. The Robin-Schwarz method provides only a small improvement with respect to Neumann-Schwarz, but this difference is so small that it does not always show up in the iteration count.
[Table 5: Experimental convergence rate ρ(m) (iteration counts in brackets) for the recirculation problem for increasing obliqueness, cell-centered discretization; results for 2 blocks and 2 × 2 blocks.]
The differences in number of iterations (work) are very small for the three coupling strategies. The relative differences in the number of iterations are even smaller when the number of subdomains is increased from 2 to 4.
5.3 Further remarks on robustness
Further numerical experiments in [7] have shown that the GMRES accelerated algorithm is quite
insensitive to the coupling strategy used. The experiments in [7] also investigate the influence on
convergence rate of the types of external boundary conditions and of adding cross diffusion terms
k 12 . Furthermore, some experiments investigate the effect of variations in the ordering of blocks
and refinement within the subdomains. All these experiments show that the accelerated algorithm
is rather insensitive to these factors. In particular with respect to refinement this is promising for
applications to complicated flow problems.
6 Conclusions
We have investigated three domain decomposition methods, namely: the Schwarz-Schwarz, Neumann-
Schwarz and Robin-Schwarz algorithms. The algorithms were accelerated by a GMRES Krylov
subspace method. Assuming accurate solution of subdomain problems, the vector length in GMRES was reduced significantly by introducing the interface equations. This makes
the overhead of Krylov subspace acceleration negligible and enables the solution of large complex
CFD problems.
The GMRES Krylov subspace acceleration procedure can be implemented easily on top of
an already implemented unaccelerated domain decomposition algorithm, by repeatedly calling the
subroutine that performs a single Schwarz domain decomposition iteration with given initial guess.
This is of major importance for the implementation in complex CFD codes.
The theoretical and experimental convergence rates agree reasonably well. The experiments
show that for low flow magnitudes, the Neumann-Schwarz methods can provide a reduction in the
number of iterations of at most a factor 2. For large flow magnitudes, the differences between the
methods are less significant. The Robin-Schwarz and Schwarz-Schwarz methods are comparable in
convergence rates for both low and high flow magnitudes. The Neumann-Dirichlet method of [19],
has convergence rate similar to the Neumann-Schwarz method except for large flow magnitudes
for which it requires more iterations.
The differences in convergence rates found experimentally are typically smaller than predicted. This effect is even stronger when non-uniform flow fields are used. For the
recirculating flow problem, the Robin-Schwarz (R-S) method has a slightly better convergence rate than the Neumann-Schwarz (N-S) and Schwarz-Schwarz (S-S) algorithms. The differences
in the number of iterations (amount of work) between the S-S, N-S and R-S methods are very small and not significant. The differences between the S-S, N-S and R-S methods become even smaller when the number of subdomains is increased from 2 to 4.
Further numerical experiments in [7] show these conclusions to be true for a larger number of test
problems: the method is reasonably robust with respect to coupling conditions, grid refinement,
velocity field and external boundary conditions.
Our experiments show that varying the type of interface conditions, depending on flow magnitude and angle, in general gives only a moderate reduction in iteration count. In the experiments, a reduction factor of at most 2 was observed. Such limited reduction factors in general have a small
effect on total computing time. This is because, especially with complex CFD applications, solving
the system of equations may take only a small portion of the total computing time.
Possibly, a more detailed study of interface conditions will lead to more significant reductions
in iteration counts. In particular, this is interesting for limiting cases, such as low-speed (Stokes)
or high-speed (Euler) flows. For example, in [25, 24] good convergence for the unaccelerated
algorithm is obtained for such problems by optimizing interface conditions. Further research is
necessary to determine whether such a conclusion also holds for the accelerated algorithm.
A disadvantage of the methods described in [25, 24] is that convergence seems to depend
sensitively on the coupling parameters. Another problem is that these methods are in general
difficult to implement, especially in complex CFD codes. Besides that, one can imagine that
the complexity of such methods increases with the complexity of the CFD code: for instance,
extensions from two to three dimensions, adding new models and turbulence modeling. This
property makes the CFD software more difficult to maintain.
These are the reasons that we have omitted optimization of interface conditions and have chosen
to start with a simple Schwarz-Schwarz domain decomposition method for the incompressible
Navier-Stokes equations. We intend to study the parallel aspects of the present method and to
generalize the method to the full incompressible Navier-Stokes equations in general coordinates
on staggered grids, see [6, 5].
Acknowledgements
The authors would like to thank P. Wesseling for many valuable discussions on this work.
References
Domain decomposition algorithms of Schwarz type
Iterative methods for the solution of elliptic problems on regions partitioned into substructures
To overlap or not to overlap: a note on a domain decomposition method for elliptic problems
A parallel domain decomposition algorithm for the incompressible Navier-Stokes equations
Schwarz domain decomposition for the incompress- sible Navier-Stokes equations in general coordinates
A domain decomposition method for the advection-diffusion equation
The construction of preconditioners for elliptic problems by substructuring I
A comparison of some domain decomposition and ILU preconditioned iterative methods for nonsymmetric elliptic problems
On the relationship between overlapping and nonoverlapping domain decomposition methods
Some recent results on Schwarz type domain decomposition algorithms
An iterative procedure with interface relaxation for domain decomposition methods
Experiences with domain decomposition in three di- mensions: Overlapping Schwarz methods
Iterative solution of large sparse systems of equations
Multigrid on composite meshes
Aerodynamics applications of Newton-Krylov-Schwarz solvers
Iterative solution of elliptic equations with refinement: The two-level case
A relaxation procedure for domain decomposition methods using finite elements
Benchmark solutions for the incompressible Navier-Stokes equations in general coordinates on staggered grids
GMRES: a generalized minimal residual algorithm for solving non-symmetric linear systems
A domain decomposition method for incompressible flow
Domain decomposition methods in computational mechanics
Local coupling in domain decomposition
The superlinear convergence behaviour of GMRES
Some Schwarz methods for symmetric and nonsymmetric elliptic problems
Schwarz and Schur: a note on finite-volume domain decomposition for advection-diffusion
A pressure-based composite grid method for the Navier-Stokes equations
Keywords: Krylov subspace method; advection-diffusion equation; Neumann-Dirichlet method; Schwarz alternating method; domain decomposition; Krylov-Schwarz algorithm
A Parallel Grid Modification and Domain Decomposition Algorithm for Local Phenomena Capturing and Load Balancing

Abstract: Lions' nonoverlapping Schwarz domain decomposition method based on a finite difference discretization is applied to problems with fronts or layers. For the purpose of getting an accurate approximation of the solution by solving small linear systems, grid refinement is made on subdomains that contain fronts and layers, and uniform coarse grids are applied on subdomains in which the solution changes slowly and smoothly. In order to balance loads among different processors, we employ small subdomains with fine grids for rapidly-changing-solution areas, and big subdomains with coarse grids for slowly-changing-solution areas. Numerical implementations in the SPMD mode on an nCUBE2 machine are conducted to show the efficiency and accuracy of the method.

1 Introduction
Grid refinement methods have been proved to be essential and efficient in solving large-scale problems
with localized phenomena, such as boundary layers or wave fronts. For many engineering problems, however, this still leads to large linear systems of algebraic equations, which cannot be solved easily even on today's largest computing systems.
Domain decomposition methods have been extensively investigated recently since they provide
a mechanism for dividing a large problem into a collection of smaller problems; each of them can
be solved sequentially or in parallel, on a workstation or a processor with moderate computing
capability.
In this work, we investigate the possibility of combining domain decomposition methods and grid refinement techniques. We first divide the original physical domain into a collection of subdomains, and then apply fine grids in subdomains that contain local phenomena and coarse grids in
This paper appeared in: Journal of Scientific Computing, 12(1997), 99-117.
y Department of Mathematics, Wayne State University, Detroit, Michigan 48202. Email: yang@math.wayne.edu,
dyang@na-net.ornl.gov
subdomains in which the solution changes slowly. For the purpose of load balancing among different
processors, when solved on a parallel computer or a distributed computing system, the size of the
subdomains that contain local phenomena should be kept small, while the subdomains in which the
solution changes smoothly should be large. In our implementation, we try to keep approximately
the same number of degrees of freedom for each subdomain.
Our results are experimental and for one-dimensional problems. Multi-dimensional problems
can be considered similarly but with more complexity. In section 2, we will describe the domain
decomposition method with grid modification, and in section 3, we will conduct some numerical
experiments.
2 A domain decomposition method with local grid refinement
In this section, we describe a domain decomposition method which will be combined with grid
refinement techniques.
2.1 The differential problem
Consider the model problem in 1-D:

    -(q u_x)_x + (p u)_x = f  in Ω,   (1)

with boundary conditions at the two endpoints of Ω,

    α u + q ∂u/∂ν = g,   (2)-(3)

where ν is the outward normal, and α and g are given constants.
Let the domain Ω be decomposed into K nonoverlapping subdomains Ω_1, ..., Ω_K satisfying Ω̄ = ∪_k Ω̄_k. Then the system (1)-(3) is equivalent to the following split problem:
the differential equation and the outer boundary conditions restricted to each Ω_k, together with continuity of u and of the flux q ∂u/∂ν_m across each interface, where ν_m is the outward normal to the boundary of subdomain Ω_k. In [5], Lions proposed a nonoverlapping Schwarz algorithm which can be heuristically stated as follows. Choose the initial guess u^0_k satisfying the given boundary conditions (2)-(3). For n = 0, 1, 2, ..., the sequence u^{n+1}_k is constructed such that

    L u^{n+1}_k = f  in Ω_k,
    -q ∂u^{n+1}_k/∂ν_m + λ_km u^{n+1}_k = -q ∂u^n_m/∂ν_m + λ_km u^n_m  on ∂Ω_k ∩ ∂Ω_m,

where λ_km is a parameter which can be chosen to speed up the convergence, and L is the given elliptic operator.
Després [1], Douglas et al. [2], and Kim [4] have considered discretizations of this procedure by the hybridized mixed finite element method and the finite difference method and have applied the procedure to wave equations. Mu and Rice [7] considered collaborating PDE solvers in the context of domain decomposition.
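To make the iteration concrete, here is a minimal 1-D sketch (hypothetical code, not the paper's implementation) of the nonoverlapping Robin-Schwarz iteration for -u'' = f with two subdomains. The interface rows are assembled so that the converged limit coincides with the single-domain finite difference solution, and with λ = 2 (= 1/b for the interface at b = 0.5) the iteration happens to converge in a couple of sweeps:

```python
import numpy as np

# -u'' = f on (0,1), u(0) = u(1) = 0, two nonoverlapping subdomains
# with interface at b = 0.5 and Robin transmission conditions.
n = 20                               # cells per subdomain
h = 0.5 / n
f = lambda x: np.sin(np.pi * x)
xl = np.linspace(0.0, 0.5, n + 1)    # subdomain 1 nodes (last node = interface)
xr = np.linspace(0.5, 1.0, n + 1)    # subdomain 2 nodes (first node = interface)
lam = 2.0                            # Robin parameter lambda_km

def solve_tridiag(sub, diag, sup, rhs):
    A = np.diag(diag) + np.diag(sub, -1) + np.diag(sup, 1)
    return np.linalg.solve(A, rhs)

u = np.zeros(n + 1)                  # iterate on subdomain 1
w = np.zeros(n + 1)                  # iterate on subdomain 2

for _ in range(3):
    # Subdomain 1: Dirichlet at x = 0, Robin row at the interface (data from w).
    g1 = (w[1] - w[0]) / h + lam * w[0] + h * f(0.5)
    diag = np.full(n + 1, 2.0 / h**2); diag[0] = 1.0; diag[-1] = 1.0 / h + lam
    sub  = np.full(n, -1.0 / h**2);    sub[-1] = -1.0 / h
    sup  = np.full(n, -1.0 / h**2);    sup[0] = 0.0
    rhs  = f(xl); rhs[0] = 0.0; rhs[-1] = g1
    u = solve_tridiag(sub, diag, sup, rhs)

    # Subdomain 2: Robin row at the interface (data from u), Dirichlet at x = 1.
    g2 = -(u[-1] - u[-2]) / h + lam * u[-1] + h * f(0.5)
    diag = np.full(n + 1, 2.0 / h**2); diag[0] = 1.0 / h + lam; diag[-1] = 1.0
    sub  = np.full(n, -1.0 / h**2);    sub[-1] = 0.0
    sup  = np.full(n, -1.0 / h**2);    sup[0] = -1.0 / h
    rhs  = f(xr); rhs[0] = g2; rhs[-1] = 0.0
    w = solve_tridiag(sub, diag, sup, rhs)

# The converged iterates reproduce the single-domain finite difference solution.
x = np.linspace(0.0, 1.0, 2 * n + 1)
Ag = (np.diag(np.full(2 * n - 1, 2.0)) - np.diag(np.full(2 * n - 2, 1.0), 1)
      - np.diag(np.full(2 * n - 2, 1.0), -1)) / h**2
ug = np.zeros(2 * n + 1); ug[1:-1] = np.linalg.solve(Ag, f(x[1:-1]))
assert abs(u[-1] - w[0]) < 1e-10                      # continuity at the interface
assert np.max(np.abs(np.concatenate([u, w[1:]])) - np.max(np.abs(ug))) < 1e10  # sanity
assert np.max(np.abs(np.concatenate([u, w[1:]]) - ug)) < 1e-10
```

The Robin right-hand sides g1 and g2 contain the half-cell source term h f(0.5), so that at the fixed point the sum of the two interface rows reproduces the standard three-point stencil at the interface node and their difference enforces u = w there.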
2.2 The finite difference discretization
Assume that a uniform grid with mesh size h_k is used in subdomain Ω_k and that there are d_k grid points in Ω_k, with the i-th grid point denoted by x_{k,i}. The interface point between Ω_k and Ω_{k+1} is x_{k,d_k} (= x_{k+1,1}), k = 1, ..., K-1. The endpoints of Ω are x_{1,1} and x_{K,d_K}. We denote, for example, u_{k,i} = u(x_{k,i}).
In the following, we describe two schemes based on finite difference discretization, which differ
from each other in the way of discretizing the convection term.
Scheme 1:
For the first grid point x_{k,1} of subdomain Ω_k, using the exterior point x_{k,0}, we have the second order finite difference scheme (15) and the first order interface condition (16) at x_{k,1} (see Figure 1). Note that the coefficients of q in (16) coincide only if q is continuous at x_{k,1} (= x_{k-1,d_{k-1}}). Thus (16) represents a general form of continuity of function value and flux.
[Figure 1: Grid points x_{k-1,1}, ..., x_{k-1,d_{k-1}} in subdomain Ω_{k-1}, and the interface between Ω_{k-1} and Ω_k at x_{k,1}.]
Eliminating u^{n+1}_{k,0} from (15) and (16), we obtain the equation for the first point in subdomain Ω_k. For interior points of subdomain Ω_k, we apply the second order finite difference scheme. For the last point of subdomain Ω_k, we have the analogous equations (17)-(18). Similarly, in (17)-(18) the point x_{k,d_k+1} is an imaginary one; the value u^n_{k,d_k+1} will be eliminated, under the assumption that q is continuous at x_{k,d_k}. Thus we have the linear system (19)-(21) for subdomain Ω_k, where for the first subdomain Ω_1 the equation (19) should be changed to (22), and for the last subdomain Ω_K the equation (21) should be changed to (23). Here we assumed that h_k has been chosen such that the restriction (24) holds.
It is easy to see that when the sequence u^n_{k,j} converges as n → ∞, the limit, denoted by u_{k,j}, satisfies the standard global finite difference scheme for problem (1)-(3), and thus the error is O(h) in the L^∞ norm, where h = max_k h_k.
Scheme 2:
In this scheme we employ the first order finite difference approximation to the convection term on each interface. To be more specific, applying the first order finite difference to the convection term in (15), we obtain a modified interface equation; similarly, we replace (17) by its first order analogue. Then the equation (19) should be changed to (27), and the equation (21) should be changed to (28). All other equations remain unchanged. Thus Scheme 2 consists of equations (27), (20), (28), (22), and (23). For this scheme, the assumption (24) is not required.
2.3 The matrix form
We now rewrite Scheme 1 into a matrix form which may be helpful when implementing the scheme. Define the tridiagonal matrices D_k, k = 1, ..., K, collecting the coefficients α_{k,i} of the subdomain systems, and define the block tridiagonal matrix G with the blocks D_k on its diagonal. Then the system (19)-(23) can be rewritten as

    G u^{n+1} = H u^n + F,   (39)

for a suitable matrix H and right-hand side F. Scheme 2 can also be put in the matrix form (39), but with different definitions of G and H.
2.4 Convergence analysis and the relaxation parameter
Let ρ(A) denote the spectral radius of a matrix A. The relaxation parameters λ_{k,m} will be chosen such that ρ(G^{-1}H) = 0. Let the elements of D_k^{-1} be denoted by y^{(k)}_{i,j}, that is, D_k^{-1} = (y^{(k)}_{i,j}). We will show that such λ_{k,m} can be obtained from the relation (41), which links each parameter to a ratio of two elements of the corresponding D_k^{-1}, if we require the parameters to be positive. Note that the formula (41) is valid for both Scheme 1 and Scheme 2. Following Tang [8], we give a proof of (41) in the case of three subdomains, K = 3. Deleting all the columns of G^{-1}H except the eight nonzero ones, together with the corresponding rows, we see that ρ(G^{-1}H) equals the spectral radius of a small matrix whose entries are built from the elements y^{(k)}_{i,j} and the parameters λ_{k,m}. A direct computation of its characteristic polynomial shows that ρ(G^{-1}H) = 0 provided certain products z of these entries vanish. In view of (35)-(38) for Scheme 1, and similar formulas for Scheme 2, the vanishing of these products is exactly condition (41). This finishes the proof of (41) in the case K = 3.
In order to find the optimal parameters by (41), we need to know the ratio of two elements of the inverse of the matrix D_k. Assume that λ_{k-1,k} has been found; the ratio can then be determined from the relation (46). Indeed, an incomplete LU-factorization (omitting the last step) of D_k leads to D_k = LU, where L has diagonal elements 1 and U is upper bidiagonal with diagonal elements U^{(k)}_{i,i}. The (d_k - 1)-th row of (46) expresses the desired ratio in terms of the entries of U, and in view of (41) we obtain the recursion (47). Thus optimal parameters λ_{k,k+1} can be found recursively from (47) by incomplete LU-decompositions, so that the iteration matrix of the system (39) has spectral radius 0. This also gives the convergence of both Scheme 1 and Scheme 2. Note that the matrix D_k is tridiagonal. Thus the required ratio can be obtained by a single (not nested) loop (with fewer than 3 d_k floating point operations), and an explicitly formed incomplete LU decomposition is not needed. In our implementation, each processor (except the last one) computes the parameter at the right-hand boundary of its subdomain. For constant coefficients, Tang [8] used (41) to find optimal parameters by directly computing the inverses of the matrices D_k. Note that Tang's analysis in the case of minimum subdomain overlap is equivalent to the nonoverlapping case presented here. Kim [4] applied the procedure (47) to wave problems. The analysis here extends Tang's and Kim's to nonuniform grids with an explicit treatment of discontinuous diffusion coefficients.
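The single-loop computation of the last-row ratio can be sketched as follows (illustrative code; it relies on the standard identity y_{d,d-1}/y_{d,d} = -a_d / u_{d-1} for a tridiagonal matrix with subdiagonal entries a_i and LU pivots u_i, which the test checks against a directly computed inverse):

```python
import numpy as np

def last_row_ratio(sub, diag, sup):
    """Return y_{d,d-1} / y_{d,d} for Y = D^{-1}, D tridiagonal.

    u is the running pivot of the LU factorization of D; the loop stops one
    step early ("incomplete" LU), and the ratio equals -sub[-1] / u_{d-1}.
    """
    u = diag[0]
    for i in range(1, len(diag) - 1):          # single, non-nested loop
        u = diag[i] - sub[i - 1] * sup[i - 1] / u
    return -sub[-1] / u

rng = np.random.default_rng(3)
d = 7
sub = rng.uniform(-1.0, 0.0, d - 1)
sup = rng.uniform(-1.0, 0.0, d - 1)
diag = 3.0 + rng.uniform(0.0, 1.0, d)          # diagonally dominant: safe pivots

D = np.diag(diag) + np.diag(sub, -1) + np.diag(sup, 1)
Y = np.linalg.inv(D)
assert np.isclose(last_row_ratio(sub, diag, sup), Y[-1, -2] / Y[-1, -1])
```

This mirrors the remark above: for a tridiagonal D_k the ratio needed in (47) costs only O(d_k) operations and no explicit factorization or inversion.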
3 Numerical experiments
In this section, we implement our grid refinement and domain decomposition strategy on the nCUBE2, a MIMD parallel computer with distributed memory, located in the Department of Computer Science at Purdue. It is observed that Scheme 2 gives slightly better accuracy than Scheme 1 and does not have the restriction described in (24); thus we report our results for Scheme 2 only. It should be noted that Scheme 1 gives a second order approximation to the convection term as well as to the diffusion term and provides a natural approach to discretizing the differential equation.
3.1 Hypercube machines
We now describe briefly hypercube machines and those of their properties that influence our implementation. By definition, an r-dimensional hypercube machine has N = 2^r nodes and r 2^{r-1} edges. Each node corresponds to an r-bit binary string, and two nodes are linked with an edge if and only if their binary strings differ in precisely one bit. See Figure 2. As a consequence, each node is adjacent to r = log N other nodes, one for each bit position. In addition to a simple recursive structure, the hypercube also has low diameter (log N) and high bisection width (N/2).
At each iteration of our algorithm, each node computes an error between the current solution
and the solution at the previous iteration on its subdomain. The maximum among the errors on all
the nodes needs to be found to decide whether the iterative process should be stopped. In our implementation,
we first let each node whose binary string has the form 1 i_(r−2) · · · i_0 send its value
to the node whose binary string is 0 i_(r−2) · · · i_0, and then let the receiving node
compute and keep the maximum of the two values. After this operation, we have only half of the
values to deal with, stored in the nodes whose leading bit is 0. By the recursive structure of the
hypercube machine, we need to continue only log N steps to get the maximum value among all the
subdomain errors. The maximum is thus finally found on node 0. If this maximum is less than a
prescribed small number, node 0 then sends a message to every other node to stop. Otherwise the
iterative process continues until, at some later iteration, the maximum error is small.
Figure
2: Examples of interprocessor connection and data network for hypercube machines.
For example, consider a 2-dimensional hypercube, corresponding to Figure 2. At
the first step, nodes 10 and 11 send their values to nodes 00 and 01, respectively. Then node 01
computes and stores the maximum value among nodes 01 and 11, and node 00 computes and stores
the maximum value among nodes 00 and 10. At the second step, node 01 sends its updated value
to node 00, and node 00 computes and stores the maximum. This procedure can be efficiently
implemented using bit masks provided by the C programming language.
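The reduction described above can be sketched as the following sequential simulation (the array index plays the role of a node's binary string; this is an illustration, not the nCUBE2 code):

```java
// Simulate the hypercube maximum-reduction described in the text.
// values[i] holds the local error of the node whose binary string is i;
// the array length must be a power of two (2^r nodes).
// After log N steps, values[0] holds the global maximum.
class HypercubeMax {
    static double maxReduce(double[] values) {
        int n = values.length;
        // Clear the highest bit first (10 -> 00, 11 -> 01), then the next, etc.
        for (int bit = n >> 1; bit >= 1; bit >>= 1) {
            for (int node = 0; node < n; node++) {
                if ((node & bit) != 0) {
                    int partner = node & ~bit;  // bit mask gives the receiving node's id
                    values[partner] = Math.max(values[partner], values[node]);
                }
            }
        }
        return values[0];
    }
}
```

For a 2-dimensional hypercube this reproduces the example in the text: nodes 10 and 11 fold into 00 and 01 in the first step, and 01 folds into 00 in the second.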
3.2 Parallel implementation
We adopt the following stopping criterion (48) for the iterative procedure: the iteration stops once the
change between successive iterates, measured in the norm ‖ · ‖_1, falls below a prescribed tolerance,
where ‖ · ‖_1 denotes the discrete L^1 norm. Subdomain problems are solved by Gauss
elimination with banded storage. In all the computations below, we choose the initial guess to be
zero. When the stopping criterion (48) is satisfied, the procedure is stopped and the relative errors
in the L 1 norm between the iterative solution and the true solution are computed.
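As a sketch of the stopping test (assuming a relative discrete L^1 criterion; the exact form of (48) is not recoverable from the source):

```java
// Per-node stopping test: discrete L1 norm of the difference between
// successive iterates, relative to the L1 norm of the current iterate.
// The relative normalization is an assumption; drop 'norm' for an absolute test.
class StopTest {
    static boolean converged(double[] uNew, double[] uOld, double tol) {
        double diff = 0.0, norm = 0.0;
        for (int i = 0; i < uNew.length; i++) {
            diff += Math.abs(uNew[i] - uOld[i]);
            norm += Math.abs(uNew[i]);
        }
        return diff <= tol * norm;
    }
}
```

On a hypercube, each node would evaluate this on its subdomain and the maxima would then be combined by the reduction of Section 3.1.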
To test the accuracy of the method, consider
Example 1: a two-point boundary value problem for u(x) involving a parameter q. The exact solution
involves the dimensionless quantity B, the Peclet number. When B ≫ 1, u(x) exhibits
a boundary layer of thickness O(B^(−1)). In this example, we use
different values for q.
Table
1: Numbers of iterations and L 1 norms of the errors between the true solution and the domain
decomposition solution for Example 1 with different Peclet numbers and grid sizes, implemented
on 16 processors with 16 subdomains.
Number of iterations: 21 20 20 20
Errors of GMDD: 3.9E-2 4.7E-2 3.2E-2 3.4E-2
Errors of FDM1: 8.7E-2 2.5E-1 4.5E-1 5.8E-1
Errors of FDM2: 3.4E-2 3.5E-2 3.5E-2 3.5E-2
The domain is decomposed into 16 subdomains. Since the boundary layer is near the right endpoint,
we apply a coarse grid for the left 10 subdomains, a fine grid for the rightmost 4 subdomains, and a
medium grid for the middle two (the 11-th and 12-th, counting from the left) subdomains. The sizes
of the subdomains are arranged such that each subdomain contains approximately the same number
of degrees of freedom. Let H_c, H_f, and H_m be the sizes of the coarse, fine, and medium grid
subdomains, respectively. Requiring equal numbers of grid points per subdomain yields a small
linear system; solving it, we obtain the size of each subdomain. We denote this Grid Modification and
Domain Decomposition method by GMDD. To compare its accuracy, we also employ the second
order finite difference method without domain decomposition with:
1. (denoted by FDM1) a uniform grid such that the total number of unknowns is equal to that of
GMDD;
2. (denoted by FDM2) a uniform fine grid. The total number of unknowns for
FDM2 is much bigger than that for GMDD.
The results in Table 1 show that the domain decomposition solution is more accurate than the
global method with the same number of degrees of freedom, and as accurate as the global method
with a much larger number of degrees of freedom. For example, our grid
modification and domain decomposition method has only 80 degrees of freedom while the global
finite difference method with fine grid (FDM2 in Table 1) has 600 degrees of freedom. But the
accuracies of the two methods are about the same.
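The subdomain sizing used above can be sketched as follows (a sketch under the stated assumption that "the same number of degrees of freedom" means the same number of grid points per subdomain; the counts 10, 2, and 4 are those of Example 1):

```java
// Choose subdomain lengths Hc, Hm, Hf so that each of the 10 coarse,
// 2 medium, and 4 fine subdomains holds the same number of grid points
// and the lengths tile the unit interval: 10*Hc + 2*Hm + 4*Hf = 1.
class SubdomainSizes {
    // hc, hm, hf are the coarse/medium/fine mesh sizes; returns {Hc, Hm, Hf}.
    static double[] sizes(double hc, double hm, double hf) {
        double n = 1.0 / (10 * hc + 2 * hm + 4 * hf); // grid points per subdomain
        return new double[]{n * hc, n * hm, n * hf};
    }
}
```

With all three mesh sizes equal, this degenerates to 16 equal subdomains of length 1/16, as expected.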
We now consider the following non-selfadjoint problem with variable coefficients.
Example 2: a non-selfadjoint convection-diffusion equation with variable coefficients.
Table
2: Numerical results in the L 1 norm for Example 2 with different numbers of processors;
each processor takes care of one subdomain.
Number of processors 8
Number of iterations
Errors in L 1 norm 7.21E-2 6.26E-2 6.27E-2 5.19E-2
Table
3: Numerical results for Example 3 with discontinuous diffusion coefficient and different grid
sizes, in the case of 16 subdomains.
Number of iterations 78 79 79
The function f on the right-hand side is chosen so that the true solution has a boundary layer
near the right boundary. The domain is decomposed into
different numbers of subdomains, with the rightmost three subdomains discretized with a fine grid
and the rest of the subdomains with a coarse grid. The number of subdomains is equal
to the number of the processors in the implementation. The results for this example with different
numbers of processors are shown in Table 2. From Table 2 we see that the number of iterations
required for the procedure to stop is larger than for problems with constant coefficients (see Table
1). It is interesting to observe that the accuracy improves as the number of subdomains increases.
This agrees with our computing experience in [9].
Next, we examine how well this algorithm performs with discontinuous diffusion coefficients and
consider
Example 3: a convection-diffusion equation whose diffusion coefficient is discontinuous across the
domain; the exact solution is chosen accordingly.
Table
4: Numerical results for Example 4 with different numbers of processors; each processor takes
care of one subdomain.
Number of processors 8
Number of iterations 29 55 104 197
Errors in L 1 norm 2.99E-2 2.39E-2 2.37E-2 2.32E-2
We decompose the domain into 16 subdomains. The accuracy and number of iterations with
different grid sizes for Example 3 are shown in Table 3. From Table 3 we see that the number
of iterations and the errors are larger for discontinuous diffusion coefficients than for continuous
coefficients.
Finally, we consider another convection-diffusion problem
Example 4: the function f on the right-hand side is chosen so that the true solution has a boundary
layer near the right boundary. The domain is decomposed into
different numbers of subdomains, with one third of the subdomains on the right-hand side being
discretized with a fine grid of size h_f, two thirds of the subdomains on the left-hand side having a
coarse grid of size h_c, and one middle subdomain between them having an intermediate grid size. The length
of each subdomain is chosen as in Example 1 such that each subdomain has approximately the
same number of degrees of freedom. The number of subdomains is still equal to the number of the
processors in the implementation. Numerical results for this example are shown in Table 4.
Concluding Remarks
We have implemented a non-overlapping Schwarz domain decomposition method with grid modification
for elliptic problems involving localized phenomena, such as fronts or layers. In order to
capture local phenomena and save computational work, we apply fine grids in subdomains that
contain fronts or layers, and coarse grids in other subdomains. However, when implemented on
parallel computing systems, the subdomain problems must have approximately the same computational
complexity so that loads on different processors or workstations can be balanced. This is
important for synchronization and achievement of a good speedup.
Though our implementation is for one-dimensional and steady problems, the methodology applies
to time-dependent and multi-dimensional problems. Time-dependent problems are usually
first discretized in time to elliptic problems at each time step, and then domain decomposition
algorithms can be applied. However, for advection-dominated transport problems, discretization in
space and time should be coordinated. For example, upwinding techniques can be incorporated in
the discretization process.
Nonoverlapping Schwarz domain decomposition methods have recently received a lot of attention,
owing to their efficiency and elegant simplicity of implementation, great savings in computer storage
(in 3D, even a small overlap of subdomains can require substantially more storage), and direct applicability
to transmission problems. In this direction, several other types of domain decomposition have
been considered [6, 11, 10, 12]. Their modifications can apply directly to domain decompositions
with cross points and with long and narrow subdomains. In [3], we successfully implemented a variant
of the methods [12] for selfadjoint and non-selfadjoint elliptic partial differential equations with
variable coefficients and full diffusion tensor, in the case of 100 subdomains with 81 cross points in
2-D. The variant also works well for long and narrow subdomains with length of a subdomain being
times as large as the width of the subdomain. In [9], the author considered a dynamic domain
decomposition method based on finite element discretization for two-dimensional problems. The
dynamic change of domain decompositions can provide a mechanism for capturing moving layers
and fronts, and for load balancing on different processors.
Acknowledgement: The author would like to thank Professor John R. Rice for his valuable advice
and for providing the computing facility in the Department of Computer Science at Purdue. He is
also grateful to Professor Jim Douglas, Jr. and Dr. S. Kim for helpful discussions. The referee's
suggestions also improved the quality of the paper.
--R
Domain decomposition method and the Helmholz problem
Finite difference domain decomposition procedures for solving scalar waves
On the Schwarz alternating method III: a variant for nonoverlapping subdomains
A relaxation procedure for domain decomposition methods using finite elements
Modeling with collaborating PDE solvers: theory and practice
Different domain decompositions at different times for capturing moving local phenomena
A parallel iterative nonoverlapping domain decomposition procedure for elliptic problems
A parallel iterative domain decomposition algorithm for elliptic problems
A parallel iterative nonoverlapping domain decomposition method for elliptic interface problems
--TR
A relaxation procedure for domain decomposition methods using finite elements
Generalized Schwarz splittings
Different domain decompositions at different times for capturing moving local phenomena
--CTR
Daoqi Yang, Finite elements for elliptic problems with wild coefficients, Computational science, mathematics and software, Purdue University Press, West Lafayette, IN, 2002
J. R. Rice , P. Tsompanopoulou , E. Vavalis, Fine tuning interface relaxation methods for elliptic differential equations, Applied Numerical Mathematics, v.43 n.4, p.459-481, December 2002 | finite difference method;grid modification;parallel computing;domain decomposition method |
271778 | Constructing compact models of concurrent Java programs. | Finite-state verification technology (e.g., model checking) provides a powerful means to detect concurrency errors, which are often subtle and difficult to reproduce. Nevertheless, widespread use of this technology by developers is unlikely until tools provide automated support for extracting the required finite-state models directly from program source. In this paper, we explore the extraction of compact concurrency models from Java code. In particular, we show how static pointer analysis, which has traditionally been used for computing alias information in optimizers, can be used to greatly reduce the size of finite-state models of concurrent Java programs. | Introduction
Finite-state analysis tools (e.g., model checkers) can automatically
detect concurrency errors, which are often subtle
and difficult to reproduce. Before such tools can be applied
to software, a finite-state model of the program must be
constructed. This model must be accurate enough to verify
the requirements and yet abstract enough to make the
analysis tractable. In this paper, we consider the problem
of constructing such models for concurrent Java programs.
We consider Java because, with the explosion of internet
applications, Java stands to become the dominant language
for writing concurrent software. A new generation of programmers
is now writing concurrent applications for the first
time and encountering subtle concurrency errors that have
heretofore plagued mostly operating system and telephony
switch developers. Java uses a monitor-like mechanism for
thread synchronization that, while simple to describe, can
be difficult to use correctly (a colleague teaching concurrent
Copyright © 1998 by the Association for Computing Machinery, Inc.
Permission to make digital or hard copies of part or all of this work for
personal or classroom use is granted without fee provided that copies
are not made or distributed for profit or direct commercial advantage
and that copies bear this notice and the full citation on the first page.
Copyrights for components of this work owned by others than ACM
must be honored. Abstracting with credit is permitted. To copy
otherwise, to republish, to post on servers, or to redistribute to lists,
requires prior specific permission and/or a fee.
Java programming found that more than half of the students
wrote programs with nested monitor deadlocks).
Ideally, an analysis tool could extract a model from a
program and use the model to verify some property of the
program (e.g., freedom from deadlock). In practice, extracting
concurrency models is difficult to automate completely.
In order to obtain a model small enough for a tractable
analysis, an analyst must assist most existing tools by specifying
what aspects of the program to model. In particular,
the representation of certain variables is often necessary to
make the model sufficiently accurate, but these variables
must often be abstracted or restricted to make the analysis
tractable. Although a model that restricts the range of
a variable does not represent all possible behaviors of the
program and thus cannot technically be used to verify the
program has a property, the conventional wisdom is that
most concurrency errors are present in small versions of a
system[6, 9], thus these models can still be useful for finding
errors (testing).
Most previous work on concurrency analysis of software
has used Ada [7, 13, 12, 2, 3, 8]. Although some
aspects of these methods can also be applied to Java
programs, the Java language presents several new challenges and opportunities:
1. Due to the object-oriented style of typical Java pro-
grams, most of the variables that need to be represented
are fields of heap allocated objects, not stack
or statically allocated variables as is common in Ada.
2. Java threads must be created dynamically, thus it is
impossible (in general) to determine how many threads
a program will create. Although Ada tasks may be
created dynamically, many concurrent Ada programs
contain only statically allocated tasks.
3. Java has a locking mechanism to synchronize access to
shared data. This can be exploited to reduce the size
of the model.
The main contribution of this paper is to show how static
pointer analysis can be used to reduce the size of finite-state
models of concurrent Java programs. The method employs
virtual coarsening [1], a well-known technique for reducing
the size of concurrency models by collapsing invisible actions
(e.g., updates to variables that are local or protected by a
lock) into adjacent visible actions. The static pointer analysis
is used to construct an approximation of the run-time
structure of the heap at each statement. This information
can be used to identify which heap objects are actually local
to a thread, and which locks guard access to which variables.
This paper is organized as follows. We first provide a
brief overview of Java's concurrency features in Section 2.
Section 3 defines our formal model (transition systems) and
Section 4 explains how the size of such models can be reduced
with virtual coarsening given certain information on
run-time heap structure is available. We then explain how
to collect this information using static pointer analysis in
Section 5. Section 6 shows how to use the heap structure
information to apply the reductions. Finally, Section 7 concludes
2 Concurrency in Java
Java's essential concurrency features are illustrated by
the familiar bounded buffer system shown in Figure 1. In
Java, threads are instances of the class Thread (or a subclass
thereof) and are created using an allocator (i.e., new). The
constructor for Thread takes as a parameter any object implementing
the interface Runnable, which essentially means
the object has a method run(). Once a thread is started by
calling its start() method, the thread executes the run()
method of this object. Although threads may be assigned
priorities to control scheduling, in this paper we assume all
(modeled) threads have equal priority and are scheduled arbitrarily;
this captures all possible executions.
In the example, the program begins with the execution
of the static method main() by the main thread. This
creates an instance of an IntBuffer, creates instances of
Producer and Consumer that point to this IntBuffer, creates
instances of Thread that point to the Producer and
Consumer, and starts these threads, which then execute
the run() methods of the producer and consumer. The
producer and consumer threads put/get integers from the
shared buffer.
There are two types of synchronization in the bounded
buffer problem. First, access to the buffer should be mutually
exclusive. Every Java object has an implicit lock.
When a thread executes a synchronized statement, it must
acquire the lock of the object named by the expression before
executing the body of the statement, releasing the lock
when the body is exited. If the lock is unavailable, the
thread will block until the lock is released. Acquiring the
lock of the current object (this) during a method body is
a common idiom and may be abbreviated by simply placing
the keyword synchronized in the method's signature.
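For example, the following two methods are equivalent ways of acquiring the lock of the current object (a minimal illustration):

```java
class Counter {
    private int n = 0;

    // Explicit form: lock the current object around the body.
    void inc1() {
        synchronized (this) { n++; }
    }

    // Abbreviated form: the synchronized modifier locks 'this' for the whole body.
    synchronized void inc2() { n++; }

    synchronized int value() { return n; }
}
```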
The second type of synchronization involves waiting:
callers of put() must wait until there is space in the buffer,
callers of get() must wait until the buffer is nonempty. On
entry, the precondition for the operation is checked, and, if
false, the thread blocks itself on the object by executing the
wait() method, which releases the lock. When a method
changes the state of the object in such a way that a pre-condition
might now be true, it executes the notifyAll()
method, which wakes up all threads waiting on the object
(these threads must reacquire the object's lock before returning
from wait()).
3 Formal Model
We model a concurrent Java program with a finite-state
transition system. Each state of the transition system is
an abstraction of the state the Java program and each transition
represents the execution of code transforming this abstract
state. Formally, a transition system is a pair (S; T )
where
Heap Structure: two Thread objects reference the Producer and the Consumer; both the Producer
and the Consumer reference (via their buf fields) a shared IntBuffer, whose data field references
an int[].
public class IntBuffer {
  protected int [] data;
  protected int count = 0;
  protected int front = 0;

  public IntBuffer(int capacity) {
    data = new int[capacity]; // allocate array
    // data.length == size of array (capacity)
  }

  public void put(int x) {
    synchronized (this) {
      while (count == data.length) {
        try { wait(); }                 // wait until buffer not full
        catch (InterruptedException e) { }
      }
      data[(front + count) % data.length] = x;
      count++;
      if (count == 1)                   // buffer not empty
        notifyAll();
    }
  }

  public int get() {
    synchronized (this) {
      while (count == 0) {
        try { wait(); }                 // wait until buffer not empty
        catch (InterruptedException e) { }
      }
      int x = data[front];
      front = (front + 1) % data.length;
      count--;
      if (count == data.length - 1)
        notifyAll();                    // buffer not full
      return x;
    }
  }
}

public class Producer implements Runnable {
  protected int next = 0;               // next int to produce
  protected IntBuffer buf;
  public Producer(IntBuffer b) { buf = b; }
  public void run() {
    while (true) {
      buf.put(next++);
    }
  }
}

public class Consumer implements Runnable {
  protected IntBuffer buf;
  public Consumer(IntBuffer b) { buf = b; }
  public void run() {
    while (true) {
      int x = buf.get();
    }
  }
}

public class Main {
  public static void main(String [] args) {
    IntBuffer buf = new IntBuffer(10);  // buffer capacity (value illustrative)
    new Thread(new Producer(buf)).start();
    new Thread(new Consumer(buf)).start();
  }
}
Figure
1: Bounded Buffer Example
Transformations:
State Variables:
loc1, loc2: location of thread 1, 2
lock: state of lock (0 is free, 1 is taken)
x: value of program variable x (initially 1)
Thread 1:
1: Lock
2: Update x
3: Unlock
4: . . .
Thread 2:
5: Lock
. . .
State Space (fragment): state = (loc1, loc2, lock, x)
Figure
2: Example of Transition System
S ⊆ D1 × · · · × Dn is a set of states. A state is an
assignment of values to a finite set of state variables v1, . . . , vn, where each vi
ranges over a finite domain Di.
T ⊆ S × S is a transition relation. T is defined by
a set of guarded transformations t1, . . . , tk of the
state variables, where gi, called the
guard, is a boolean predicate on states, and ti :
S → S, called the transformation, is a map from states to
states. When gi(s) is true, we sometimes write ti(s) for the state reached by applying ti in s.
A trace of a transition system is a sequence of transitions
s0 → s1 → s2 → · · · such that (si, si+1) ∈ T for each i.
The method of constructing the transition system representing
a Java program is similar to the method presented in
[5] for constructing the (untimed) transition system representing
an Ada program. State variables are used to record
the current control location of each thread, the values of key
program variables, and any run-time information necessary
to implement the concurrent semantics (e.g., whether each
thread is ready, running, or blocked on some object). Each
transformation represents the execution of a byte-code instruction
for some thread. A depth-first search of the state
space can be used to enumerate the reachable states for anal-
ysis; at each state, a successor is generated for each ready
thread, representing that thread's execution. The small example
in Figure 2 gives the flavor of the translation.
The Java heap must also be represented. We bound the
number of states in the model by limiting the number of
instances of each class (including Thread) that may exist
simultaneously. For this paper, we assume these limits are
provided by the analyst. If class C has instance limit kC ,
when a program attempts to allocate an instance of class C
at a point where kC instances are still accessible (Java uses
garbage collection), the transition system goes to a special
trap state; the model does not represent the behavior of the
program beyond this point. As discussed in the introduc-
tion, such a restricted model can still be useful for finding
errors.
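A sketch of how such an instance limit might be enforced during exploration (class and method names are ours, not the paper's):

```java
import java.util.*;

// Enforce a per-class instance limit k_C during state-space exploration;
// allocating beyond the limit sends the model to a trap state.
class AllocLimit {
    private final Map<String, Integer> limit = new HashMap<>();
    private final Map<String, Integer> live  = new HashMap<>();
    boolean trapped = false;

    AllocLimit setLimit(String cls, int k) { limit.put(cls, k); return this; }

    void allocate(String cls) {
        int n = live.getOrDefault(cls, 0);
        if (n >= limit.getOrDefault(cls, Integer.MAX_VALUE))
            trapped = true;                 // trap state: behavior no longer modeled
        else
            live.put(cls, n + 1);
    }

    void collect(String cls) {              // an instance becomes unreachable (garbage collected)
        live.merge(cls, -1, Integer::sum);
    }
}
```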
Consider again the bounded buffer example in Figure 1.
We could generate a restricted model of this program by
representing all the variables but restricting their ranges.
By restricting the variables representing the contents of the
buffer to {0, 1} and the variables representing the size of
the buffer to {0, 1, 2}, we would obtain a very restricted but
interesting model of the program (i.e., one that would likely
contain any concurrency errors).
4 Reductions
The transition system (S; T ) produced by the method
sketched in Section 3 is much larger than required for most
analyses and is often too large to construct. Instead, we construct
a reduced transition system (S′, T′)
and use this for the analysis. We reduce the size of the
transition system using virtual coarsening [1], a well-known
technique for reducing the size of concurrency models by
amalgamating transitions. Since we are using an interleaving
model of concurrency, reducing the number of transitions
in each thread greatly reduces the number of possible states
by eliminating the interleavings of the collapsed transition
sequences.
The reduced transition system is constructed by classifying
each transformation defining T as visible or invisible and
then composing each (maximal) sequence of invisible transformations
in a given thread into the visible transformation
following that sequence. The transitions and states generated
by these composed transformations form (S′, T′). For
example, in Figure 2, we might replace transformations t2
and t3 with a single transformation t2 ∘ t3 that updates x and
releases the lock; we could then eliminate control location 3
from the domain of loc1.
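The composition of an invisible transformation into its visible successor can be sketched generically (an illustration, not the paper's implementation; here t2 and t3 are arbitrary integer-state updates):

```java
import java.util.function.UnaryOperator;

// Virtual coarsening composes two adjacent transformations into one:
// compose(t2, t3) behaves like executing t2 then t3 with no
// interleaving point between them.
class Compose {
    static <S> UnaryOperator<S> compose(UnaryOperator<S> first, UnaryOperator<S> second) {
        return s -> second.apply(first.apply(s));
    }

    // Tiny demonstration on integer "states": t2 adds one, t3 doubles.
    static int demo(int x) {
        UnaryOperator<Integer> t2 = v -> v + 1;
        UnaryOperator<Integer> t3 = v -> v * 2;
        return compose(t2, t3).apply(x);
    }
}
```

The point of the reduction is that other threads can never observe the intermediate state between the two composed steps.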
We assume the requirement to be verified (tested) is specified
as a stuttering-invariant formula f in linear temporal
logic (LTL) [11], the atomic propositions of which are of the
form vi = di, where vi is a state variable and di ∈ Di. We write
s ⊨ f when f is true in state s of
transition system (S, T). To be useful, the reduced transition
system should:
1. Be equivalent to the original transition system for the
purpose of verification. Specifically, for all s ∈ S′,
f should have the same truth value in s under (S′, T′) as under (S, T).
2. Be constructible directly from the program, without
first constructing (S; T ).
Note that the reduced model constructed is specific to the
formula f; thus the reduction must be repeated for each
property verified.
We classify a transformation as invisible and compose
it with its successor transformation(s) only if we can show
that this cannot change the truth value of f. An LTL formula
is constructed by applying temporal operators to state
predicates, which are boolean combinations of atomic propositions.
Let p1, . . . , pm be the state predicates of f. An
f-observation in a state s, denoted Pf (s), is a vector of m
booleans giving the values of p1, . . . , pm in s. A transformation t is
f-observable if it can change an f-observation, i.e., if Pf (s) ≠ Pf (t(s)) for some state s.
Each trace (s0, s1, . . .) defines a sequence of f-observations
Pf (s0), Pf (s1), . . . , which we reduce by combining
consecutive identical (i.e., stuttered) f-observations.
It is easy to show that the set of these reduced f-observation
sequences determines the truth value of f in s0 .
Therefore, to satisfy condition 1 above, we construct the
reduced transition system such that it has the same set of
reduced f-observation sequences as the original transition
system. To satisfy condition 2, we must do this without
constructing (S; T ); we must classify transformations as vis-
ible/invisible based on information obtained from the program
code. Below, we give two cases in which transformations
representing Java code can be made invisible. In both
cases, we need information about the structure of the heap
at run-time to apply the reduction. We show how to collect
this information in Section 5.
4.1 Local Variable Reduction
Some state variables are accessed only when a particular
thread is running. For example, some program variables are
locally scoped to a particular thread by the language seman-
tics. Also, the state variable recording the control location
of a thread is accessed only by that thread. Transformations
that access exclusively such state variables may be made invisible
provided they are not f-observable.
To understand why, consider transformation t in Figure
3. Assume t is not f-observable and accesses only variables
local to the thread whose code it represents. Let t 0
represent code in this same thread that can be executed
immediately following the code represented by t. We can
replace t and t′ with t ∘ t′ (if there are multiple successors
to t, say t′1, . . . , t′j, we replace t and each t′i with t ∘ t′i).
To prove that this
reduction does not change the truth value of f , we must
show that the resulting transition system has the same set
of reduced f-observation sequences as the original transition
system.
For any state s1 in which t is enabled, there may be one
or more sequences of transformations t1, . . . , tn representing
the execution of code from other threads (i.e., not the
thread of t). Combining t and t′ eliminates traces in which
t1, . . . , tn occurs between t and t′. This does not eliminate
any reduced f-observation sequences, however, since executing
t1, . . . , tn after t must produce the same reduced f-observation
sequence as executing it before t: t accesses only variables that
t1, . . . , tn do not, so t is independent of and commutes with each ti
(for any state in which both t and ti are enabled, executing them
in either order leads to the same state); and since t is not f-observable,
the trace obtained by executing t; t1, . . . , tn must have the
same reduced f-observation sequence as the trace obtained
by executing t1, . . . , tn; t.
To use this technique, we would like to determine what
variables are local to a particular Java thread (i.e., can only
be referenced by that thread). A program variable is local
to a thread if:
1. The variable is stack allocated (i.e., is declared in a
method body or as a formal parameter).
2. The variable is statically allocated and referenced by
at most one thread.
3. The variable is heap allocated (i.e., an instance variable
of an object) and the object is accessible only from
one thread. For example, the variable next of class
Before reduction
After reduction
Figure
3: Reduce by combining t and t 0
Producer in Figure 1 is accessible only by the producer
thread.
Case 1 is trivial to detect. Case 2 is more difficult due to
the dynamic nature of thread creation, though the following
conservative approximation is reasonable: a static variable
may be considered local if it is accessed only by code
reachable 1 from main(), or only by code reachable from a
single run() method of a class that is passed to a Thread
allocator at most once (the allocator is outside any loop or
recursive procedure). For example, if the variable next were
a static member of class Producer, then since that variable is
accessed only by code reachable from the Producer's run()
method, and since there is only one instance of Producer
created, this analysis could determine next is local to the
producer thread. Case 3 is the most difficult. Clearly if the
object containing the variable is accessible only from stack
or statically allocated variables that themselves are local to
a specific thread (cases 1 and 2), then the heap allocated
variable is also local to that thread, but determining this
requires information about the accessibility of heap objects
at run-time.
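As a concrete illustration of case 3 (all names here are hypothetical, not from the paper), the Scratch object below is reachable only from the thread running Worker.run(), so updates to its field can be treated as invisible:

```java
// Case 3: a heap-allocated object that is local to one thread.
class Scratch { int sum = 0; }

class Worker implements Runnable {
    volatile int result;

    // The Scratch instance allocated in compute() never escapes the
    // thread executing run(), so updates to s.sum need not be visible
    // transitions in the model.
    public void run() { result = compute(); }

    static int compute() {
        Scratch s = new Scratch();   // reference never leaves this thread
        for (int i = 0; i <= 10; i++)
            s.sum += i;              // invisible: no other thread can read s.sum
        return s.sum;
    }
}
```

A static analysis must establish that no reference to the Scratch object is ever stored where another thread could reach it.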
4.2 Lock Reduction
We propose another technique for virtual coarsening based
on Java's locking mechanism. A transformation that updates
a variable x of an instance of class C may be made
invisible provided it is not f-observable and there exists an
object ℓx such that any thread accessing x is holding the
lock of ℓx (ℓx may be the instance of C containing x). We
say the lock on ℓx protects x. The intuition behind this reduction
is that, even though other threads may access x,
they cannot do so until the current thread releases the lock
on 'x , thus any changes to x need not be visible until that
lock is released.
The correctness of this reduction can be shown using
the diagram in Figure 3. The reasoning is similar to that
for the local variable reduction. Assume the only non-local
variables t accesses are those that are protected by locks.
The thread whose code t represents must hold the locks for
these variables at s1 . Therefore, although there exist transformations
representing code in other threads that accesses
these variables, such transformations cannot be in the sequence
since the other thread would block before
reaching such transformations.
Assuming f does not reference the state of a shared ob-
ject, this reduction allows us to represent complex updates
to such objects with two transformations. In the bounded
¹A statement s is reachable from a statement s′ if there exists a
path in the program's control flow graph from s′ to s (i.e., a thread
might execute s after executing s′).
Heap Structure: a Programmer object references two separate Object instances through its
hoursLock and salaryLock fields.
public class Programmer {
  protected long hours = 80;
  protected double salary = 50000.0;
  protected Object hoursLock = new Object();
  protected Object salaryLock = new Object();

  public void updateHours(long newHours) {
    synchronized (hoursLock) { hours = newHours; }
  }

  public void updateSalary(double newSalary) {
    synchronized (salaryLock) { salary = newSalary; }
  }
}
Figure
4: Example of Splitting Locks
buffer example, each execution of put() or get() updates
several variables, yet we can represent each call with a transformation
that acquires the lock and a transformation that
atomically updates the state of the buffer and releases the
lock.
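Figure 1's bounded buffer is not reproduced in this excerpt; the sketch below assumes a plausible shape for the IntBuffer class named in the text (the field data appears in Section 6.3, but the other fields and the method bodies are our own guesses). It shows why each put()/get() call updates several variables under one lock, which is what makes the two-transformation representation possible.

```java
// Sketch of a bounded buffer in the style of the paper's IntBuffer example.
// Each call updates several variables (data, count, head), all of which are
// invisible to other threads until the lock is released.
public class IntBuffer {
    protected int[] data;
    protected int count = 0;
    protected int head = 0;   // head index: our assumption

    public IntBuffer(int size) { data = new int[size]; }

    public synchronized void put(int x) {
        while (count == data.length) {            // block while full
            try { wait(); } catch (InterruptedException e) { throw new RuntimeException(e); }
        }
        data[(head + count) % data.length] = x;   // several updates, all
        count++;                                  // invisible until the
        notifyAll();                              // lock is released
    }

    public synchronized int get() {
        while (count == 0) {                      // block while empty
            try { wait(); } catch (InterruptedException e) { throw new RuntimeException(e); }
        }
        int x = data[head];
        head = (head + 1) % data.length;
        count--;
        notifyAll();
        return x;
    }
}
```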
In order to apply these reductions, we need to determine
which locks protect which variables. Clearly if an instance
variable of a class is only accessed within synchronized methods
of that class, then the variable is protected by the lock of
the object in which it is contained. Nevertheless, it is common
for variables to be protected by locks in other objects.
For instance, in the bounded buffer example, the array object
referenced by instance variable data is protected by the
lock on the enclosing IntBuffer object. This very common
design pattern is known as containment [10]: an object X is
conceptually 2 nested in an object Y by placing a reference
to X in Y and accessing X only within the methods of Y.
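A minimal sketch of the containment pattern (our own example, not from the paper): the array is referenced only by a private field of the enclosing object and is touched only inside that object's synchronized methods, so the enclosing object's lock protects the array's elements.

```java
// Containment: the double[] is conceptually nested in Polygon. The only
// reference to it is Polygon's private field, and all accesses occur inside
// Polygon's synchronized methods, so Polygon's lock protects the array.
public class Polygon {
    private final double[] xs;   // contained object: never leaks out
    private int n = 0;

    public Polygon(int capacity) { xs = new double[capacity]; }

    public synchronized void addX(double x) { xs[n++] = x; }

    public synchronized double sumX() {
        double s = 0;
        for (int i = 0; i < n; i++) s += xs[i];
        return s;
    }
}
```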
Another common design pattern in which locks protect
variables in other objects is splitting locks [10]. A class might
contain independent sets of instance variables that may be
updated concurrently. In this case, acquiring a lock on the
entire instance would excessively limit potential parallelism.
Instead, each such set of instance variables has its own lock,
usually an instance of the root class Object. An example is
given in Figure 4; two threads could concurrently update a
Programmer's hours and salary.
In general, determining which locks protect which variables
requires information about the structure of the heap
at run-time. Collecting this information is the topic of the
next section.
5 Reference Analysis
In this section, we describe a static analysis algorithm that
constructs an approximation of the run-time heap struc-
ture, from which we can collect the information needed for
the reductions. Understanding run-time heap structure is
an important problem in compiler optimization, since accurate
knowledge of aliasing can improve many standard optimizations.
2 Java does not allow physical nesting of objects.
One common approach is to construct a directed
graph for each program statement that represents a
finite conservative approximation of the heap structure for
all control paths ending at the statement. Several different
algorithms have been proposed, differing in the method of
approximation.
Our algorithm is an extension of the simple algorithm
given by Chase et al [4], which uses this basic approach. We
extend Chase's algorithm in three ways. First, we handle
multiple threads; Chase's algorithm is for sequential code.
Second, we distinguish current and summary heap nodes;
this allows us to collect information on one-to-one relationships
between objects. Third, we handle arrays.
5.1 The Program
For the reference analysis, we represent a multi-threaded
program as a set of control flow graphs (CFGs) whose nodes
represent statements and whose arcs represent possible control
steps. There is one CFG for each thread: one CFG for
the main() method and kC identical CFGs for each run()
method of class C (recall kC is the instance limit for class
C). In this paper, we do not handle interprocedural anal-
ysis. We assume all procedure (method) calls have been
inlined; this limits the analysis to programs with statically
bounded recursion. Polymorphic calls can be inlined using a
switch statement that branches based on the object's type
tag; since this tag is not modeled in our analysis, all methods
to which the call might dispatch will be explored.
In our algorithm, we require the concept of a loop block.
For each statement s, let loop(s) be the innermost enclosing
loop statement s is nested within (or null if s is not in any
loop). The set {s′ | loop(s′) = loop(s)} is called the loop block
of s.
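The loop-block definition can be sketched directly (helper names and representation are ours): given a map from each statement to its innermost enclosing loop, statements are grouped into loop blocks by that loop.

```java
// Sketch: group statement ids into loop blocks by their innermost
// enclosing loop, per the definition above.
import java.util.*;

public class LoopBlocks {
    public static Map<String, Set<Integer>> blocks(Map<Integer, String> loopOf) {
        Map<String, Set<Integer>> out = new HashMap<>();
        for (Map.Entry<Integer, String> e : loopOf.entrySet()) {
            // Statements outside any loop share the pseudo-block "<none>".
            String loop = (e.getValue() == null) ? "<none>" : e.getValue();
            out.computeIfAbsent(loop, k -> new TreeSet<>()).add(e.getKey());
        }
        return out;
    }
}
```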
Our analysis models only reference variables and values.
References are pointers to heap objects. A heap object contains
a fixed number of fields, which are references to other
heap objects (we do not model fields not having a reference
type). For class instances, the number of fields equals the
number of instance variables with a reference type (possibly
zero). For arrays, the number of fields equals zero (for an
array of a primitive type) or one (for an array of references);
in the latter case all array elements are represented by a
single field named [].
In Java, references can be manipulated only in four ways:
the new allocator returns a unique new reference, a field can
be selected, a field can be updated, and references can be
checked for equality (this last operation is irrelevant to the
analysis).
5.2 The Storage Structure Graph
A storage structure graph (SSG) is a finite conservative approximation
of all possible pointer paths through the heap
at a particular statement s. There are two types of nodes
in an SSG: variable nodes and heap nodes. There is one
variable node for each statically allocated reference variable
and for each stack allocated reference variable in scope at
s. There are one or two heap nodes for each allocator A
(e.g., new C()) in the program, depending on the location
of statement s in relation to A. If s is within the loop block
of A or in a different thread/CFG than A, the SSG for s
contains a current node for A, which represents the current
instance of class C, i.e., the instance allocated by A in the current
iteration of A's loop block. For all statements s, the
SSG for s contains a summary node for A, which represents
[Figure 5: SSG for Statement 4 — diagram elided. It shows variable nodes x, y, and z, heap nodes 1:C and 2:C, and summary node 2:C*, connected through f fields; the accompanying code declares class C with two fields, stack variables C x, y, z, and a while loop containing statements 1-5, whose bodies are elided in this extraction.]
the summarized instances of class C, i.e., all instances allocated
by A in completed iterations of A's loop block.
Each heap node has a fixed number of fields from which
edges may be directed. Each edge in the SSG for a statement
s represents a possible reference value at s. Edges
are directed from variable nodes and fields of heap nodes
towards heap nodes. In general, more than one edge may
leave a variable node or heap node field since different paths
to s may result in different values for that reference. Even
if there is only one path to s, there may be multiple edges
leaving a summary node or array field since such nodes represent
multiple variables at run-time.
An example SSG is shown in Figure 5. We elide parts of
the code not relevant to the analysis with ellipses and prepend
line numbers to simple statements for identification. Variable
nodes are shown as circles, heap nodes as rectangles
with a slot for each field. Heap nodes are labeled with the
name of the class prefixed with the statement number of the
allocator. Summary nodes are suffixed with an asterisk(*).
Thus 2:C* represents the summary node for the allocator of
class C at statement 2. We often omit disconnected nodes
(e.g., the summary node for an allocator that is not in a
loop). Note that the linked list is represented with a self
loop on node 2:C*.
Like Chase et al [4], we distinguish objects of the same
class that were allocated by different allocators. This heuristic
is based on the observation that objects allocated by a
given allocator tend to be treated similarly. For example,
both Employee and Meeting objects might contain a nested
Date object allocated in their respective constructors (i.e.,
there are two Date allocators). By distinguishing the two
kinds of Date objects, the analysis could determine that a
Date inside of an Employee cannot be affected when the Date
inside of a Meeting is updated.
A conservative SSG for a statement s contains the following
information about the structure of the heap at run-time:
1. If there exists an edge from the node for variable X to
a heap node for allocator A, then after some execution
path ending at s (i.e., s has just been executed by the
thread of its CFG), X may point to an object allocated
by A. Otherwise, X cannot point to any object
allocated by A.
2. If there exists an edge from field F of the current heap
node for allocator B to a heap node for allocator A,
then after some execution path ending at s, the F field
for the current instance allocated by B may point to
an object allocated by A. Otherwise, the F field for
the current instance allocated by B cannot point to
any object allocated by A.
3. If there exists an edge from field F of the summary
heap node for allocator B to a heap node for allocator
A, then after some execution path ending at s, the F
field for some summarized instance allocated by B may
point to an object allocated by A. Otherwise, there is
no summarized instance allocated by B whose F field
points to any object allocated by A.
4. For each of the above three cases, if the heap node for
allocator A is the current node, then the reference must
be to the current instance allocated by A, otherwise
the reference is to some summarized instance allocated
by A.
Note that the useful information is the lack of an edge. One
graph is more precise than another if its edge set is a strict
subset of the other's.
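The precision order can be stated as plain set containment (a sketch; the edge-set representation is ours):

```java
// Precision as edge-set containment: an SSG is strictly more precise than
// another over the same nodes when its edges form a proper subset of the
// other's.
import java.util.*;

public class Precision {
    public static boolean morePrecise(Set<String> g1, Set<String> g2) {
        return g2.containsAll(g1) && !g1.containsAll(g2);
    }
}
```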
5.3 The Algorithm
We use a modified dataflow algorithm to compute, for each
statement, a conservative SSG with as few edges as possi-
ble. Initially, each statement has an SSG with no edges.
A worklist is initialized to contain the start statement of
main(). On each step, a statement is removed from the
head of the worklist and processed, possibly updating the
SSGs for that statement and all statements in other CFGs.
If any edges are added to the statement's SSG, the successors
of the statement in its CFG and any dependent statements
in other CFGs are added to the tail of the worklist. One
statement is dependent on another if they may reference the
same variable at run-time: they select the same static variable
or instance variable. The algorithm terminates when
the worklist is empty.
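The worklist loop described above can be sketched as follows, abstracting each statement's SSG to a set of edge labels and the step operation to at most one generated edge per statement (all names and this simplification are ours):

```java
// Sketch of the modified dataflow worklist loop: join predecessors by
// union, apply a (simplified) step, and re-queue successors only when new
// edges appear. Termination follows because edge sets only grow.
import java.util.*;

public class Worklist {
    public static Map<Integer, Set<String>> run(
            int start,
            Map<Integer, List<Integer>> succs,
            Map<Integer, List<Integer>> preds,
            Map<Integer, String> genEdge) {
        Map<Integer, Set<String>> ssg = new HashMap<>();
        Deque<Integer> work = new ArrayDeque<>();
        work.add(start);
        while (!work.isEmpty()) {
            int s = work.removeFirst();
            // Join: union of the SSGs of all immediate predecessors.
            Set<String> in = new HashSet<>();
            for (int p : preds.getOrDefault(s, List.of()))
                in.addAll(ssg.getOrDefault(p, Set.of()));
            // Step: this statement may contribute one edge of its own.
            if (genEdge.containsKey(s)) in.add(genEdge.get(s));
            Set<String> cur = ssg.computeIfAbsent(s, k -> new HashSet<>());
            // Re-queue successors only if new edges were added.
            if (cur.addAll(in))
                for (int n : succs.getOrDefault(s, List.of()))
                    work.add(n);
        }
        return ssg;
    }
}
```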
To process a statement, we employ three operations on
SSGs: join, step, and summarize. First, we compute the pre-
SSG for the statement by joining the SSGs of all immediate
predecessors in its CFG. SSGs are joined by taking the union
of their edge sets (this is an any-paths analysis). The pre-
SSG is then updated by the step operation (discussed below)
in a manner reflecting the semantics of the statement to
produce the post-SSG. Finally, if the statement is the last
statement of a loop block, the post-SSG is summarized to
produce the new version of the statement's SSG, otherwise
the post-SSG is the new version. We summarize an SSG by
redirecting all edges to/from the current nodes of allocators
within the loop block to their corresponding summary nodes
(see the SSGs for statement 6 in Figure 6).
The step operation uses abstract interpretation to update
the SSG (an abstract representation of the run-time
heap) according to the statement's semantics. Only assignments
to reference variables need be considered; other
statements cannot add edges to the SSG (i.e., the post-SSG
equals the pre-SSG). Each pointer expression has an l-value
and an r-value, defined as follows. The l-value of a variable
is the variable's node. The l-value of a field selector expression
x.f is the set of f fields of the nodes in the r-value
of x. The r-value of an expression is the set of heap nodes
pointed to from the expression's l-value, or, in the case of
an allocator, the current node for that allocator.
The semantics of an assignment e1 = e2 depend on
whether the left hand side is a stack variable or a local static
variable. If e1 is either a stack variable or a local static
variable, we perform a strong update by removing all edges out
of the node in l-value(e1) and then adding an edge from the
node in l-value(e1) to each node in r-value(e2 ). Otherwise,
we perform a weak update by simply adding an edge from
each node/field in l-value(e1) to each node in r-value(e2 ).
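A sketch of the strong/weak update distinction, with the SSG's edges abstracted as a map from locations to sets of heap-node names (representation ours):

```java
// Strong vs. weak update on an abstracted edge map.
import java.util.*;

public class Update {
    // Strong update: the l-value denotes a single run-time variable (a
    // stack variable or local static variable), so old edges are removed
    // before the new ones are added.
    public static void strong(Map<String, Set<String>> edges,
                              String lval, Set<String> rval) {
        edges.put(lval, new HashSet<>(rval));
    }

    // Weak update: the l-value may denote many run-time variables (e.g., a
    // field of a summary node), so new edges accumulate.
    public static void weak(Map<String, Set<String>> edges,
                            String lval, Set<String> rval) {
        edges.computeIfAbsent(lval, k -> new HashSet<>()).addAll(rval);
    }
}
```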
Any edges added to a statement's SSG (for a step or
summarize operation) are also added to the SSGs for all
statements in other CFGs; we assume threads may be scheduled
arbitrarily, thus any statement in another thread may
witness this reference value.
The execution of a thread allocator new Thread(x) is
treated as an assignment of x to a special field runnable in
the Thread object (this reflects the inlining of the constructor
for Thread). Let X be the set of classes to which the
object referenced by x might belong (i.e., all subclasses of
the type of x). When the allocator is processed, we add to
the worklist the start statement of every CFG for a run()
method of a class in X (i.e., the start statement of a CFG
is implicitly dependent on every thread allocator that might
start 3 the thread).
When a CFG for a run() method of a class C accesses
an instance variable of the current object this (e.g., the
expression next in the Producer's run() method of Figure 1)
the r-value of this is the set of heap nodes for class C pointed
to by runnable fields of heap nodes for class Thread (i.e., we
do not associate a given thread/CFG with a specific thread
allocator).
5.4 Computing One-to-One Relationships
The summarized information gathered by the above analysis
is not sufficient for the lock reduction. An SSG edge from the
summary node for an allocator A to the summary node for
allocator B indicates that objects allocated by A may point
to objects allocated by B. We need to know if each object
allocated by A points to a different object allocated by B;
only then would holding the lock of an A object protect a
variable access in the nested B object.
We can conservatively estimate this information when
SSGs are summarized and updated as follows. An edge
from the summary node for A to the summary node for
B is marked one-to-one if each A points to a different B at
run-time. If A and B are in the same loop block, then an
edge from some field of the summary node of A to the summary
node of B, when first added to an SSG by a summarize
operation, is marked one-to-one. If the field of the summary
node of A is subsequently updated by a step operation in
such a way that another edge to the summary node of B
would have been added, then the edge is no longer marked
one-to-one.
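The one-to-one bookkeeping can be sketched as follows (data structures and method names are ours): an edge first created by a summarize operation within one loop block starts out marked 1-1, and any later step update to the same field/target clears the mark.

```java
// Sketch of one-to-one edge marking between summary nodes.
import java.util.*;

public class OneToOne {
    private final Set<String> edges = new HashSet<>();   // e.g. "3:B*.a2->5:A*"
    private final Set<String> marked = new HashSet<>();  // edges marked 1-1

    // An edge first added by a summarize operation is 1-1 iff the source
    // and target allocators were summarized from the same loop block.
    public void addBySummarize(String field, String target, boolean sameLoopBlock) {
        String e = field + "->" + target;
        if (edges.add(e) && sameLoopBlock) marked.add(e);
    }

    // A step operation that (re)adds an edge into the same field/target
    // clears the mark: the relationship is no longer one-to-one.
    public void addByStep(String field, String target) {
        String e = field + "->" + target;
        edges.add(e);
        marked.remove(e);
    }

    public boolean isOneToOne(String field, String target) {
        return marked.contains(field + "->" + target);
    }
}
```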
This method is based on the observation that nested objects
are almost always allocated in the same loop block as
their enclosing object (often in the enclosing object's con-
structor). Given a constructor or loop body that allocates
an object, allocates one or more nested objects, and links
these objects together, the one-to-one relationships between
the objects are recorded in the SSG as arcs between the
current nodes of the allocators. When these nodes are summarized
at the end of the loop block, this information is
then preserved as annotations on the arcs between the summary
nodes. In fact, this is the motivation for distinguishing
3 Technically, the thread is started when its start() method is
called, but since we are not using any thread scheduling informa-
tion, assuming the thread starts when allocated produces the same
SSGs.
current from summarized instances/nodes.
5.5 Example
Consider the Java source in Figure 6. The first SSG in
Figure 6 is the post-SSG for statement 6 the first time it
is processed (i.e., before any summary information exists).
The second SSG is the result of summarizing this SSG. Note
that, since nodes 3:B and 5:A are summarized together, the
arc from field a2 of 3:B* to 5:A* is labeled as one-to-one
(1-1), but since 2:A is a current node, there is no one-to-one
relationship between field a1 of 3:B* and 2:A (nor would
there be if a loop were added around this code and 2:A was
summarized).
The last SSG is the final SSG for statement 9 (the end
of the method). After statement 7, the Thread allocated
there may have access to the A allocated by statement 2,
while after statement 8, the a1 field of some B may point to
some A allocated at statement 5. Note that stack variable
b is out of scope at statement 9 and thus can be removed
from the SSG. The arc from 2:A to 0:A is added by statement
0, which is placed on the worklist when statement 7
is processed. Although we have not shown the final SSGs
for statements 1-8, all these SSGs would contain this arc,
even though the reference value it represents cannot appear
until after statement 7; no thread scheduling information is
considered.
5.6 Complexity
Given a program with S statements and V variables and
allocators, our algorithm must construct S SSGs, each containing
O(V) nodes and up to O(V^2) edges. The running
time to process a statement is (at worst) proportional to the
total number of edges in all SSGs, as is the number of times
a statement can be processed before a fixpoint is reached.
Thus the worst case running time is O(S^2 V^4). Here, S is
the number of statements after inlining all procedure calls,
which could produce an exponential blowup in the number
of statements.
Despite this complexity, we do not anticipate the cost of
the reference analysis to be prohibitive. First, based on the
application of the algorithm to several small examples, we
believe the average complexity to be much lower. SSGs are
generally sparse; many edges in a typical SSG would violate
Java's type system and could not be generated by the
analysis. Also, very few edges are added to a statement's
SSG after it has been processed once, thus each statement is
typically processed only a few times. Second, S and V refer
to the number of modeled statements and variables; in a
typical analysis, only a fraction of the program will be mod-
eled. The reference analysis does not model variables having
primitive (i.e., non-reference) types, nor need it model
statements manipulating such variables exclusively. Also, a
program requirement might involve only a small subset of
the program's classes; the rest of the program need not be
represented.
6 Applying the Reductions
In this section, we explain how to use the information collected
by the reference analysis to apply the local variable
and lock reductions.
class A implements Runnable {
  A a3;
  void run() {
    0: a3 = new A();          // assignment target inferred from Section 5.5
  }
}
class B {                     // class name inferred from statement 3 below
  A a1;
  A a2;
  ... new A();                // remainder of class B elided in this extraction
}
class Main {
  static void main(...) {
    2: A a = new A();
    while (...) {
      3: B b = new B(a);      // b inferred from Section 5.5
      // inlined constructor
      4: ...                  // elided
      5: ... new A();         // partially elided
    }
    if (...)
      7: (new Thread(a)).start();
    else
      8: ...                  // elided
  }
}
[SSG diagrams elided: the post-SSG for statement 6 (before summary, first iteration), the SSG after summary, and the final SSG for statement 9; nodes shown include 0:A, 2:A, 3:B, 3:B*, 5:A, 5:A*, and 7:Thread, with edges through variables a and x and field a3]
Figure 6: Reference Analysis Example
6.1 Local Variable Reduction
Applying the local variable reduction is straightforward.
The set of heap nodes in an SSG that are local to a given
thread are those that are accessible only from stack or static
variables local to the thread. All heap variables are accessed
with expressions of the form ref.id where ref is a reference
expression and id is the name of the instance variable. The
variable accessed by such an expression is local to the thread
if the nodes in the r-value of ref are local to the thread in
the pre-SSG for the statement.
Note that heap variables may be local for some statements
and non-local for others. A common idiom is for an
object to be allocated, initialized, and then made available
to other threads (e.g., the IntBuffer object of the example
in Figure 1). The reference analysis can determine that the
instance variables of such an object are local until the object
is made available to other threads.
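The locality check itself reduces to graph reachability (a sketch; the representation is ours): a heap node is local to a thread if no other thread's stack or static roots can reach it.

```java
// Sketch: compute the heap nodes local to one thread as those reachable
// from its own roots but from no other thread's roots.
import java.util.*;

public class Locality {
    static Set<String> reach(Map<String, List<String>> edges, Set<String> roots) {
        Deque<String> work = new ArrayDeque<>(roots);
        Set<String> seen = new HashSet<>(roots);
        while (!work.isEmpty())
            for (String n : edges.getOrDefault(work.pop(), List.of()))
                if (seen.add(n)) work.add(n);
        return seen;
    }

    public static Set<String> localNodes(Map<String, List<String>> edges,
                                         Set<String> mine,
                                         List<Set<String>> others) {
        Set<String> local = new HashSet<>(reach(edges, mine));
        for (Set<String> o : others) local.removeAll(reach(edges, o));
        return local;
    }
}
```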
6.2 Lock Reduction
Applying the lock reduction is more complex. We need to
determine whether a variable is protected by a lock. In
general, the relationship between a variable and the lock
that protects it may be too elaborate to determine with
static analysis. Here, we propose a heuristic that we believe
is widely applicable and, in particular, works for the locking
design patterns given in [10]. The heuristic assumes that
the relationship between the object containing the variable
and the object containing the lock matches the following
general pattern: either the lock object is accessible from the
variable object, or vice versa, or both are accessible from a
third object, or the lock and variable are in the same object.
This pattern can be expressed in terms of three roles:
the root, the lock, and the variable. The lock object contains
the lock, the variable object contains the variable, and
from the root object, the other two objects are accessible.
Each role must be played by exactly one object, but one object
may play multiple roles. For the expression data[i] in
the bounded buffer example, the IntBuffer object is both
the root and the lock object, while the int array referenced
by data is the variable object. For the expression count,
the IntBuffer object plays all three roles. For the expression
salary in the splitting locks example of Figure 4, the
Programmer object plays the root and variable roles, while
the Object referenced by salaryLock plays the role of lock.
We consider all static variables to be fields of a special
environment object called env, which can play the roles of
variable and root, but not the role of lock. This generalizes
the pattern to include the case where the lock object or
the variable object are accessible from static variables, and
the case where the variable is static. Also, we fully qualify
all expressions by prepending this to expressions accessing
variables in the current instance, and by prepending env to
all static variable accesses.
For each static/heap variable, we want to determine
whether there exists a lock that protects the variable (i.e.,
any thread accessing the variable must be holding the lock).
Static variables are represented by variable nodes, heap variables
by fields of heap nodes, and locks by heap nodes in the
SSG. Essentially, we use the expressions accessing the variable
and lock to identify the lock object; we can interpret
the expressions (abstractly) using their SSGs.
Formally, for each static/heap variable v, we want to
compute Protect(v): the set of locks protecting v. For each
such v, let Access(v) be the set of program expressions that
may access v; these sets can be constructed during the reference
analysis. For each expression Ev in Access(v), we
compute Protect(v, Ev): the set of locks the thread is holding
at Ev that protect v. Since a lock must protect a variable
everywhere, Protect(v) is the intersection of Protect(v, Ev)
over all Ev in Access(v).
If the lock is a summary node, then the variable must be
a field of a summary node; the interpretation is that each
variable object is protected by a unique lock object.
Given an expression Ev accessing v, we compute
Protect(v, Ev) as follows. An expression Eℓ is a lock expression
at Ev if it is the argument to some enclosing synchronized
statement. For each Eℓ, we define a triple (Er, Sℓ, Sv): Er
is the root expression, which is the common prefix of Eℓ
and Ev; Sℓ is the lock selector, which is the part of Eℓ not in
Er; and Sv is the variable selector, which is the part of Ev
not in Er and with the final selector removed (i.e., Er.Sv is a
reference to the object containing v, not v itself). For example,
consider the expression hours in method updateHours in
Figure 4. The fully qualified expression 4 accessing the variable
is this.hours, a lock expression is this.hoursLock,
and this pair yields the triple (this, hoursLock, ε), where ε
denotes the empty selector. Note that Sℓ = ε indicates the lock
and root objects are the same, while Sv = ε indicates that the
variable and root objects are the same.
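The decomposition into (Er, Sℓ, Sv) is a longest-common-prefix computation over selector paths; the following sketch (names and representation ours) reproduces the hours/hoursLock example from the text.

```java
// Sketch: split a variable expression and a lock expression into root
// expression, lock selector, and variable selector.
import java.util.*;

public class Triple {
    // ev and el are selector paths, e.g. ["this","hours"] for this.hours.
    // Returns { Er, Sl, Sv } joined with '.'; "" denotes the empty selector.
    public static String[] decompose(List<String> ev, List<String> el) {
        int i = 0;
        while (i < ev.size() && i < el.size() && ev.get(i).equals(el.get(i)))
            i++;
        String er = String.join(".", ev.subList(0, i));          // root expr
        String sl = String.join(".", el.subList(i, el.size()));  // lock selector
        List<String> rest = ev.subList(i, ev.size());            // rest of Ev
        String sv = rest.isEmpty() ? ""                          // drop final selector
                  : String.join(".", rest.subList(0, rest.size() - 1));
        return new String[] { er, sl, sv };
    }
}
```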
Given Ev and (Er, Sℓ, Sv), we identify a candidate lock ℓ
in the SSG as follows. For an SSG node n and a selector S,
n.S is the set of nodes reached from n by following S, while
{n | v is a field of an object in n.S}
is the set of SSG nodes such that applying selector S to these
nodes may reach the object containing variable v. First, in
the pre-SSG for Ev, we compute the set of possible root
objects for Ev's access to v:
R = {n in r-value(Er) | v is a field of an object in n.Sv}.
If R contains exactly one node, then this node is the candidate
root r and we compute the set of possible locks L = r.Sℓ
in the pre-SSG of Eℓ. If L contains exactly one node ℓ,
then this node is the candidate lock.
We include ℓ in Protect(v, Ev) if we can deduce from the
SSGs that, for each instance of v at run-time, there is a
unique instance of ℓ held by the thread. Note that this does
not follow immediately since r, ℓ, and the SSG nodes on the
paths from r to ℓ and from r to v might represent multiple
objects at run-time. Nevertheless, we can still conclude that
for each variable represented by v at run-time there is a
unique lock represented by ℓ if both of the following are
true:
1. For each variable represented by v at run-time, there
is a unique root object represented by r. This holds
provided that r is a current node, or all arcs on
the path selected by Sv are one-to-one arcs between
summary nodes.
2. For each root object represented by r at run-time,
there is a unique lock object represented by ℓ. This
holds provided that ℓ is a current node, or
all arcs on the path selected by Sℓ are one-to-one arcs
between summary nodes.
4 In our analysis, the method will have been inlined and the this
variable replaced with a new temporary holding this implicit parameter.
In addition, a simple propagation analysis can be used to allow
recognition of the pattern even if multiple selectors are decomposed
into a series of assignments (e.g., x.f.g expressed as ...).
A variable v is protected if Protect(v) is nonempty. A transformation
may be made invisible if it is not f-observable
and all variables it might access are protected or local.
Note that inaccuracy in the reference analysis leads to
a larger model, not an incorrect model. If we cannot determine
that a variable is local or protected by a lock, then
a transformation accessing that variable will be visible; the
transition system will have more states, but will still represent
all behaviors of the (possibly restricted) program.
6.3 Example
Consider the bounded buffer example in Figure 1. The SSGs
for all statements in the producer and consumer run() methods
are isomorphic to the heap structure shown at the top of
the figure (there would also be nodes for the stack variables).
From these SSGs, we can deduce that variable next in the
Producer object is local to the producer thread. Thus, for a
formula f that does not depend on next, the transformation
incrementing next may be invisible.
Also in the bounded buffer example, the variable
data[.] in the array object and all the instance variables
of the IntBuffer class are protected by the lock of
the IntBuffer object. Thus, for a formula that does not
depend on these variables, the sequence of transformations
representing the methods put() and get() may be combined
into two transformations: one to acquire the lock, the
other to update the variables and release the lock.
Although a complete program is not shown for the splitting
locks example of Figure 4, each allocator for Programmer
would produce an SSG subgraph isomorphic to the heap
structure shown at the top of the figure. The arcs from
a summary Programmer node to its Object nodes would be
one-to-one. The analysis could determine that each instance
variable hours is protected by the Object accessible via field
hoursLock.
7 Conclusion
We have proposed a method for using static pointer analysis
to reduce the size of finite-state models of concurrent
Java programs. Our method exploits two common design
patterns in Java code: data accessible by only one thread,
and encapsulated data protected by a lock.
The process of extracting models from source code must,
to some degree, be dependent on the source language. Although
our presentation was restricted to Java, many aspects
of our method are more widely applicable and could
be used to reduce finite-state models of programs with heap
data and/or a monitor-like synchronization primitive (e.g.,
Ada's protected types).
The method is currently being implemented as part of a
tool intended to provide automated support for extracting
finite-state models from Java source code. Although we have
no empirical data on the method's performance at this time,
the effectiveness of virtual coarsening for reducing concurrency
models is well known, and the manual application of
the method to several small examples suggests that many
transitions can be made invisible for a typical formula.
With the arrival of Java, concurrent programming has
entered the mainstream. Finite-state verification technology
offers a powerful means to find concurrency errors, which
are often subtle and difficult to reproduce. Unfortunately,
extracting the finite-state model of a program required by
existing verifiers is tedious and error-prone. As a result,
widespread use of this technology is unlikely until the extraction
of compact mathematical models from real software
artifacts is largely automated. Methods like the one
described here will be essential to support such extraction.
Acknowledgements
Thanks are due to George Avrunin for helpful comments on
a draft of this paper.
--R
Formalization of properties of parallel programs.
Automated analysis of concurrent systems with the constrained expression toolset.
Compositional verification by model checking for counter examples.
Analysis of pointers and structures.
Timing analysis of Ada tasking pro- grams
Protocol verification as a hardware design aid.
Using state space reduction methods for deadlock analysis in Ada tasking.
Data flow analysis for verifying properties of concurrent programs.
Elements of style: Analyzing a software design feature with a counterexample detector.
Concurrent Programming in Java: Design Principles and Patterns.
Checking that finite state concurrent programs satisfy their linear specifica- tions
Static infinite wait anomaly detection in polynomial time.
--TR
Integrated concurrency analysis in a software development enviornment
Analysis of pointers and structures
Automated Analysis of Concurrent Systems with the Constrained Expression Toolset
Using state space reduction methods for deadlock analysis in Ada tasking
Data flow analysis for verifying properties of concurrent programs
Compositional verification by model checking for counter-examples
Elements of style
Timing Analysis of Ada Tasking Programs
Checking that finite state concurrent programs satisfy their linear specification
Concurrent Programming in Java
Protocol Verification as a Hardware Design Aid
--CTR
James C. Corbett , Matthew B. Dwyer , John Hatcliff , Shawn Laubach , Corina S. Psreanu , Robby , Hongjun Zheng, Bandera: extracting finite-state models from Java source code, Proceedings of the 22nd international conference on Software engineering, p.439-448, June 04-11, 2000, Limerick, Ireland
Gleb Naumovich , George S. Avrunin , Lori A. Clarke, Data flow analysis for checking properties of concurrent Java programs, Proceedings of the 21st international conference on Software engineering, p.399-410, May 16-22, 1999, Los Angeles, California, United States
Klaus Havelund , Mike Lowry , John Penix, Formal Analysis of a Space-Craft Controller Using SPIN, IEEE Transactions on Software Engineering, v.27 n.8, p.749-765, August 2001
Gleb Naumovich , George S. Avrunin , Lori A. Clarke, An efficient algorithm for computing
MHP
Jonathan Aldrich , Emin Gn Sirer , Craig Chambers , Susan J. Eggers, Comprehensive synchronization elimination for Java, Science of Computer Programming, v.47 n.2-3, p.91-120, May
Jong-Deok Choi , Manish Gupta , Mauricio Serrano , Vugranam C. Sreedhar , Sam Midkiff, Escape analysis for Java, ACM SIGPLAN Notices, v.34 n.10, p.1-19, Oct. 1999
Pramod V. Koppol , Richard H. Carver , Kuo-Chung Tai, Incremental Integration Testing of Concurrent Programs, IEEE Transactions on Software Engineering, v.28 n.6, p.607-623, June 2002
James C. Corbett, Using shape analysis to reduce finite-state models of concurrent Java programs, ACM Transactions on Software Engineering and Methodology (TOSEM), v.9 n.1, p.51-93, Jan. 2000
John Whaley , Martin Rinard, Compositional pointer and escape analysis for Java programs, ACM SIGPLAN Notices, v.34 n.10, p.187-206, Oct. 1999
Jong-Deok Choi , Manish Gupta , Mauricio J. Serrano , Vugranam C. Sreedhar , Samuel P. Midkiff, Stack allocation and synchronization optimizations for Java using escape analysis, ACM Transactions on Programming Languages and Systems (TOPLAS), v.25 n.6, p.876-910, November
Premkumar T. Devanbu , Stuart Stubblebine, Software engineering for security: a roadmap, Proceedings of the Conference on The Future of Software Engineering, p.227-239, June 04-11, 2000, Limerick, Ireland
John Penix , Willem Visser , Seungjoon Park , Corina Pasareanu , Eric Engstrom , Aaron Larson , Nicholas Weininger, Verifying Time Partitioning in the DEOS Scheduling Kernel, Formal Methods in System Design, v.26 n.2, p.103-135, March 2005 | model extraction;finite-state verification;static analysis |
271798 | Improving efficiency of symbolic model checking for state-based system requirements. | We present various techniques for improving the time and space efficiency of symbolic model checking for system requirements specified as synchronous finite state machines. We used these techniques in our analysis of the system requirements specification of TCAS II, a complex aircraft collision avoidance system. They together reduce the time and space complexities by orders of magnitude, making feasible some analysis that was previously intractable. The TCAS II requirements were written in RSML, a dialect of state-charts. | Introduction
Formal verification based on state exploration can be considered
an extreme form of simulation: every possible behavior
of the system is checked for correctness. Symbolic model
checking [?] using binary decision diagrams (BDDs) [?] is
an efficient state-exploration technique for finite state sys-
tems; it has been successful on verifying (and falsifying)
many industry-scale hardware systems. Its application to
non-trivial software or process-control systems is far less
mature, but is increasingly promising [?, ?, ?, ?]. For ex-
ample, we obtained encouraging results from applying symbolic
model checking to a portion of a preliminary version
of the system requirements specification of TCAS II, a complex
software avionics system for collision avoidance [?].
The full requirements, comprising about four hundred pages,
were written in the Requirements State Machine Language
(RSML) [?], a hierarchical state-machine language variant
of statecharts [?].

This work was supported in part by National Science
Foundation grant CCR-970670. W. Chan was supported in
part by a Microsoft graduate fellowship.
By representing state sets and relations implicitly as BDDs
for symbolic model checking, the sheer number of reachable
states is no longer the obstacle to analysis. Instead, the limitation
is the size of the BDDs, which depend on the structure
of the system analyzed. Considerable effort on hardware
formal verification has been focused on controlling the BDD
size for typical circuits. However, transferring this technology
to new domains may require alternative techniques and
heuristics to combat the BDD-blowup problem. In this pa-
per, we present modifications to the algorithms implemented
in a symbolic model checker (SMV [?]), modifications to
the model, as well as a simple abstraction technique, to improve
the time and space efficiency of the TCAS II analy-
sis. Experimental results show that the techniques together
reduce the time and space complexities by orders of magnitude;
these improvements have made feasible some analysis
that was previously intractable.
The specific techniques we discuss in the paper are:
• Short-circuiting to reduce the number of BDDs generated
by stopping the iterations before a fixed point is
reached.
• Managing forward and backward traversals, to reduce
the size of the BDD generated at each iteration. Notably,
we improve backward traversals by making certain invariants
(in particular, that some events are mutually ex-
clusive) explicit in the search.
• More sophisticated conjunctive partitioning of the transition
relation and applying disjunctive partitioning in
an unusual way, to reduce the size of the intermediate
BDDs at each iteration. Further improvements were
made by combining the two techniques to obtain DNF
partitioning.
• Abstraction to decrease the number of BDD variables.
Given a property to check, we perform a simple dependency
analysis to generate a reduced model that is guaranteed
to give the same results as with the full model.
Techniques like short-circuiting and abstraction are conceptually
straightforward and applicable to many systems. Most
other techniques were designed to exploit the simple synchronization
patterns of TCAS II (for example, most events
are mutually exclusive, and most state machines are not enabled
simultaneously), and we believe they can also help analyze
other statecharts machines with simple synchronization
patterns.
We provide experimental results showing how each of these
techniques affected the performance of the TCAS II analysis.
Figure 1: An example of statecharts (two parallel state machines
A and B; transition labels shown include w[a]=x and w[¬a]=y)
The effects of combinations of the improvements are shown
in addition to the individual effects. We focus on reachability
problems, because most properties of TCAS II we were
interested in fall into this class. However, in principle, all of
the techniques should benefit general temporal-logic model
checking as well. We conclude the paper with discussion on
some related techniques.
Background
In this section, we give a brief overview of statecharts and
RSML. We then turn our attention to symbolic model check-
ing. Finally, we review how we applied symbolic model
checking to the TCAS II requirements.
2.1 RSML and Statecharts
The TCAS II requirements were written in RSML, a language
based on statecharts. Like other variants of statecharts,
RSML extends ordinary state-machine diagrams with state
hierarchies; that is, every state can contain orthogonal or mutually
exclusive child states. However, this feature does not
concern us in this paper (the state hierarchy in the portion of
TCAS II that we analyzed is shallow and does not incur special
difficulties in model checking). Instead, we can think of
the system as consisting of a number of parallel state machines,
communicating and executing in a synchronous way.
Figure 1 above gives a simple example with two parallel
state machines A and B. If A is in local state 0, we say that
the system is in state A.0. State machines are synchronized
using events. Arrows without sources indicate the start local
states. Other arrows represent local transitions, which
are labeled with the form u[c]=v where u is a trigger event,
c is the guarding condition and v is an action event. The
guarding condition is simply a predicate on local states of
other states machines and/or inputs to the system; for exam-
ple, a guarding condition may say that the system is in B.0
and an input altitude is at least 1 000 meters. (In RSML, the
guarding condition is specified separate from the diagram in
a tabular form called AND/OR table, but we use the simpler
statecharts notation instead.) The guarding condition and the
action are optional. The general idea is that, if event u occurs
and the guarding condition c either is absent or evaluates to
true, then the transition is enabled.
Initially some external events along with some (possibly nu-
meric) inputs from the environment arrive, marking the beginning
of a step. The events may enable some transitions
as described above. A maximal set of enabled transitions,
collectively called a microstep, is taken-the system leaves
the source local states, enters the target local states, and generates
the action events (if any). All events are broadcast to
the entire system, so these generated events may enable more
transitions. The events disappear after one microstep, unless
they are regenerated by other transitions. The step is finished
when no transitions are enabled. The semantics of RSML assume
the synchrony hypothesis: During a step, the values of
the inputs do not change and no new external events may ar-
rive; in other words, the system is assumed to be infinitely
faster than the environment.
In Figure 1, assume that w is the only external event, a is a
Boolean input, and the system is currently in A.0 and B.0.
When w arrives, if the input a is false, then the event y is
generated. The step is finished since no new transitions are
enabled. If instead a is true when w arrives, the transitions
from A.0 to A.1 and from B.0 to B.1 are simultaneously
taken and event x is generated, completing one microstep.
Then a second microstep starts; notice that because of the
synchrony hypothesis, the input a must be true as before and
the external event w cannot occur. So only the transition
from B.1 to B.2 is enabled and taken, generating event z
and finishing the step.
Subtle but important semantic differences exist among variants
of statecharts. The semantics of STATEMATE [?],
another major variant of statecharts, are close to those of
RSML. STATEMATE does not enforce the synchrony hypothesis
in the semantics, but provides it as an option in the
simulator. RSML and STATEMATE also have a richer set of
synchronization primitives and provide some sort of variable
assignments; however, these features are not important for
this paper.
2.2 Symbolic Model Checking
We now switch gears to discuss ordinary finite-state transition
systems (without state hierarchies, the synchrony hy-
pothesis, etc.) and model checking. The goal of model
checking is to determine whether a given state transition system
satisfies a property given as a temporal logic formula,
and if not, try to give a counterexample (a sequence of states
that falsifies the formula). Example properties include that a
(bad) state is never reached, and that a (good) state is always
reached infinitely often. In "explicit" model check-
ing, the answer is determined in a graph-theoretic manner
by traversing and labeling the vertices in the state graph [?].
The method is impractical for many large systems because of
the state explosion problem. Much more efficient for large
state spaces is symbolic model checking, in which the model
checker visits sets of states instead of individual states.
For illustration, we focus on the reachability problem, the
simplest and the most common kind of temporal property
checked in practice. Let Q be the finite set of system states,
Q the state transition relation, I ' Q the set of initial
states, and E ' Q a set of error states. The reachability
problem asks whether the system always stays away from the
error states E, and if not, demands a counterexample, that is,
a sequence of states q_0, q_1, ..., q_m with q_0 ∈ I, q_m ∈ E,
and each (q_i, q_{i+1}) ∈ R.
Start with Y_0 = E; iteratively compute Y_{i+1} = Y_i ∪ Pre(Y_i)
until reaching a fixed point.

Figure 2: An algorithm for computing Pre*(E)
1. Let Q_0 be any nonempty subset of Pre*(E) ∩ I. Iteratively
compute Q_{i+1} = Post(Q_i) until reaching E.

2. Start with some q_m ∈ Q_m ∩ E and iteratively pick some
q_i ∈ Q_i ∩ Pre({q_{i+1}}) to obtain a counterexample q_0, ..., q_m.

Figure 3: An algorithm for counterexample search

We define Pre : 2^Q → 2^Q to compute the pre-image (or the
weakest pre-condition) of a set of states under the transition
relation R: Pre(S) = { q ∈ Q | ∃ q' ∈ S . (q, q') ∈ R }.
Intuitively, it is the set of states that may reach some state
in S in one transition. Then we can characterize the decision
problem of reachability in a set-theoretic manner using
fixed points: Determine whether I ∩ Pre*(E) is empty, where
Pre*(E) is the set of states that may eventually reach an error
state. More specifically, it is the smallest state set Y that
satisfies Y = E ∪ Pre(Y).
Its existence is guaranteed by the finiteness of Q and the
monotonicity of Pre. Figure 2 shows an iterative algorithm
for computing this fixed point. The set Y_i contains the states
that may reach an error state in at most i transitions. Many other
temporal properties can be similarly defined and computed
using (possibly multiple or nested) fixed points [?].
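As a concrete illustration of the Figure 2 iteration, the fixed point can be sketched with explicit state sets standing in for BDDs; the function and variable names below are illustrative, not part of SMV:

```python
def pre(R, S):
    """States that can reach some state in S in one transition of R."""
    return {q for (q, q2) in R if q2 in S}

def pre_star(R, E):
    """Smallest Y with Y = E ∪ Pre(Y): Y_0 = E, Y_{i+1} = Y_i ∪ Pre(Y_i)."""
    Y = set(E)
    while True:
        nxt = Y | pre(R, Y)
        if nxt == Y:          # fixed point reached
            return Y
        Y = nxt

# 0 -> 1 -> 2 (error); state 3 cannot reach the error state.
assert pre_star({(0, 1), (1, 2), (3, 3)}, {2}) == {0, 1, 2}
```

A real symbolic checker performs the same iteration on BDDs rather than on enumerated sets.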
If the intersection of Pre*(E) and the initial states I is empty,
then the set E is not reachable and we are done. Otherwise,
we would like to find a counterexample. We first define
Post : 2^Q → 2^Q to compute post-images:
Post(S) = { q' ∈ Q | ∃ q ∈ S . (q, q') ∈ R }.
In other words, Post(S) is the set of states reachable from S
in one transition. Figure 3 shows a counterexample search
algorithm. The set Q_0 can be any nonempty subset of the
intersection, but it is convenient to choose Q_0 to be an arbitrary
singleton set. The set Q_i contains the states that are reachable
from Q_0 in at most i transitions. We obtain a counterexample
by tracing backward from Q_m ∩ E. (We will improve this
algorithm later.)
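The two-phase search of Figure 3 can be sketched the same way, again with explicit sets in place of BDDs (names are illustrative):

```python
def pre(R, S):
    return {q for (q, q2) in R if q2 in S}

def post(R, S):
    return {q2 for (q, q2) in R if q in S}

def counterexample(R, I, E, pre_star_E):
    """Forward layers Q_0, Q_1, ... from one initial state in
    Pre*(E) ∩ I, then a backward walk through the layers."""
    layers = [{next(iter(pre_star_E & I))}]        # Q_0: a singleton
    while not (layers[-1] & E):
        layers.append(post(R, layers[-1]))         # Q_{i+1} = Post(Q_i)
    path = [next(iter(layers[-1] & E))]            # some q_m in Q_m ∩ E
    for Q in reversed(layers[:-1]):
        path.insert(0, next(iter(Q & pre(R, {path[0]}))))
    return path

assert counterexample({(0, 1), (1, 2)}, {0}, {2}, {0, 1, 2}) == [0, 1, 2]
```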
The crucial factor for efficiency is the representation for state
sets. Notice that the state space Q can be represented by
a finite set of variables X , such that each state in Q corresponds
to a valuation for the variables and no two states correspond
to the same valuation. For finite state systems, we
can assume without loss of generality that each variable is
Boolean. A set of states S is then symbolically represented
as a Boolean function S(X) such that a state is in the set if
and only if it makes the function true. The transition relation
on states can be similarly represented as a Boolean function
R(X, X'), where X' is a copy of X and represents the next
state. Intersection, union and complementation on sets or
relations respectively becomes conjunction, disjunction and
negation on Boolean functions. Now the problem of representation
of state sets is reduced to that of Boolean functions.
Empirically, the most efficient representation for Boolean
functions is BDDs [?]. They are canonical, with efficient implementation
for Boolean operations. For example, the time
and space complexities of computing the conjunction or disjunction
of two BDDs are at most the product of their sizes;
usually the complexities observed in practice are still lower.
Negation and equivalence checking can be done in constant
time. BDDs are often succinct, but this relies critically on a
chosen linear variable order of the variables in X .
We can now represent a state set S and the transition relation
R as BDDs and compute the pre-image and post-image
of S as follows: Pre(S)(X) = ∃X'. R(X, X') ∧ S(X'), and
symmetrically Post(S) = (∃X. R(X, X') ∧ S(X)) with the
variables X' then renamed back to X.
The notation ∃X refers to existentially quantifying out all
the variables in X . In addition to Boolean operations and
equivalence checking, operations like existential quantification
and variable substitution can also be performed, so the
algorithms in Figures 2 and 3 (and similar algorithms for
many temporal logics such as CTL [?]) can be implemented
using BDDs. Thanks to the succinctness of BDDs and the
efficiency of their algorithms, some systems with over 10^120
states can be analyzed [?].
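The relational product Pre(S)(X) = ∃X'. R(X, X') ∧ S(X') can be illustrated with ordinary Boolean predicates and brute-force enumeration standing in for the BDD operations; this is a toy stand-in, not how SMV computes images:

```python
from itertools import product

def pre_image(R, S, n):
    """Return the predicate λX. ∃X'. R(X, X') ∧ S(X'), where X ranges
    over n-bit state vectors; ∃X' is done by enumerating all 2^n vectors."""
    return lambda X: any(R(X, X2) and S(X2)
                         for X2 in product((0, 1), repeat=n))

# One-bit system that toggles its bit each transition; error: bit == 1.
R = lambda X, X2: X2[0] == 1 - X[0]
S = lambda X: X[0] == 1
pre_S = pre_image(R, S, n=1)
assert pre_S((0,)) and not pre_S((1,))
```

A BDD-based engine computes the same conjunction-then-quantification symbolically, which is what keeps the cost independent of the number of states.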
2.3 Symbolic Model Checking for TCAS II
We analyzed the TCAS II requirements using a symbolic
model checker SMV (Version 2.4.4). SMV uses algorithms
similar to those in Figures 2 and 3. A notable difference is
that in Figure 2, instead of computing Y_{i+1} = Y_i ∪ Pre(Y_i),
SMV uses the equivalent recurrence Y_{i+1} = Y_i ∪ Pre(Y_i − Y_{i−1}),
with the advantage that Y_i − Y_{i−1} usually requires a much
smaller BDD than Y_i does, resulting in faster pre-image
computation. (In fact, it is sufficient to compute the pre-image
of any Z with Y_i − Y_{i−1} ⊆ Z ⊆ Y_i.) Similar remarks apply
to the computation of each Q_i in Figure 3.
Because SMV does not support hierarchical states and other
RSML features directly, we had to translate the requirements
into an ordinary finite-state transition system in the
SMV language. The requirements consist of two main parts,
Own-Aircraft and Other-Aircraft, which occupy about 30%
and 70% of the document respectively. In our initial study,
we translated Own-Aircraft quite faithfully to the SMV lan-
guage, and abstracted Other-Aircraft as a mostly nondeterministic
state machine. The details of the translation, including
how the transitions, the state hierarchy and the synchrony
hypothesis were handled, as well as the properties analyzed,
were given in a previous paper [?]. Certain details about the
system model are relevant to this paper:
• An RSML microstep corresponds to a transition in the
SMV program, and thus a step corresponds to a sequence
of transitions.
• We encode each RSML event as a Boolean variable,
which is true if and only if the event has just occurred.
• We assume each numeric input to be discrete and
bounded, and encode each bit as a Boolean variable.
• To maintain the synchrony hypothesis, we prevent the
inputs from changing and the external events from arriving
when some of the variables that encode events
are true.
• We analyze one instance of TCAS II only, so the asynchrony
among multiple instances of the system is not an
issue.
A major source of complexity of the analysis was the tran-
sitions' guarding conditions, some of which occupy many
pages of description. They contain predicates of local states
and of the input variables, and may involve complex arith-
metic. While many other researchers conservatively encode
each arithmetic predicate as an independent Boolean variable
[?,?, ?], we encode each input bit as a Boolean variable,
resulting in more accurate analysis at the expense of more
Boolean variables. In addition, a guarding condition can refer
to any part of the system, so the interdependencies between
the BDD variables are high. These all imply relatively
large BDDs for guarding conditions.
On the plus side, the control flow of Own-Aircraft is simple,
and concurrency among the state machines in Own-Aircraft
is minimal. As we will see, some of the techniques presented
later attempt to exploit these simple synchronization
patterns.
Short-Circuiting
It is easy to see that in Figure 2, we do not need to compute a
fixed point when the error states are reachable-we can stop
once the intersection of some Y_i and I is not empty, because
all we need is an element in the intersection. This short-circuiting
technique may substantially reduce the time and
space used when a short counterexample exists.
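A sketch of the short-circuited backward search, again over explicit sets for illustration: the loop returns as soon as some Y_i meets I, rather than continuing to the fixed point:

```python
def pre(R, S):
    return {q for (q, q2) in R if q2 in S}

def reaches_error(R, I, E):
    """Backward search with short-circuiting: True as soon as some
    Y_i intersects I, without computing the full fixed point."""
    Y = set(E)
    while True:
        if Y & I:
            return True       # short-circuit: a counterexample exists
        nxt = Y | pre(R, Y)
        if nxt == Y:
            return False      # fixed point: E unreachable from I
        Y = nxt

R = {(0, 1), (1, 2), (2, 3), (3, 4)}
assert reaches_error(R, {0}, {4}) and not reaches_error(R, {0}, {9})
```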
More generally, short-circuiting can be applied to the outermost
temporal operator in temporal-logic model checking
(however, the reduction obtained is probably less than
in reachability analysis, because only one of the many fixed
points can be stopped prematurely.)

Start with some q_0 ∈ Y_m ∩ I; iteratively pick some
q_{i+1} ∈ Y_{m−i−1} ∩ Post({q_i}) to obtain a counterexample
q_0, q_1, ..., q_m.

Figure 4: A simplified algorithm for counterexample search
4 Forward vs. Backward Traversals
Fixed-point computation or counterexample search can be
done either forward or backward. In this section we elaborate
on their performance difference in our analysis. In short,
backward traversals generate smaller BDDs and are a big win
for our system. They can be further improved by incorporating
certain invariants to prune the searches.
4.1 Improved Counterexample Search
During the analysis of TCAS II, we found that when a property
was disproved in a few minutes, finding a counterexample
might take hours. A coauthor of a previous paper subsequently
simplified the counterexample search algorithm, resulting
in substantial speedup [?]. This is the only technique
described here that was used in that study.
The forward traversal in the first part of Figure 3 is the
bottleneck. For our system, the sequence of post-images requires
large BDDs. However, we can eliminate this step if
we remember every Y_i computed in Figure 2 (our actual
implementation stores the difference Y_i − Y_{i−1} instead).
Our modification, illustrated in Figure 4, is by no means
innovative and should be considered natural. A disadvantage
of the algorithm is the use of additional memory to store
the state sets, which is wasted in case the error states are not
reachable. Nevertheless, the dramatic speedup made possible
far outweighs the modest additional memory requirements
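The improved search can be sketched as follows: the backward pass stores every frontier, and the counterexample is then traced forward using only post-images of single states (illustrative names; explicit sets in place of BDDs):

```python
def pre(R, S):
    return {q for (q, q2) in R if q2 in S}

def post(R, S):
    return {q2 for (q, q2) in R if q in S}

def backward_then_trace(R, I, E):
    """Backward fixed point that remembers every Y_i, then traces the
    counterexample forward with post-images of single states only."""
    layers = [set(E)]
    while not (layers[-1] & I):
        nxt = layers[-1] | pre(R, layers[-1])
        if nxt == layers[-1]:
            return None               # error states unreachable
        layers.append(nxt)
    path = [next(iter(layers[-1] & I))]
    for Y in reversed(layers[:-1]):
        if path[-1] in E:             # already reached an error state
            break
        path.append(next(iter(Y & post(R, {path[-1]}))))
    return path

assert backward_then_trace({(0, 1), (1, 2)}, {0}, {2}) == [0, 1, 2]
```

The stored layers are exactly the extra memory the text refers to; no bulk forward image over large state sets is ever taken.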
An important question remains: Why is the backward traversal
in Figure 4 much more efficient than the forward traversal
in Figure 3? The inefficiency of forward traversals is
also witnessed by SMV's inability to compute the set of
reachable states of the system. Finding the reachable state
set by searching forward from the initial states is a common
technique in hardware verification; the set can be used
to help analyze other temporal properties and synthesize the
circuit. (Indeed, if we search forward to find the reachable state
set, SMV can optionally use a similar counterexample search
algorithm, but it is not used with the default backward
traversal.)

Figure 5: A state machine with local invariants (transitions
labeled x[a]=y and x[b]=y)
A backward traversal often takes fewer iterations to reach a
fixed point than a forward traversal, because the set of error
states is usually more general than the set of initial states.
However, the problem here is not the number of iterations,
but rather, the size of the BDDs generated. In general, we observe
that in backward traversals, the BDDs usually have between
hundreds to at most tens of thousands of BDD nodes,
while in forward traversals, they can be two or more orders
of magnitude larger. Nevertheless, the verification of many
hardware systems tends to benefit, rather than suffer, from
forward traversals. For example, Iwashita et al. report significant
speedup in CTL model checking for their hardware
benchmarks when forward instead of backward traversals are
used [?].
Partly inspired by Hu and Dill [?], we believe that the inefficiency
is mainly due to the complex invariants of TCAS II,
which are maintained by forward but not backward traversals.
As an example, consider the state machine in Figure 5.
If event y is only generated in A, then an invariant of the system
is that, whenever event y has just occurred, the machine
is in A.0 if and only if condition a is true. If the BDD for
a is large, so will the BDD for the invariant. Even if they
are small, there are likely to be many such implicit invariants
in the system, and their conjunction may have a large
BDD representation. In addition, invariants may globally
relate different state machines, also likely to result in large
BDDs. Forward traversals maintain all such invariants, so
intuitively the BDDs for forward traversals tend to blow up
in size. In low-level hardware verification, the BDDs often
remain small, because each invariant is usually localized and
involves only a small number of state variables. This is however
not the case in TCAS II.
For backward traversals, the situation is quite different. For
example, there are no counterparts of the invariant mentioned
above when backward traversals are used, because the truth
value of a does not imply the state of the system before the
microstep. Certainly, some different (backward) invariants
are maintained in backward traversals, but they tend to depend
on the states from which the search starts, and their
BDDs tend to be smaller for our system.
4.2 Improved Backward Traversals Using Invariants
Interestingly, the main disadvantage of backward traversals
is also that (forward) invariants are not maintained. Some in-
variants, particularly those with small BDDs, can help simplify
the BDDs of state sets, and can speed up backward
traversals if they are incorporated into the search. In the
context of statecharts, many systems have simple synchronization
patterns, which are lost in backward traversals. A
particular invariant that we find useful to rectify this prob-
PSfrag replacements
A
u[a]=v
v[b]=w
w[c]=x
Figure
system with a linear structure
lem is the mutual exclusion of certain events. We illustrate
this idea with an example.
Consider the system in Figure 6. Assuming u is the only external
event, there is no concurrency in the system: at most
one local transition can be enabled at any time. Forward
traversals do not explore concurrent executions of the state
machines.
However, in backward traversals, the analysis may be fooled
to consider many concurrent executions, which are not
reachable. Suppose we want to check whether the system
can be in B.1 and C.1 simultaneously. Traversing back-
ward, we find that in the previous microstep, the system may
be in (B.0;C.1), (B.1;C.0), or (B.0;C.0). The last
case, however, is not possible, because events v and w cannot
occur at the same time. (Notice that this is true only if we
assume the synchrony hypothesis.) Tracing more iterations,
we can see that the search considers not only concurrent executions
but also many unreachable interleaving ones. The
BDDs thus may blow up if the guarding conditions are complex
Fortunately, we can greatly simplify the search by observing
that all the events are mutually exclusive. This invariant can
be incorporated into the traversals by either intersecting it
with the state sets or using it as a care-set to simplify the
BDDs [?].
To find out such a set of mutually exclusive events, we may
perform a conservative static analysis on the causality of the
events. Alternatively, the designer may know which events
are mutually exclusive, because the synchronization patterns
should have been designed under careful consideration. To
confirm the mutual exclusion, we may verify, using model
checking or other static analysis techniques, that the states in
which two or more of the variables in S are true
are not reachable, where S is the set of state variables encoding
the events under consideration. In the case of TCAS II,
a large part of our model behaves similarly to the machine
in Figure 6, and the set of mutually exclusive events was
evident.
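One simple way to incorporate such an invariant, sketched here with explicit sets, is to intersect every backward frontier with the invariant so that spurious "concurrent" predecessors are pruned immediately; the states and event encoding below are hypothetical:

```python
def pre(R, S):
    return {q for (q, q2) in R if q2 in S}

def pre_star_with_invariant(R, E, inv):
    """Backward fixed point, pruning each frontier with a known forward
    invariant `inv` (here: at most one event variable is true)."""
    Y = {q for q in E if inv(q)}
    while True:
        nxt = Y | {q for q in pre(R, Y) if inv(q)}
        if nxt == Y:
            return Y
        Y = nxt

# States are (location, events) pairs; mutual exclusion of two events:
inv = lambda q: sum(q[1]) <= 1            # q[1] = (v_occurred, w_occurred)
R = {(("s0", (0, 0)), ("s1", (1, 0))),
     (("bogus", (1, 1)), ("s1", (1, 0)))}  # predecessor violating invariant
Y = pre_star_with_invariant(R, {("s1", (1, 0))}, inv)
assert ("s0", (0, 0)) in Y and ("bogus", (1, 1)) not in Y
```

With BDDs the same effect is obtained by conjoining the invariant with each frontier or by using it as a care-set.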
Partitioned Transition Relation
Apart from the BDD size for state sets, another bottleneck of
model checking is the BDD size for the transition relation,
which can be reduced by conjunctive or disjunctive partitioning
[?]. The former can be used naturally for TCAS II,
and we have modified SMV to partition the transition relation
more effectively. We also apply disjunctive partition-
ing, which is normally used only for asynchronous systems.
Combining the two techniques, we obtain DNF partitioning.
As we will see, the issues in this section are not only the
BDD size for the transition relation, but also the size of the
intermediate BDDs generated for each image computation.
5.1 Background
In this subsection, we review the idea of conjunctive and
disjunctive partitioning, described in Burch et al. The
transition relation R is sometimes given as a disjunction
D_1 ∨ D_2 ∨ ... ∨ D_n, and the BDD for R can be huge even
though each disjunct has a small BDD. So instead of computing
a monolithic BDD for R, we can keep the disjuncts
separate. The image computations can be easily modified
by distributing the existential quantification over the
disjunction. For pre-image computation, we thus have
Pre(S)(X) = ∨_i ∃X'. D_i(X, X') ∧ S(X').
So we can compute the pre-image without ever building the
BDD for R. Post-image computation is symmetric.
If, however, R is given as a conjunction C_1 ∧ C_2 ∧ ... ∧ C_n,
we can still keep the conjuncts separate as above, but image
computations become more complicated. The problem
is that existential quantification does not distribute over con-
junctions, so it appears that we have to compute the BDD for
R anyway before we can quantify out the variables. A trick
to avoid this is early quantification. Define X'_1, ..., X'_n
to be disjoint subsets of X' such that their union is X' and
the conjunct C_i does not depend on any variable
in X'_p for any p < i. Consider again pre-image computation.
We compute c_1 = ∃X'_1. C_1 ∧ S(X'), then
c_{i+1} = ∃X'_{i+1}. C_{i+1} ∧ c_i for i = 1, ..., n−1,
so that Pre(S) = c_n.
The intuition is to quantify out variables as early as
possible, and hope that each intermediate result c_i remains
small. The effectiveness of the procedure depends critically
on the choice and ordering of the conjuncts C_1, ..., C_n.
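The heart of choosing the sets X'_i is that a primed variable may be quantified out immediately after the last conjunct that mentions it. A small sketch of that schedule computation (conjunct supports are given explicitly; no BDDs involved):

```python
def quantification_schedule(supports):
    """supports[i]: set of primed variables conjunct C_i depends on.
    A variable is quantified right after the *last* conjunct mentioning
    it, the earliest point at which ∃ distributes over the product."""
    last = {}
    for i, sup in enumerate(supports):
        for v in sup:
            last[v] = i
    schedule = [set() for _ in supports]
    for v, i in last.items():
        schedule[i].add(v)
    return schedule

# Three conjuncts over primed variables x', y', z':
sched = quantification_schedule([{"x'"}, {"x'", "y'"}, {"z'"}])
assert sched == [set(), {"x'", "y'"}, {"z'"}]
```

Reordering the conjuncts changes which variables can be quantified early, which is exactly what the partitioning heuristics try to optimize.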
5.2 Determining Conjunctive Partition
We could not construct the monolithic BDD for the transition
relation R for our model of TCAS II in hours of CPU time,
but R is naturally specified as a conjunction, so we can use
conjunctive partitioning. Although SMV supports this fea-
ture, it determines the partition in a simplistic way: An SMV
program consists of a list of parallel assignments, whose
conjunction forms the transition relation. SMV constructs
the BDDs for all assignments, and incrementally builds their
conjunction in the (reverse) order they appear in the pro-
gram. In this process, whenever the BDD size exceeds a
user-specified threshold, it creates a new conjunct in the par-
tition. So the partition is solely determined by the syntax,
and no heuristic or semantic information is used.
To better determine the partition, we changed SMV to allow
the user to specify the partition manually. We also implemented
in SMV a variant of the heuristics by Geist and
Beer [?] and by Ranjan et al. [?] to automatically determine
the partition. The central idea behind the heuristics is to select
conjuncts that allow early quantification of more variables
while introducing fewer variables that cannot be quantified
out. Our implementation of the heuristics worked quite
well; the partitions generated compared favorably with, and
sometimes outperformed, the manual partitions that we tried.
5.3 Disjunctive Partitioning for Statecharts
Disjunctive partitioning is superior to conjunctive partitioning
in the sense that ordering the disjuncts is less critical,
and that each intermediate BDD is a function of X (instead of
X and X') and thus tends to be smaller. (Another advantage that
we have not exploited is the possibility of parallelizing the
image computation by constructing the intermediate BDDs
concurrently.)
Unfortunately, when the transition relation R is a conjunc-
tion, in general there are no simple methods for converting
it to a small set of small disjuncts. If we define a cover to be
a set of predicates a_1, ..., a_k whose disjunction a_1 ∨ ... ∨ a_k
is the tautology, then we can indeed disjunctively partition
R by distributing R over the cover: D_i = R ∧ a_i, so that
R = D_1 ∨ ... ∨ D_k.
But for most choices of covers, each D_i is still large.
For TCAS II and many other statecharts, however, we can
again exploit the mutual exclusion of certain events, say
u_1, ..., u_{j−1}, and define a cover a_1, ..., a_{j+1} based on them.

Figure 7: Event x triggers two state machines.

In other words, a_i (for i < j) corresponds to the states in which
only u_i has just occurred; a_j, to those in which none of the events
have; and a_{j+1}, to those in which at least two of the events have.
They clearly form a cover. We
made two observations. First, we can drop a_{j+1}, which is
a contradiction because of the mutual exclusion assumption.
Second, most of the parallel assignments in our SMV program
are guarded by conditions on the events; for example,
an assignment that models a state transition requires the occurrence
of the trigger event. If the event is, say, u_i,
then the BDD for the assignment is applicable only
to the disjunct D_i, and all the other disjuncts of the transition
relation are unaffected. So each disjunct may remain small.
Notice that to apply this technique, we have to find a set of
provably mutually exclusive events, which can be done as
described in Section 4.2.
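With mutually exclusive trigger events, the pre-image under the partitioned relation is just the union of per-event pre-images; a sketch with one explicit relation per event (illustrative names):

```python
def pre_image_disjunctive(disjuncts, S):
    """Pre(S) = ⋃_i { q | ∃q'. (q, q') ∈ D_i ∧ q' ∈ S }, with one
    small relation D_i per mutually exclusive trigger event."""
    result = set()
    for D in disjuncts.values():
        result |= {q for (q, q2) in D if q2 in S}
    return result

# One disjunct per event; each holds only that event's transitions.
disjuncts = {"u": {(0, 1)}, "v": {(1, 2)}}
assert pre_image_disjunctive(disjuncts, {2}) == {1}
assert pre_image_disjunctive(disjuncts, {1, 2}) == {0, 1}
```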
5.4 DNF Partitioning and Serialization
A disadvantage of partitioning R based on events is that the
sizes of the disjuncts are often skewed. In particular, if a
single event may trigger a number of complex transitions,
its corresponding disjunct could be large. Figure 7 shows
an example in which an event x triggers two state machines.
If all the guarding conditions are complex, the BDD for the
disjunct corresponding to x may be large.
One solution to this problem is to apply conjunctive partitioning
to large disjuncts, resulting in what we call DNF
partitioning. It uses both BDD size (as in conjunctive par-
titioning) and structural information (as in disjunctive parti-
tioning) to partition the transition relation, and may perform
better than relying on either alone.
Alternatively, we may serialize the complicated microstep
into cascading microsteps to reduce the BDD size. Figure 8
illustrates this idea. We have "inserted" a new event u after
x. Note that the resulting machine has more microsteps
in a step. So although this method is effective in reducing
the BDD size, it often increases the number of iterations to
reach a fixed point. Also, the transformation may not preserve
the behavior of the system and the property analyzed.
A sufficient condition is that the guarding conditions in the
Figure 8: The serialized machine
machine B do not refer to machine A's local states, x is mutually
exclusive with all other events, and we are checking
a reachability property that does not explicitly mention any
of the state machines, transitions or events involved in the
transformation.

6 Abstraction
In this section, we give a simple algorithm to remove part
of the system from the model that is guaranteed not to interfere
with the property being checked. For example, a state
machine may have a number of outputs (which may be local
states or events). If we are verifying only one of them, the
logic that produces other outputs may be abstracted away,
provided these outputs are not fed back to the system. The
abstraction obtained is exact with respect to the property, in
the sense that the particular property holds in the abstracted
model if and only if it holds in the original model.
6.1 Dependency Analysis
We determine the abstraction by a simple dependency analysis
on the statecharts description. Initially, only the local
states, events, transitions, or inputs that are explicitly mentioned
in the property are considered relevant to the analysis.
Then the following rules are applied recursively:
- If an event is relevant, then so are all the transitions that may generate the event.
- If a transition is relevant, then so are its trigger event, its source local state, and everything that appears in its guarding condition.
- If a local state is relevant, then so are all the transitions out of or into it, and so is its parent state in the state hierarchy.
These rules are repeated until a fixed point is reached. Es-
sentially, this is a search in the dependency graph, and the
time complexity is linear in the size of the graph. It should
be evident that everything not determined relevant by these
rules can be removed without affecting the analysis result.
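Since the rules amount to reachability in a dependency graph, the fixed point can be computed with a standard worklist. The following sketch assumes a hypothetical encoding in which deps maps each element to the elements it makes relevant:

```python
def relevant_elements(deps, seeds):
    """Fixed point of the relevance rules.  deps maps each element (an
    event, transition, local state, ...) to the elements it makes
    relevant, e.g. an event to the transitions that may generate it, a
    transition to its trigger, source state and guard components."""
    relevant = set(seeds)
    worklist = list(seeds)
    while worklist:
        elem = worklist.pop()
        for dep in deps.get(elem, ()):
            if dep not in relevant:
                relevant.add(dep)
                worklist.append(dep)
    return relevant

# Hypothetical dependencies: event e1 is generated by transition t1,
# which is triggered by e0, guarded by c0, and leaves local state sA;
# state s9 contributes to no output mentioned in the property.
deps = {"e1": ["t1"], "t1": ["e0", "c0", "sA"], "s9": ["t9"]}
print(sorted(relevant_elements(deps, {"e1"})))  # ['c0', 'e0', 'e1', 'sA', 't1']
```

The search visits each dependency edge at most once, so the running time is linear in the size of the dependency graph, as noted above.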
2 The same criterion can be applied to arbitrary CTL formulas, provided we do not use the next-time operator X,
which can count the number of microsteps. In other words,
under the assumptions, the transformation preserves equivalence
under stuttering bisimulation [?].
Figure 9: False dependency: Event y does not depend on any guarding condition.
6.2 False Dependency
Similar dependency analyses could also be performed by
model checkers (such as VIS [?]) on the Boolean model of
the statecharts machine. However, a straightforward implementation
would not be effective. The reason is that in the
model, an input would appear to depend on every event because
of the way we encoded the synchrony hypothesis (Sec-
tion ??). On the other hand, carrying out dependency analysis
on the high-level statecharts description does not fall prey
to such false dependencies.
Other forms of false dependencies are possible, however.
Suppose we are given the system in Figure ?? from the previous
section. From the syntax, the event u appears to depend
on both conditions a and a', but in fact it does not, because, regardless of the truth values of a and a', event u will be generated
as a result of event x.
To detect such false dependencies, one can check whether
the disjunction of the guarding conditions of the transitions
out of a local state with the same trigger and action events
is a tautology. This can sometimes be checked efficiently
using BDDs [?]. However, the syntax of RSML and STATEMATE
allows easy detection of most false dependencies of
this kind. Notice that the self-loops in Figure ?? are solely
for synchronization: they make sure that u is generated regardless of whether there has been a local state change. To improve
the visual presentation, RSML and STATEMATE allow
specifying the generation of such events separate from
the state diagram using identity transitions and static reactions
respectively. (Actually, their semantics are slightly different
from self-loops, but the distinctions are not important
here.)
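For a small number of Boolean conditions, the tautology check on the disjunction of guards can even be sketched by brute-force enumeration (a stand-in for the BDD-based check; the guard encoding is an illustrative assumption):

```python
from itertools import product

def is_tautology(guards, variables):
    """True iff the disjunction of the guards holds in every assignment.
    Each guard is a predicate over an assignment dict."""
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if not any(g(env) for g in guards):
            return False
    return True

# Guards [a] and [not a] on two transitions with the same trigger and
# action: their disjunction is a tautology, so the generated event does
# not actually depend on a.
print(is_tautology([lambda e: e["a"], lambda e: not e["a"]], ["a"]))  # True
print(is_tautology([lambda e: e["a"] and e["b"]], ["a", "b"]))        # False
```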
Some false dependencies are harder to detect automatically.
For example, maybe the guarding conditions involved do not
form a tautology, but in all reachable states, one of the guarding
conditions holds whenever the trigger event occurs. As
another example, in Figure ??, the event y does not depend
on any of the guarding conditions, because it is always generated
one or two microsteps after w. 3 In practice, the synchronization
of the system should be evident to the designer,
who may specify the suspected false dependencies in temporal
logic formulas, which can be verified using model check-
ing. If the results indeed show no real dependencies, this in-
3 However, if the next-time operator X is used, then y may
be considered conservatively to be dependent on a and b.
formation can be used in the dependency analysis to obtain a
smaller abstracted model of the system. In our TCAS II anal-
ysis, the synchronization of Own-Aircraft is simple enough
that false dependencies can be easily detected. However, this
method may be used for analyzing the rest of TCAS II or
other systems.
7 Experimental Results
The table above summarizes the results of applying the techniques
mentioned to our model of TCAS II. It shows the
resources (time in seconds and number of BDD nodes used
in thousands) for building the BDDs for the transition relation
R as well as the resources for evaluating six properties.
Note that the latter excludes the time spent on building the
transition relation or the resources for finding the counterex-
amples. The counterexample search took about one to two
seconds per state in the counterexample and was never a bottleneck
thanks to the algorithm in Figure ??. That algorithm
was used in all the checks, because without it, none of the
counterexamples could be found in less than one hour. The
table also shows the number of iterations needed to reach
fixed points and the length of the (shortest) counterexamples.
We performed the experiments on a Sun SPARCstation 10
with 128MB of main memory. Most successful checks used
less than 30MB of main memory.
Properties P1 through P4 refer to the properties Increase-Descent Inhibition, Function Consistency, Transition Consistency, and Output Agreement explained in a previous
paper [?]. Property P5 refers to an assertion in Britt [?,
p. 49] that Own-Aircraft should never be in two local
states Corrective-Climb.Yes and Corrective-Descend.Yes
simultaneously (comments in our version of the TCAS II re-
quirements, however, explicitly say that the two local states
are not mutually exclusive). Property P6 is somewhat con-
trived: It is simply the conjunction of P3 and P4. Since
searching simultaneously from two unrelated sets of states
tends to blow up the BDDs, checking this property provides
an easy way of scaling up the BDD size. It also mimics
checking properties involving a large part of the system. All six properties are reachability properties, and all are violated by the model.
An entry with - in the table indicates timeout after one hour.
We emphasize that the purpose of the data is to investigate
the general effects of the techniques on our model of
TCAS II. They are not for picking a clear winner among
the techniques, since the BDD algorithms are very sensitive
to the various parameters chosen and to the model analyzed.
Note also that the results shown here should not be compared
directly with our earlier results [?], because the models, the
parameters, and the model checking algorithms used were
different.
Full Model The first part of the table shows the results for
the full model with 227 Boolean state variables. Row 1 gives
the results for the base analysis: Two properties could not be
completed using the conjunctive partitioning as implemented
in SMV. (Actually, we implemented a small improvement
that was used in all results including the base analysis. As explained
in Section ??, an image computation step involves a
conjunction and an existential quantification. The two operations
can be carried out simultaneously to avoid building the
usually large conjunction explicitly [?]. SMV performs this
optimization except when conjunctive partitioning is used.
Full Model (227 variables)
  No. of fixpoint iterations (P1-P6): 24, 29, 29, 38, 26, 26
  Counterexample length (P1-P6): 15, 15, 11, 24, 17, 11

Mistranslated Model

Serialized Model (231 variables)
  No. of fixpoint iterations (P1-P6): 36, 41, 45, 54, 38, 38
  Counterexample length (P1-P6): 23, 23, 19, 36, 25, 19

Abstracted Models
  No. of variables (P1-P6): 142, 142, 150, 142, 150, 171
  No. of fixpoint iterations and counterexample lengths are identical to those of the full model.

SC: short-circuiting; MX: mutual exclusion of events; CP: improved conjunctive partitioning; DP: disjunctive partitioning.

Table 1: Resources used in the analysis. For building the BDDs for R and for each property P1-P6, time (in seconds) and BDD nodes (in thousands) are reported per combination of optimizations.
We simply changed SMV to do this optimization with conjunctive
partitioning.)
As expected, short-circuiting (SC) gave savings, because the
number of iterations needed became the length of the shortest
counterexample. Incorporating the mutual exclusion of
certain events into backward traversals (MX) generally gave
an order of magnitude time and space reduction. In addition,
we could now easily disprove P5 and P6 (in particular, Britt's
claim mentioned above is provably not true in our version
of the requirements). For improved conjunctive partitioning
(CP), as mentioned in Section ??, we used a heuristic to produce
a partition, which was effective in reducing time and
space used.
Disjunctive partitioning (DP), which must be combined
with mutual exclusion of events, appeared to be inefficient
(Row 5). The reason is that one of the disjuncts of the
transition relation was large, with over 10^5 BDD nodes, at
least an order of magnitude larger than other disjuncts; this
is reflected in the table by the large number of BDD nodes
needed to construct the transition relation. We conjunctively
partitioned large disjuncts, leading to DNF partitioning (in-
dicated on Row 7 by marking both CP and DP). It performs
marginally better than pure conjunctive partitioning in
terms of time, but the space requirements were consistently
lower. Combining it with the other two optimizations, we observed
orders of magnitude improvements in time and space
(Row 8).
Mistranslated Model To further illustrate the differences
between conjunctive and DNF partitioning, we looked at a
version of the model that contains a translation error from
the RSML machine to the SMV program. It was a real bug
we made early in the study, although we soon discovered
it by inspection. The mistake was omitting some self-loops
similar to those in Figure ??. BDDs for faulty systems are often
larger than those for the corrected versions, because bugs
tend to make the system behavior less "regular". Therefore,
investigating the performance of BDD algorithms on faulty
designs is meaningful.
Interestingly, the particular partition generated by the
heuristic performed poorly for this model (Row 10). DNF
partitioning continued to give significant time and space savings
(Row 11).
Serialized Model As mentioned above, the disjunctive
partition contains a disproportionally large BDD. We serialized
a microstep in the full model to break the large disjunct
into four BDDs of sizes about a hundred times smaller.
As expected, disjunctive partitioning now performed better
(Rows 5 vs. 14). However, since the number of microsteps
in a step increased, all partitioning techniques suffered from
the larger number of iterations needed to reach fixed points.
They all ended up performing about the same, with disjunctive
and DNF partitioning having the slight edge.
The data suggest that serializing the microstep in order to use
disjunctive partitioning is not advantageous for this model.
In general, we find the effects of serializing and its dual, collapsing
microsteps, difficult to predict. It represents a trade-off
between the complexity of image computations and the
number of search iterations.
Abstracted Models The last part of the table shows the
performance of analyzing the abstracted models obtained
by the dependency analysis in Section ??. The number of
variables abstracted away is quite large. Recall that in our
full model, we omitted most of the details in Other-Aircraft.
Many of the outputs of Own-Aircraft that are inputs to Other-
Aircraft thus become irrelevant, unless we explicitly mention
them in the property. This explains the relatively large reduction
obtained.
8 Discussion and Related Work
We first summarize some differences between symbolic
model checking for hardware circuits and for TCAS II. A
major focus of hardware verification is on concurrent systems
with complex control paths and often subtle concurrency
bugs, but their data paths are relatively simple. Forward
traversals usually perform much better, because the
BDDs tend to be small in their reachable state spaces. In
contrast, the major complexity of the TCAS II requirements
lies not in the concurrency among components, but in the
intricate influence of data values on the control paths. The
BDD for the transition relation tends to be huge and forward
traversals inefficient. Backward traversals usually perform
better by focusing on the property analyzed, and can be further
improved by exploiting the simple synchronization patterns.
Our method of pruning backward traversals using invariants
is similar in spirit to the work on hardware verification by
Cabodi et al., who propose doing an approximate forward
traversal to compute a superset of the reachable states, which
is then used to prune backward traversals [?]. Their method
is more automatic, while the invariants we suggest take advantage
of the simple synchronization of the system. They
also independently propose disjunctive partitioning for synchronous
circuits [?]. Their method requires the designer to
come up with a partition manually, and we again exploit mutually
exclusive events.
In work also independent of ours, Heimdahl and Whalen [?]
use a dependency analysis technique similar to the one described
in Section ??, but their motivation is to facilitate manual
review of the TCAS II requirements, rather than automatic
verification. As noted before, we gained relatively
large reduction because Other-Aircraft was not fully mod-
eled, and we suspect that in a complete system, the reduction
obtained by this exact analysis could be limited. However,
more reduction can be obtained if we forsake exactness. For
example, localization reduction [?] is one such technique,
which aggressively generates an abstracted model that may
not satisfy the property while the full model does. If the
model checker finds in the abstracted model a counterexample
that does not exist in the full model, it will automatically
refine the abstraction and iterate the process until either a
correct counterexample is found or the property is verified.
It would be interesting to see how well the techniques in this
paper scale with the system complexity. The natural way is
to try applying them to the rest of TCAS II. Unfortunately,
that part contains arithmetic operations, such as multiplica-
tion, that provably cannot be represented by small BDDs [?].
In a recent paper, we suggest coupling a decision procedure
for nonlinear arithmetic constraints with BDD-based model
checking to attack the problem [?]. More research is needed
to see whether this technique scales to large systems.
Acknowledgments
We thank Steve Burns, who observed the inefficiency of the
algorithm in Figure ?? and implemented the one in Figure ??
in SMV.
--R
Efficient implementation of a BDD package.
Case study: Applying formal methods to the Traffic Alert and Collision Avoidance System (TCAS) II.
Characterizing finite Kripke structures in propositional temporal logic.
Efficient state space pruning in symbolic backward traversal.
Combining constraint solving and symbolic model checking for a class of systems with non-linear con- straints
Automatic verification of finite-state concurrent systems using temporal logic specifications
Verification of the Futurebus
Verification of synchronous sequential machines based on symbolic execution.
Model checking graphical user interfaces using abstractions.
Statecharts: A visual formalism for complex systems.
The STATEMATE semantics of statecharts.
Completeness and consistency analysis of state-based require- ments
Reducing BDD size by exploiting functional dependencies.
New techniques for efficient verification with implicitly conjoined BDDs.
model checking based on forward state traversal.
Requirements specification for process-control systems
Symbolic Model Checking.
Formal verification of the Gigamax cache consistency protocol.
Automatic verification of a hydroelectric power plant.
Dynamic variable ordering for ordered binary decision diagrams.
Feasibility of model checking software requirements: A case study.
271802
On the limit of control flow analysis for regression test selection.

Automated analyses for regression test selection (RTS) attempt to determine if a modified program, when run on a test t, will have the same behavior as an old version of the program run on t, but without running the new program on t. RTS analyses must confront a price/performance tradeoff: a more precise analysis might be able to eliminate more tests, but could take much longer to run. We focus on the application of control flow analysis and control flow coverage, relatively inexpensive analyses, to the RTS problem, considering how the precision of RTS algorithms can be affected by the type of coverage information collected. We define a strong optimality condition (edge-optimality) for RTS algorithms based on edge coverage that precisely captures when such an algorithm will report that re-testing is needed, when, in actuality, it is not. We reformulate Rothermel and Harrold's RTS algorithm and present three new algorithms that improve on it, culminating in an edge-optimal algorithm. Finally, we consider how path coverage can be used to improve the precision of RTS algorithms.

1 Introduction
The goal of regression test selection (RTS) analysis is to answer the
following question as inexpensively as possible:
Given test input t and programs old and new, does new(t)
have the same observable behavior as old(t)?
To appear, 1998 ACM/SIGSOFT International Symposium on
Software Testing and Analysis
Of course, it is desired to answer this question without running program
new on test t. RTS analysis uses static analysis of programs
old and new in combination with dynamic information (such as coverage
collected about the execution old(t) in order to
make this determination. An RTS algorithm either selects a test for
re-testing or eliminates the test.
Static analyses for RTS come in many varieties: some examine the
syntactic structure of a program [6]; others use control flow or control
dependence information [11, 12]; more ambitious analyses examine
the def-use chains or flow dependences of a program [9, 5].
Typically, each of these analyses is more precise than the previous,
but at a greater cost.
A safe (conservative) RTS analysis never eliminates a test t if new(t)
has different behavior than old(t). A safe algorithm may select
some test when it could have been eliminated.
We focus on the application of control flow analysis to safe regression
testing (from now on we will use CRTS to refer to "Control-
flow-based RTS"). Previous work has improved the precision of
CRTS analysis but left open the question of what the limit of such
analyses are. CRTS can be improved in two ways: by increasing the
precision of the analysis applied to the control flow graph representations
of programs old and new, or by increasing the precision of
the dynamic information recorded about the execution old(t). We
will address both issues and the interactions between them.
Our results are threefold:
ffl (Section 3) Building on recent work in CRTS by Rothermel
and Harrold [12], we show a strong relationship between
CRTS, deterministic finite state automata, and the intersection
of regular languages. We define the intersection graph
of two control flow graphs, which precisely captures the goal
of CRTS and forms the basis for a family of CRTS algorithms,
parameterized by what dynamic information is collected about
old(t).
ffl (Section 4) We consider the power of CRTS when the dynamic
information recorded about old(t) is edge coverage (i.e.,
whether or not each edge of old's control flow graph was
executed). We define a strong optimality condition (edge-
optimality) for CRTS algorithms based on edge coverage. We
then reformulate Rothermel and Harrold's CRTS algorithm
in terms of the intersection graph and present three new algorithms
that improve on it, culminating in an edge-optimal
algorithm. The first algorithm eliminates a test whenever
the Rothermel/Harrold algorithm does, and safely eliminates
more tests in general, at the same cost. The next two algorithms
are even more precise, but at greater computational
cost.
ffl (Section 5) By recording path coverage information about
rather than edge coverage, we can improve upon edge-
optimal algorithms. However, if path profiling is limited to
tracking paths of a bounded length (which is motivated by concerns
of efficiency), then an adversary will always be able to
choose a program new that will cause any CRTS algorithm
based on path coverage to fail.
Section 6 reviews related work and Section 7 summarizes the paper.
2 Background
We assume a standard imperative language such as C, C++, or Java
in which the control flow graph of a procedure P is completely determined
at compile time. In P's control flow graph G, each vertex
represents a basic block of instructions and each edge represents
a control transition between blocks. The translation of an abstract
syntax tree representation of a procedure into its control flow graph
representation is well known [1]. Since G is an executable representation
of P, we will talk about executing both P and G on a test
t.
We now define some graph terminology that will be useful in the
sequel.
Let G = (V, E, s, x) be a directed control flow graph with vertices V, edges E, a unique entry vertex s, from which all vertices
are reachable, and exit vertex x, which has no successors and is
reachable from all vertices. A vertex v is labelled with BB(v), the
code of the basic block it contains. Two different vertices may have
identical labels. It is often convenient to refer to a vertex by its label
and we will often do so, distinguishing vertices with identical labels
when necessary.
An edge v →l w connects source vertex v to target vertex w via a directed edge labelled l. The outgoing edges of each vertex
are uniquely labeled. Labels are values (typically, true or false for
boolean predicates) that determine where control will transfer next
after execution of BB(v). 2 If a vertex v has only one outgoing edge,
its label is e, which is not shown in the figures.
Since the outgoing edges of a vertex are uniquely labelled, an edge
may also be represented by a pair (v; l), which we call a
control transition or transition, for short. The vertex succ(v; l) denotes
the vertex that is the l-successor of vertex v (if it exists).
A path in G is a sequence of edges [e_1, e_2, ..., e_n], where the target vertex of e_i is the source vertex of e_{i+1} for 1 ≤ i < n. A path may be represented equivalently by an alternating sequence of vertices and edge labels [v_1, l_1, v_2, l_2, ..., v_n, l_n, v_{n+1}], where v_i is the source vertex of edge e_i (for 1 ≤ i ≤ n), v_{n+1} is the target vertex of e_n, and l_i is the label of edge e_i (1 ≤ i ≤ n). Given a path p of n edges (and n+1 vertices), let p_v[i] be the i-th vertex (1 ≤ i ≤ n+1) and p_l[i] be the i-th edge label (1 ≤ i ≤ n).
2 The number of outgoing edges of vertex v and the labels on
these edges are uniquely defined by BB(v). Thus, different vertices
that have identical basic blocks will have the same number of out-going
edges with identical labels.
Paths beginning at a designated vertex (for our purposes, the entry vertex s) are equivalently represented by a sequence of basic blocks and labels (rather than a sequence of edges or vertices and labels): [BB(v_1), l_1, BB(v_2), l_2, ..., BB(v_n), l_n, BB(v_{n+1})].
A complete path is a path from s to x.
Figure 1 shows two programs P and P' and their corresponding control flow graphs G and G'. For both G and G', the entry vertex is A and the exit vertex is x. The label of a vertex v denotes its basic block BB(v). Graph G has one occurrence of basic block C while graph G' has two occurrences of C. The graph G'' is the intersection graph of G and G' and is discussed next.
3 CRTS and the Intersection Graph
In control flow analysis, the graphical structure of a program is an-
alyzed, but the semantics of the statements in a program are not,
except to say whether or not two statements are textually identical.
This implies that CRTS algorithms must assume that every complete
path through a graph is potentially executable (even though
there may be unexecutable paths). Unexecutable paths cannot affect
the safety of CRTS algorithms, but may decrease their preci-
sion, just as they do in compiler optimization.
CRTS algorithms must be able to determine if two basic blocks
are semantically equivalent. Of course, this is undecidable in gen-
eral. Following Rothermel and Harrold, we use textual equivalence
of the code as a conservative approximation to semantic equiva-
lence, which is captured in the definition of equivalent vertices:
Two vertices v and w (from potentially different graphs) are equivalent
if the code of BB(v) is lexicographically identical to BB(w).
Let Equiv(v;w) be true iff v is equivalent to w. 3
Once we have equivalent vertices, we can extend equivalence to
paths, as follows: paths p and q are identical if p and q are the
same length, Equiv(p v [i]; q v [i]) is true for all i, and p l
all i. That is, p and q are identical words (over an alphabet of basic
blocks and labels).
The following simple definition (a restatement of that found in [11])
precisely captures the power of CRTS:
If graph G run on input t (denoted by G(t)) traverses complete
path p and graph G 0 contains complete path p 0 identical
to p, then G 0
(t) will traverse path p 0 and have the
same observable behavior as G(t).
The above definition translates trivially into the most precise and
computationally expensive CRTS algorithm: record the complete
execution path of G(t) (via code instrumentation that traces the
path [4]) and compare it to the control flow graph of G' to determine
if the path exists there. We will see later in Section 5 that any
algorithm that does not record the complete execution path of G(t)
can be forced, by an adversary choosing an appropriate graph G',
to select a test that could have been eliminated.
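This most precise algorithm can be sketched directly: replay the recorded complete path of G(t) through G', matching basic-block text and edge labels. The dictionary encoding of G' below is an illustrative assumption, not the paper's representation:

```python
def path_exists(gprime, entry, path):
    """Replay a complete path recorded from G(t) in G'.  The path alternates
    basic-block text and edge labels: [BB1, l1, BB2, l2, ..., BBn].
    gprime maps each vertex to its basic-block text ('code') and its
    labelled successors ('succ')."""
    blocks, labels = path[0::2], path[1::2]
    v = entry
    for i, block in enumerate(blocks):
        if gprime[v]["code"] != block:       # inequivalent basic block:
            return False                      # the test must be re-run
        if i == len(blocks) - 1:
            return True                       # whole path replayed in G'
        v = gprime[v]["succ"].get(labels[i])
        if v is None:                         # transition missing in G'
            return False
    return False                              # empty path

# A hypothetical G' made of blocks A, U, C, W and an exit:
gprime = {
    "A1": {"code": "if A", "succ": {"t": "U1", "f": "C1"}},
    "U1": {"code": "U",    "succ": {"e": "C1"}},
    "C1": {"code": "if C", "succ": {"t": "W1", "f": "X1"}},
    "W1": {"code": "W",    "succ": {"e": "X1"}},
    "X1": {"code": "exit", "succ": {}},
}
print(path_exists(gprime, "A1", ["if A", "f", "if C", "f", "exit"]))  # True
print(path_exists(gprime, "A1", ["if A", "t", "if C", "f", "exit"]))  # False
```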
We observe that a control flow graph G may be viewed as a deterministic
finite automaton (DFA) with start state s and final state x
that accepts the language L(G), the set of all complete paths in G.
More precisely, a control flow graph G has a straightforward interpretation
as a DFA in which each vertex v in V corresponds to two
3 The exit vertex x can only be equivalent to other exit vertices
(i.e., vertices with no successors).
Figure 1: Example programs P and P', their corresponding control flow graphs G and G', and the intersection graph G'' of G and G'.
states, v_1 and v_2. These states are connected by a state transition v_1 → v_2 labelled by BB(v). Edges in E are also interpreted as state transitions: an edge v →l w is interpreted as a state transition v_2 →l w_1. The alphabet of the DFA is the union of all basic blocks and all edge labels, s_1 is the start state, and x_2 is the final state. The DFA recognizes precisely the complete paths of G. Rather than
represent the control flow graph in this more verbose fashion, we
choose to present it in its traditional form but keep its DFA interpretation
in mind.
Given this insight, the CRTS question reduces to:
Is a complete path p from L(G) also in L(G')?
), the paths for which re-testing is not
needed, and let D(G;G 0
), the paths for which re-testing
is needed.
A CRTS algorithm is optimal if, given any path p in I(G;G 0 ), the
algorithm reports that p is in I(G;G 0 ). A CRTS algorithm is safe
if, given any path p in D(G;G 0 ), the algorithm reports that p is in
To help reason about I(G,G') and D(G,G'), we define a new graph
G'', the intersection graph of G and G',
which also has a straightforward interpretation as a
DFA. 4 This graph can be efficiently constructed from G and G'.
The vertex set V'' of G'' is simply the cross product of V and V',
with two additional vertices: accept and reject.
We use an equivalence relation (Equiv) on the vertices of G and G'
to help define the edge set E''.
4 G'' is essentially an optimized version of a product automaton
of G and G' [7].
The edge set E'' is defined in terms of -l-> and the Equiv relation.
The pair (s, s') is the entry vertex of G''. We will restrict the vertex
and edge sets of G'' to be the vertices and edges reachable from
(s, s'). If no vertices (other than (s, s')) or edges are reachable from
(s, s'), then s and s' are not equivalent. A pair (v, v') is reachable
from (s, s') iff there is a path p in G from s to v that is a prefix of a
path in I(G,G'). The vertex reject represents the reject state, which corresponds
to paths in D(G,G'); the vertex accept represents the accept
state, which corresponds to paths in I(G,G').
Figure 1 shows the intersection graph G'' of the graphs G and G' in
the figure. We can see that there are two paths in I(G,G'), corresponding
to two paths from (s, s') to accept in G''. The corresponding paths in G are
[A, f, C, f, ...] and [A, f, C, t, ...].
Graph G'' also shows that any path that begins
with the transition (A,t) is in D(G,G').
Two straightforward results about the intersection graph G'' will inform
the rest of the paper: A path p is in I(G,G') iff it is represented
by a path from (s, s') to accept in G''; a path p is in D(G,G') iff it
is represented by a path from (s, s') to reject in G''. Of course, every
complete path p in G is either in I(G,G') or D(G,G'). More formally:
Theorem 1 Let G'' be the intersection graph of graphs G and G'.
Path p from G is in I(G,G') iff the path in G'' representing p ends at accept.
Theorem 2 Let G'' be the intersection graph of graphs G and G'.
Path p from G is in D(G,G') iff there exists a prefix of p whose
representing path in G'' ends at reject.
Figure 2 shows how the intersection graph of graphs G and G' is
computed via a synchronous depth-first search of both graphs. The
procedure DFS is always called with equivalent vertices v and v'.
If (v, v') is already in V'', this pair has been visited before and the
procedure returns. Otherwise, (v, v') is inserted into V'' and each
edge leaving v, together with its corresponding edge leaving v', is considered in
turn. 5 Edges are appropriately inserted into E'' to reflect whether
or not vertices w and w' are equivalent, and whether or not w is the
exit vertex of G. The algorithm recurses only when w and w' are
equivalent and w is not the exit vertex of G. The algorithm also
computes the set of vertices V''_accept from which accept is reachable
in G'', which will be used later.
The worst-case time complexity of the algorithm is O(|E| · |E'|).
Note that it is not necessary to store the relation E'' explicitly, since
it can be derived on demand from V'', E and E'. Thus, the space
complexity for storing the intersection graph (as well as V''_accept) is
O(|V| · |V'|) in the worst case.
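The synchronous DFS just described can be sketched as follows. This is our own simplification, not the paper's Figure 2: we assume two vertices are equivalent exactly when their basic-block labels match, and we record the intersection graph's transitions as explicit triples.

```python
# Synchronous DFS over G and G', recording intersection-graph
# transitions as ((v, v'), label, target) triples, where target is a
# vertex pair, 'accept', or 'reject'. Simplifying assumption: vertices
# are equivalent iff their basic-block labels match. Names are ours.

def intersection_graph(G, G2, label, label2, s, s2, exit_):
    V2, E2 = set(), []
    def dfs(v, v2):
        if (v, v2) in V2:
            return                      # pair already visited
        V2.add((v, v2))
        succ2 = dict(G2.get(v2, []))
        for l, w in G.get(v, []):
            w2 = succ2.get(l)
            if w2 is None or label[w] != label2[w2]:
                E2.append(((v, v2), l, 'reject'))   # paths diverge here
            elif w == exit_:
                E2.append(((v, v2), l, 'accept'))   # both paths complete
            else:
                E2.append(((v, v2), l, (w, w2)))
                dfs(w, w2)
    dfs(s, s2)
    return V2, E2

def accept_reachable(E2):
    # V''_accept: intersection-graph vertices from which accept is reachable
    preds = {}
    for u, _, w in E2:
        preds.setdefault(w, []).append(u)
    seen, work = {'accept'}, ['accept']
    while work:
        for u in preds.get(work.pop(), []):
            if u not in seen:
                seen.add(u)
                work.append(u)
    return seen
```

As in the text, the recursion only continues while the two searches stay on equivalent, non-exit vertices; the accept-reachability pass is a simple backward search over the recorded triples.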
4 CRTS Using Edge Coverage
What is the limit of CRTS given that the dynamic information collected
about G(t) is edge coverage? Consider a complete path p
representing the execution path of G(t) and the set of edges E_p of
G that it covers. There may be another complete path q in G, distinct
from p, such that E_q = E_p. 6 Let P_p represent the set of paths
(including p) whose edge sets are identical to E_p.
To determine whether or not G' needs retesting, a CRTS algorithm
using edge coverage must consider (at least implicitly) all the paths
in P_p. If all of these paths are members of I(G,G') then the CRTS
algorithm can and should eliminate the test that generated path p.
However, if even one of the paths in P_p is in D(G,G') then the
algorithm must select the test in order to be safe.
Given this insight, we can now define what it means for a CRTS
algorithm to be edge-optimal:
A CRTS algorithm is edge-optimal if for any path p
such that P_p ⊆ I(G,G'), the algorithm reports that p is
in I(G,G').
5 Note that if v and v' are equivalent then w' must
be defined since BB(v') is identical to BB(v).
6 Note that no such paths can exist if G is acyclic. In this case,
each complete path has a different set of edges than all other complete
paths.
[Pseudocode for procedure DFS appears here; it initializes V''_accept to {accept} and, for each pair of corresponding edges, inserts transitions into E'' and updates V''_accept.]
Figure 2: Constructing the intersection graph of G and G' via
a synchronous depth-first search of the two graphs. The algorithm
also determines the set of vertices V''_accept from which
accept is reachable.
[Four panels appear here, one per algorithm: Rothermel/Harrold, Partial-reachability, Full-reachability, and Valid-reachability; each shows accept, reject, and V''_accept in the intersection graph.]
Figure 3: The four edge-based CRTS algorithms, summarized pictorially with the intersection graph. The dotted outline represents V''_accept, the
vertices of G'' from which accept is reachable.
Algorithm | Time | Space | Edge-optimal?
Rothermel/Harrold | O(|E| · (|E'| + |T|)) | O(|V| · |V'|) | no
Partial-reachability | O(|E| · (|E'| + |T|)) | O(|V| · |V'|) | no
Full-reachability | O(|E| · |E'| · |T|) | O(|V| · |V'|) | no
Valid-reachability | O(|E| · |E'| · |T|) | O(|V| · |V'|) | yes
Table 1: Comparison of four edge-based CRTS algorithms; precision increases down the table.
We first present the Rothermel/Harrold (RH) algorithm, restated in
terms of the intersection graph. We then present three new algorithms,
culminating in an edge-optimal algorithm. Figure 3 illustrates
what the RH algorithm and each of the new algorithms does,
using the intersection graph. 7 Each picture shows the start vertex
(s, s') and the states reject and accept. The dotted outline represents
V''_accept, the vertices of G'' from which accept is reachable.
- The RH algorithm detects whether or not E_p covers an edge incident
to reject. If it does not, then path p must be in I(G,G').
- The partial-reachability algorithm detects whether or not E_p
covers a path in the intersection graph from an edge leaving
V''_accept to the reject vertex. Again, if no such path exists then
p is in I(G,G'). A surprising result is that partial reachability
of reject can be determined with time and space complexity
equivalent to the RH algorithm. This algorithm is more precise
than the RH algorithm since it may be the case that E_p contains
an edge incident to reject but does not cover a partial path from
a vertex in V''_accept to reject.
- The full-reachability algorithm determines whether or not E_p
covers a path from (s, s') to reject. If not, then p is in I(G,G').
This algorithm is more precise than the partial-reachability algorithm,
but at a greater cost. However, it is still not edge-optimal.
- The valid-reachability algorithm makes use of a partial order
⊑ on edges in G to rule out certain "invalid" paths. We show
that if P_p ⊆ I(G,G') then E_p cannot cover a valid reaching
path to reject from (s, s'), yielding an edge-optimal algorithm.
Table 1 summarizes the time and space complexity for the four algorithms.
T represents the set of tests on which G has been run.
All edge-based CRTS algorithms incur a storage cost of O(|E| · |T|)
for the edge coverage information stored for each test in T, which
we factor out when discussing the space complexity of these algorithms.
7 If s is not equivalent to s', then I(G,G') is empty. We assume
that all four algorithms initially check this simple condition before
proceeding.
4.1 The Rothermel-Harrold Algorithm
We now present the RH algorithm in terms of the intersection graph
G''. The RH algorithm first computes the set D of control
transitions incident to reject (using a synchronous depth-first
search of graphs G and G' similar to that in Figure 2); D contains
each transition of G whose corresponding transition in G'' enters reject.
Given D and an edge set E_p, the RH algorithm then operates as
follows: if E_p contains no transition from D, then p must be in I(G,G'),
since a transition from D is required for p to be in D(G,G').
Otherwise, conservatively assume that p is in D(G,G').
Consider the intersection graph of Figure 1. For this graph,
D contains, among others, the transition (A,t). Since every path from A to X in graph G contains
one of the transitions in D, the RH algorithm will require all tests to be
rerun on G'. However, in this example, for any path p in I(G,G'),
P_p ⊆ I(G,G'), so the RH algorithm is not edge-optimal. Consider
such a path p with coverage E_p = {(A,f), (C,t), (W,e)}.
The transitions of G'' covered by E_p are shown as bold edges in
Figure 1. There is no complete path other than p that covers exactly
the transitions (A,f), (C,t) and (W,e).
The time and space complexity to compute D is clearly the same as
that for the depth-first search algorithm of Figure 2. To compute, for
all tests t in a set of tests T, whether or not the set of edges covered
by G(t) contains a transition from D, takes O(|E| · |T|) time. Thus,
the RH algorithm has an overall running time of O(|E| · (|E'| + |T|))
and space complexity of O(|V| · |V'|).
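The selection step itself is then a simple intersection test per test case. A sketch under our own encoding (edges of G as (vertex, label) pairs, coverage as a map from test id to edge set):

```python
# RH selection step: a test must be rerun iff its recorded edge set
# contains a transition from D. `coverage` maps test id -> set of
# (vertex, label) edges of G; the encoding is our own illustration.

def rh_select(D, coverage):
    """Return the tests to rerun: those whose covered edges intersect
    D. The remaining tests exercised paths guaranteed to be in I(G, G')."""
    return {t for t, edges in coverage.items() if edges & D}
```

For example, with D = {('A','t')}, a test covering ('A','t') is selected while a test covering only ('A','f') and ('C','t') is eliminated.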
Rothermel and Harrold show that if G and G' do not have a
"multiply-visited vertex" then their algorithm will never report that
p is in D(G,G') when p actually is in I(G,G'). This means that
their algorithm is optimal (and thus edge-optimal) for this class of
graphs. Stated in terms of the intersection graph G'', a vertex v in G
is a "multiply-visited vertex" if it occurs in more than one vertex pair of V''.
So in Figure 1, vertex C of graph G is a multiply-visited vertex.
Rothermel and Harrold ran their algorithm on a set of seven small
Figure 4: An example that shows that the partial-reachability algorithm is not edge-optimal.
programs (141-512 lines of code, 132 modified versions) and one
larger program (49,000 lines of code, 5 modified versions), and
found that the multiply-visited vertex condition did not occur for
these programs and their versions [12]. Further experimentation is
clearly needed on larger and more diverse sets of programs to see
how often this condition arises.
4.2 The Partial-reachability Algorithm
Let us reconsider the example of Figure 1. The dotted outline in
graph G'' shows the set V''_accept. The only transition leaving this set
is (A,t). Any path leading to reject must include this transition.
Thus, if this transition is not in E_p then p must be in I(G,G'), as is
the case with the path considered earlier,
which has E_p = {(A,f), (C,t), (W,e)}.
Consider the projection of E_p onto the edge set of G'',
and the graph G''_p that results (the edges of E''_p are shown
in bold in Figure 1). It is straightforward to see that, in general, for
any edge v'' -> w'' in G''_p that lies on a path to reject, reject must be reachable
from w'' in G''. Therefore, for an edge v'' -> w'' where v'' is
in V''_accept and w'' is not in V''_accept, it must be the case that reject is
reachable from w''.
This observation leads to the partial-reachability algorithm, which
has time and space complexity identical to that of the RH algorithm,
yet is more precise. This algorithm does not require construction
of G''_p, but is able to determine whether or not reject is partially
reachable from an edge leaving V''_accept.
Similar to the RH algorithm, this algorithm first computes a set
D_reject of transitions in G using the intersection graph;
D_reject contains the transitions in G whose corresponding transitions
in G'' transfer control out of V''_accept. The algorithm then operates as follows: If E_p
contains no transition from D_reject, then p is in I(G,G'), since p must contain a transition from D_reject
in order to be in D(G,G'). Otherwise, conservatively assume that p
is in D(G,G').
It is easy to see that the partial-reachability algorithm subsumes the
RH algorithm, since whenever E_p contains a transition from D_reject, it
also contains a transition from D. Stated another way, whenever the RH algorithm
reports that p is in I(G,G'), the partial-reachability algorithm will
report the same.
As shown in Figure 2, the set V''_accept can be determined during
construction of the intersection graph, in O(|E| · |E'|) time and O(|V| · |V'|)
space. To compute D_reject takes O(|E| · |E'|) time, since it
simply requires visiting every edge e'' in E'' to determine if e''
leaves V''_accept. If so, then the transition e in G corresponding to
e'' is added to D_reject. Once D_reject has been computed, the rest of
the algorithm is identical to the RH algorithm: for each test in T,
check whether or not the set of edges covered by the test has an edge
in D_reject. Thus, the time and space complexity of this algorithm is
identical to the RH algorithm.
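Under the same hypothetical encoding used in our earlier DFS sketch (intersection-graph transitions as ((v, v'), label, target) triples), D_reject can be computed in one pass over E'':

```python
# One pass over the intersection graph's transitions: a transition of G
# enters D_reject when its counterpart leaves V''_accept. `E2` holds
# ((v, v'), label, target) triples and `acc` is V''_accept (with
# 'accept' itself included). The encoding is our own illustration.

def d_reject(E2, acc):
    out = set()
    for (v, v2), l, w in E2:
        if (v, v2) in acc and w not in acc:
            out.add((v, l))             # corresponding transition of G
    return out
```

The per-test selection step is then identical to the RH sketch, with D_reject in place of D.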
4.3 The Full-reachability Algorithm
Figure 4 shows that the partial-reachability algorithm is not edge-optimal.
In this example, the intersection graph G'' has the transition (C,t)
on a path to reject. Thus, for a path p which is in I(G,G')
and for which P_p ⊆ I(G,G'),
both the RH algorithm and the partial-reachability
algorithm will fail to report that p is in I(G,G'), since
transition (C,t) is covered by path p. Note, however, that in G''_p the
reject vertex is not reachable from (s, s').
In general, either reject or accept must be reachable from (s, s')
in G''_p. The full-reachability algorithm is simple: If reject is not
reachable in G''_p, then p is in I(G,G'). Otherwise, conservatively
assume that p is in D(G,G').
Consider graph G in Figure 4. Any complete path in G containing
Figure 5: An example that shows that the full-reachability algorithm is not edge-optimal.
the transition (A,f), additionally, does not contain
the transition (A,t). Therefore, for any such path p, vertex reject is
not reachable from vertex (A,A) in G''_p.
The DFS algorithm in Figure 2 can be easily modified to compute
the reachability of reject in G''_p, but must be run for each test in T,
resulting in an overall running time of O(|E| · |E'| · |T|). The space
complexity remains the same as before.
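A sketch of the full-reachability check under our hypothetical encoding: keep only the intersection-graph transitions whose G-edge is covered by E_p, then search for reject from the start pair.

```python
# Full-reachability check: restrict the intersection graph to the
# transitions covered by E_p and ask whether reject is reachable.
# `E2` is a list of ((v, v'), label, target) triples, as in our
# earlier sketches; the encoding is our own illustration.

def reject_reachable(E2, start, Ep):
    succ = {}
    for (v, v2), l, w in E2:
        if (v, l) in Ep:                # keep only covered transitions
            succ.setdefault((v, v2), []).append(w)
    seen, work = {start}, [start]
    while work:
        for w in succ.get(work.pop(), []):
            if w not in seen:
                seen.add(w)
                work.append(w)
    return 'reject' in seen
```

If this returns False, the test that produced E_p can be eliminated; otherwise the algorithm conservatively selects it.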
4.4 The Valid-reachability Algorithm: An
Edge-optimal Algorithm
As shown in Figure 5, the full-reachability algorithm is not edge-optimal.
Consider a path p
in graph G which is in I(G,G') and has coverage
E_p = {(U,e), (A,t), (A,f)}. Every path in G that covers exactly these
transitions is in I(G,G'). Nonetheless, the projection of E_p onto G''
yields a graph in which reject is reachable from (U,U).
However, notice that for any path in graph G that includes both the
transitions (A,t) and (A,f), the first occurrence of the transition
(A,t) in the path must occur before the first occurrence of (A,f).
Therefore, all paths in P_p must have this property, since by definition
they cover (A,t), (A,f), and (U,e). But the set of transitions
in the path by which reject is reachable in G''_p includes (U,e) and
(A,f), and does not include (A,t) before (A,f). So, this path cannot
be in P_p and should be ignored.
The problem then is that the full-reachability algorithm considers
paths that are not in P_p but reach reject in G''_p. By refining the
notion of reachability, we arrive at an edge-optimal algorithm. We
define a partial order ⊑ on the edges of graph G as follows:
e ⊑ f iff, for every complete path p in G containing both edges
e and f, the first instance of e in p precedes the first instance
of f in p.
We leave it to the reader to prove that ⊑ is indeed a partial order
(it is anti-symmetric, transitive, and reflexive). An equivalent but
constructive definition of ⊑ follows:
e ⊑ f iff e dominates f 8 or ( f is reachable from e, and e
is not reachable from f ).
The ⊑ relation for graph G in Figure 5 is (U,e) ⊑ (A,t) ⊑ (A,f).
The valid-reachability algorithm is based on the following observation:
If a path q contains a transition f ∈ E_p but does not contain
a transition e ∈ E_p such that e ⊑ f in G, then any path with q as a
prefix cannot be a member of P_p. We say that such a path does not
respect ⊑.
The valid-reachability algorithm first checks if reject is reachable
from (s, s') in G''_p. If not, then p is in I(G,G'), as before. If reject is
reachable, the algorithm computes R'', the set of transitions in G''_p
that are reachable from (s, s') and from which reject is reachable. It
also computes the projection R of these transitions onto G; note that
R is a subset of E_p. If E_p contains edges e and f such that
e ∉ R, f ∈ R and e ⊑ f, then the algorithm outputs that p is in
I(G,G'). Otherwise, the algorithm conservatively assumes that p is
in D(G,G').
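The constructive definition of ⊑ can be sketched naively as follows. This is our own illustration: a real implementation would use a dominator algorithm rather than these per-query searches, and the loop-shaped example graph is merely chosen to be consistent with the ⊑ chain quoted above for Figure 5.

```python
# Naive sketch of the constructive definition: e ⊑ f iff e dominates f,
# or f is reachable from e and e is not reachable from f. A graph maps
# a vertex to a list of (label, successor); an edge of G is the pair
# (vertex, label). All names and encodings are our own.

def _vreach(G, start, skip=None):
    # vertices reachable from `start`, optionally ignoring edge `skip`
    seen, work = {start}, [start]
    while work:
        v = work.pop()
        for l, w in G.get(v, []):
            if (v, l) != skip and w not in seen:
                seen.add(w)
                work.append(w)
    return seen

def _ereach(G, e, f):
    # edge f is reachable from edge e: after traversing e, a path can
    # still go on to traverse f (f's source is reachable from e's head)
    head = dict(G[e[0]])[e[1]]
    return f[0] in _vreach(G, head)

def dominates(G, s, e, f):
    # every path from s that traverses f also traverses e (naive check:
    # delete e and see whether f's source is still reachable from s)
    return e == f or f[0] not in _vreach(G, s, skip=e)

def precedes(G, s, e, f):      # e ⊑ f
    return dominates(G, s, e, f) or (_ereach(G, e, f)
                                     and not _ereach(G, f, e))
```

On the hypothetical loop graph {'U': [('e','A')], 'A': [('t','A'), ('f','x')]} with start U, this reproduces (U,e) ⊑ (A,t) ⊑ (A,f).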
It is straightforward to show that the valid-reachability algorithm is
safe. The following theorem shows that it is also edge-optimal:
Theorem 3 Let G'' be the intersection graph of graphs G and G'.
If P_p ⊆ I(G,G') for any complete path p in G, then either
- reject is not reachable from (s, s') in G''_p, or
- E_p contains edges e and f such that e ∉ R, f ∈ R and e ⊑ f.
Proof: If reject is not reachable in G''_p then we are done. Instead,
suppose that reject is reachable from (s, s') in G''_p. Furthermore,
assume that for all f ∈ R and e ∈ E_p such that e ⊑ f, we have e ∈ R
(that is, R is closed with respect to ⊑). Given
these assumptions, we will show that there is a complete path q in
D(G,G') such that E_q = E_p, contradicting our initial assumption
that all paths with edge coverage equal to E_p are in I(G,G').
There are two parts to the proof: 1. show that there is a path q_1 in G
from entry to a vertex v that covers only transitions from R, respects ⊑, and
8 An edge e dominates edge f in graph G if every path from s to
f in G contains e.
Figure 6: An example for which any CRTS algorithm based on edge coverage cannot distinguish a path in I(G,G') from a path in D(G,G').
induces a path in G'' from (s, s') to reject; 2. show that there is a
path q_2 from v to x in G that covers the transitions in E_p - R and
does not cover a transition outside E_p. The concatenation of paths
q_1 and q_2 yields a path q in D(G,G') such that E_q = E_p.
The existence of path q_1 follows from the closure property of R
with respect to ⊑ (if f ∈ R, e ∈ E_p and e ⊑ f then e ∈ R), and the
fact that R is the projection of R'', the transitions by which reject is
reachable from (s, s') in G''_p.
We now show the existence of path q_2. Let e be the last edge in
path q_1. Since E_p is the edge coverage of a complete path p, it
follows that for all edges e and f in E_p, either f is reachable from e
in G via transitions in E_p or e is reachable from f via transitions in
E_p. Since the path q_1 respects ⊑, it also follows that for all edges
f in E_p - R, either e ⊑ f or e and f cannot be related by ⊑. In
the former case, f is reachable from e via transitions in E_p. In the
latter case, edges e and f are not related by ⊑, so it follows that e
and f must both be reachable from the other via transitions in E_p,
completing our proof.
The time complexity of the valid-reachability algorithm is O(|E| · |E'| · |T|).
The algorithm requires, for each test in T, the construction
of G''_p and the set R, which takes time O(|E| · |E'|), dominating
all other steps in the algorithm. Using an extended version of the
Lengauer/Tarjan immediate dominator algorithm [8], the immediate
⊑ relation for G can be computed in near-linear time and space
in the size of G. To determine whether or not the set of edges R
is closed with respect to E_p and ⊑ requires the following steps: 1.
projecting ⊑ onto E_p to create ⊑_p, an O(|E|) operation; 2. visiting
each immediate relation e ⊑_p f to check if e ∉ R and f ∈ R. As two
constant-time set membership operations are performed for each
immediate ⊑_p relation, of which there are O(|E|), this step takes
O(|E|) time. The space complexity of the valid-reachability algorithm
remains at O(|V| · |V'|).
5 CRTS Using Path Coverage
Figure 6 shows that any CRTS algorithm based on edge coverage
can be forced to make an incorrect (but safe) decision. It presents
two programs, their graphs G and G', and their intersection graph
G''. Consider a path p that is in I(G,G'). The edge set E_p of p is
exactly the same set of edges covered by certain paths q in D(G,G').
Thus, it is impossible to determine whether a path in I(G,G') or in
D(G,G') produced the edge set E_p.
We consider how the path profiling technique of
Ammons/Ball/Larus (ABL) [2] applied to the graphs in Figure 6
can separate the paths p and q. The ABL algorithm decomposes
a control flow graph into acyclic paths based on the backedges
identified by a depth-first search from s. Suppose that v -> w is a
backedge. The ABL decomposition yields four classes of paths:
(1) A path from s to x.
(2) A path from s to v, ending with backedge v -> w.
(3) A path from w to v (after execution of backedge v -> w), ending
with execution of backedge v -> w.
(4) After execution of backedge v -> w, a path from w to x.
Graph G has backedge A -t-> A. Applying the ABL decomposition
to graph G in Figure 6 yields a total of four paths, p_1 through p_4
(corresponding to the four types listed above).
The ABL algorithm inserts instrumentation into program P to track
whether or not each of these four paths is covered in an execution.
Recall the paths p and q that got edge-based CRTS into trouble.
Path p is composed of the path p_2 followed by p_4, so ABL will
record that only these two paths are covered when p executes. On
the other hand, the path q is composed of p_2, followed by p_3, followed
by p_4. Thus, for this example where edge coverage could not
distinguish the two paths, the ABL path coverage does.
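The decomposition of a recorded complete path into ABL segments can be sketched by cutting the path after each backedge (our own simplification of the ABL scheme; the graph, edge encoding, and example paths are hypothetical):

```python
# Cut a recorded complete path into the acyclic segments that ABL path
# profiling tracks: a segment ends whenever a backedge is traversed.
# Edges are (source, label) pairs; graph and paths are hypothetical.

def abl_segments(path, backedges):
    segs, cur = [], []
    for e in path:
        cur.append(e)
        if e in backedges:
            segs.append(tuple(cur))
            cur = []
    if cur:
        segs.append(tuple(cur))
    return segs

backedge = ('A', 't')                      # a loop A -t-> A
p = [('U', 'e'), ('A', 't'), ('A', 'f')]                 # one iteration
q = [('U', 'e'), ('A', 't'), ('A', 't'), ('A', 'f')]     # two iterations
```

Here p and q cover exactly the same edges, yet q additionally covers the lone-backedge segment, so their ABL segment coverages differ.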
As mentioned in the introduction, an adversary can create a graph
G' such that any control-flow-based RTS algorithm that records less
than the complete path executed through G will be unable to distinguish
a path in I(G,G') from a path in D(G,G'). This is only true
if G contains cycles, as it does in our example.
In the example from Figure 6, we can defeat the ABL path coverage
by adding another if-then conditional around
the outermost conditional in program P'. Now, there is a path
in I(G,G'), and a path in which (A,t) occurs one more time that
is in D(G,G'). However, both these paths cover exactly the same set of
ABL paths, so they will not be distinguished unless
longer paths are tracked. For any cutoff chosen, we can add another
level of nesting and achieve the same effect.
6 Related Work
Rothermel and Harrold define a framework for comparing different
regression test selection methods [11], based on four characteristics:
- Inclusiveness, the ability to choose modification-revealing
tests (paths in D(G,G'));
- Precision, the ability to eliminate or exclude tests that will not
reveal behavioral differences (paths in I(G,G'));
- Efficiency, the space and time requirements of the method; and
- Generality, the applicability of the method to different classes
of languages, modifications, etc.
Our approach shares many similarities with the RH algorithm. The
three reachability algorithms are based on control flow analysis and
edge coverage. The partial-reachability algorithm is just as inclusive
as the RH algorithm but is more precise with equivalent efficiency.
The full-reachability and valid-reachability algorithms are
even more precise, but at a greater cost. We have not yet considered
how to generalize our algorithms to handle interprocedural control
flow, as they have done.
Rothermel shows that the problem of determining whether or not
a new program is "modification-traversing" with respect to an old
program and a test t is PSPACE-hard [10]. Intuitively, this is because
the problem involves tracing the paths that the programs execute,
and the paths can have size exponential in the input program
size (or worse). Of course, given a complete path through an old
program and a new program, it is a linear-time decision procedure
to determine if the new program contains the path. However, this
defines away the real problem: that the size of the path can be unbounded.
We have considered the best a CRTS algorithm can do
when the amount of information recorded about a program's execution
is O(|E|) (edge coverage) or exponential in the number of edges
(ABL path coverage).
Summary
We have formalized control-flow-based regression test selection using
finite automata theory and the intersection graph. The partial-reachability
algorithm has time and space complexity equivalent to
the best previously known algorithm, but is more precise. In addition,
we defined a strong optimality condition for edge-based regression
test selection algorithms and demonstrated an algorithm
(valid-reachability) that is edge-optimal. Finally, we considered
how path coverage can be used to further improve regression test
selection.
A crucial question on which the practical relevance of our work
hinges is whether or not the "multiply-visited" vertex condition defined
by Rothermel and Harrold occurs in practice. For versions of
programs that do not have this condition, the RH algorithm is optimal.
When this condition does occur, as we have shown, the RH
algorithm is not even edge-optimal. We plan to analyze the extensive
version control repositories of systems in Lucent [3] to address
this question.
Acknowledgements
Thanks to Mooly Sagiv and Patrice Godefroid for their suggestions
pertaining to finite state theory. Thanks also to Glenn Bruns, Mary
Jean Harrold, Gregg Rothermel, Mike Siff, Mark Staskauskas and
Peter Mataga for their comments.
--R
Exploiting hardware performance counters with flow and context sensitive profiling.
If your version control system could talk.
Optimally profiling and tracing programs.
Incremental program testing using program dependence graphs.
A system for selective regression testing.
Introduction to Automata Theory.
A fast algorithm for finding dominators in a flow graph.
Using data flow analysis for regression testing.
Efficient, Effective Regression Testing Using Safe Test Selection Techniques.
Analyzing regression test selection techniques.
--TR
Compilers: principles, techniques, and tools
Incremental program testing using program dependence graphs
Optimally profiling and tracing programs
Analyzing Regression Test Selection Techniques
A safe, efficient regression test selection technique
TestTube
Exploiting hardware performance counters with flow and context sensitive profiling
A fast algorithm for finding dominators in a flowgraph
Introduction To Automata Theory, Languages, And Computation
Efficient, effective regression testing using safe test selection techniques
--CTR
Amitabh Srivastava , Jay Thiagarajan, Effectively prioritizing tests in development environment, ACM SIGSOFT Software Engineering Notes, v.27 n.4, July 2002
Guoqing Xu, A regression tests selection technique for aspect-oriented programs, Proceedings of the 2nd workshop on Testing aspect-oriented programs, p.15-20, July 20-20, 2006, Portland, Maine
Mary Jean Harrold , Gregg Rothermel , Rui Wu , Liu Yi, An empirical investigation of program spectra, ACM SIGPLAN Notices, v.33 n.7, p.83-90, July 1998
Alessandro Orso , Nanjuan Shi , Mary Jean Harrold, Scaling regression testing to large software systems, ACM SIGSOFT Software Engineering Notes, v.29 n.6, November 2004
Gregg Rothermel , Roland J. Untch , Chengyun Chu, Prioritizing Test Cases For Regression Testing, IEEE Transactions on Software Engineering, v.27 n.10, p.929-948, October 2001
Mary Jean Harrold , James A. Jones , Tongyu Li , Donglin Liang , Alessandro Orso , Maikel Pennings , Saurabh Sinha , S. Alexander Spoon , Ashish Gujarathi, Regression test selection for Java software, ACM SIGPLAN Notices, v.36 n.11, p.312-326, 11/01/2001
Gregg Rothermel , Mary Jean Harrold, Empirical Studies of a Safe Regression Test Selection Technique, IEEE Transactions on Software Engineering, v.24 n.6, p.401-419, June 1998
John Bible , Gregg Rothermel , David S. Rosenblum, A comparative study of coarse- and fine-grained safe regression test-selection techniques, ACM Transactions on Software Engineering and Methodology (TOSEM), v.10 n.2, p.149-183, April 2001
Mary Jean Harrold , David Rosenblum , Gregg Rothermel , Elaine Weyuker, Empirical Studies of a Prediction Model for Regression Test Selection, IEEE Transactions on Software Engineering, v.27 n.3, p.248-263, March 2001
Jianjun Zhao , Tao Xie , Nan Li, Towards regression test selection for AspectJ programs, Proceedings of the 2nd workshop on Testing aspect-oriented programs, p.21-26, July 20-20, 2006, Portland, Maine
Guoqing Xu , Atanas Rountev, Regression Test Selection for AspectJ Software, Proceedings of the 29th International Conference on Software Engineering, p.65-74, May 20-26, 2007
Nancy J. Wahl, An overview of regression testing, ACM SIGSOFT Software Engineering Notes, v.24 n.1, p.69-73, Jan. 1999 | profiling;coverage;control flow analysis;regression testing |
271804 | All-du-path coverage for parallel programs. | One significant challenge in bringing the power of parallel machines to application programmers is providing them with a suite of software tools similar to the tools that sequential programmers currently utilize. In particular, automatic or semi-automatic testing tools for parallel programs are lacking. This paper describes our work in automatic generation of all-du-paths for testing parallel programs. Our goal is to demonstrate that, with some extension, sequential test data adequacy criteria are still applicable to parallel program testing. The concepts and algorithms in this paper have been incorporated as the foundation of our DELaware PArallel Software Testing Aid, della pasta. | Introduction
Recent trends in computer architecture and computer networks suggest
that parallelism will pervade workstations, personal computers,
and network clusters, causing parallelism to become available
to more than just the users of traditional supercomputers. Experience
with using parallelizing compilers and automatic parallelization
tools has shown that these tools are often limited by the underlying
sequential nature of the original program; explicit parallel
programming by the user replacing sequential algorithms by parallel
algorithms is often needed to take utmost advantage of these modern
systems. A major obstacle to users in ensuring the correctness and
reliability of their parallel software is the current lack of software
testing tools for this paradigm of programming.
Researchers have studied issues regarding the analysis and testing of
concurrent programs that use rendezvous communication. A known
hurdle for applying traditional testing approaches to testing parallel
Prepared through collaborative participation in the Advanced
Telecommunications/Information Distribution Research Program
(ATIRP) Consortium sponsored by the U.S. Army Research Laboratory
under Cooperative Agreement DAAL01-96-2-0002.
programs is the nondeterministic nature of these programs. Some
researchers have focused on solving this problem[13, 15], while
others propose state-oriented program testing criteria for testing
concurrent programs[14, 10]. Our hypothesis is that, with some
extension, sequential test data adequacy criteria are still applicable
to parallel program testing of various models of communication.
Although many new parallel programming languages and libraries
have been proposed to generate and manage multiple processes
executing simultaneously on multiple processors, they can be categorized
by their synchronization and communication mechanisms.
Message passing parallel programming accomplishes communication
and synchronization through explicit sending and receiving of
messages between processes. Message passing operations can be
blocking or nonblocking. Shared memory parallel programming
uses shared variables for communication, and event synchronization
operations.
In this paper, we focus on the applicability of one of the major testing
criteria, all-du-path testing[16], to both shared memory and message
passing parallel programming. In particular, we examine the problem
of finding all-du-path coverage for testing a parallel program.
The ultimate goal is to be able to generate test cases automatically
for testing programs adequately according to the all-du-path criterion.
Based on this criterion, all define-use associations in a program will
be covered by at least one test case. The general procedure for finding
a du-pair coverage begins with finding du-pairs in a program.
For each du-pair, a path is then generated to cover the specific du-pair.
Finally, test data for testing the path is produced [2][7].
testing procedure has been well established for sequential programs;
however, there is currently no known method for determining the
all-du-path coverage for parallel programs. Moreover, the issues
to be addressed toward developing such algorithms are not well
defined.
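As an illustration of the first step of this procedure, the sketch below finds def-use pairs on a toy flow graph by iterating reaching definitions to a fixed point. All structures are our own simplification, not della pasta's implementation.

```python
# Toy du-pair finder: iterate reaching definitions to a fixed point,
# then pair each use with every definition that reaches it. Nodes
# carry `defs` and `uses` of variable names; encoding is our own.

def du_pairs(nodes, edges):
    preds = {n: [] for n in nodes}
    for u, v in edges:
        preds[v].append(u)
    reach_out = {n: set() for n in nodes}       # (var, def_node) pairs
    changed = True
    while changed:
        changed = False
        for n in nodes:
            rin = set().union(*(reach_out[p] for p in preds[n])) \
                if preds[n] else set()
            out = {d for d in rin if d[0] not in nodes[n]['defs']}
            out |= {(v, n) for v in nodes[n]['defs']}
            if out != reach_out[n]:
                reach_out[n] = out
                changed = True
    pairs = set()                               # (var, def_node, use_node)
    for n in nodes:
        rin = set().union(*(reach_out[p] for p in preds[n])) \
            if preds[n] else set()
        for v in nodes[n]['uses']:
            pairs |= {(v, d, n) for (var, d) in rin if var == v}
    return pairs
```

For a diamond where node 1 and node 2 both define x and node 3 uses it, both du-pairs are reported, each of which a test path would have to cover.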
We present our algorithms for shared memory parallel programs,
and then discuss the modifications necessary for the message passing
paradigm. We have been building a testing tool for parallel
software, the Delaware Parallel Software Testing Aid, called della
pasta, to illustrate the effectiveness and usefulness of our tech-
niques. della pasta takes a shared memory parallel program as
input, and interactively allows the user to visually examine the all-
du-path test coverages, pose queries about the various test coverages,
and modify the test coverage paths as desired. In our earlier paper,
we focused strictly on the all-du-path finding algorithm[18].
We begin with a description of the graph representation of a parallel
program used in our work. We then describe our testing paradigm
and how we cope with the nondeterministic nature of parallel programs
during the testing process. We discuss the major problems
in providing all-du-path coverage for shared memory parallel pro-
grams, and a set of conditions to be used in judging the effectiveness
of all-du-path testing algorithms. Current approaches to
all-du-path coverage for sequential programs of closest relevance
to our work are then discussed. We present our algorithm for finding
an all-du-path coverage for shared memory parallel programs,
which combines and extends previous methods for sequential pro-
grams. Modification of our data structures and algorithms for other
parallel paradigms is discussed followed by the description of the
della pasta tool. Finally, a summary of contributions and future
directions are stated.
2 Model and Notation
The parallel program model that we use in this paper consists of
multiple threads of control that can be executed simultaneously. A
thread is an independent sequence of execution within a parallel
program, (i.e., a subprocess of the parallel process, where a process
is a program in execution). The communication between two threads
is achieved through shared variables; the synchronization between
two threads is achieved by calling post and wait system calls; and
thread creation is achieved by calling the pthread create system call.
We assume that the execution environment supports maximum par-
allelism. In other words, each thread is executed in parallel independently
until a wait node is reached. This thread halts until a
matching post is executed. The execution of post always succeeds
without waiting for any other program statements.
Formally, a shared memory parallel program PROG can be defined as
a set of threads T_1, ..., T_k. Moreover, T_1 is defined as the manager thread while
all other threads are defined as worker threads, which are created
when a pthread_create() system call is issued.

[Figure 1: Example of a PPFG. Two threads are shown, with begin, loop, post, wait, and pthread_create nodes, a define node d: m=x+y, and a use node u: z=3*m.]
To represent the control flow of a parallel program, a Parallel Program
Flow Graph (PPFG) is defined to be a graph G = (V, E), in
which V is the set of nodes representing statements in the program,
and E consists of three sets of edges, E = E_I ∪ E_S ∪ E_T. The set E_I
consists of intra-thread control flow edges (m_i, n_i), where m_i and n_i
are nodes in thread T_i. The set E_S consists of synchronization edges
(post_i, wait_j), where post_i is a post statement in thread T_i, wait_j is a
wait statement in thread T_j, and i ≠ j. The set E_T consists of thread
creation edges (n_i, n_j), where n_i is a call statement in thread T_i to
the pthread_create() function, and n_j is the first statement in thread T_j.
We define a path P_i(n_u1, ..., n_uw), or simply P_i, within a thread T_i to be an
alternating sequence of nodes and intra-thread edges,
or simply a sequence of nodes n_u1, n_u2, ..., n_uw, where
each u_w is the unique node index in a unique numbering of the nodes
and edges in the control flow graph of the thread T_i (e.g., a reverse
postorder numbering).
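A reverse postorder numbering of this kind can be computed with an ordinary depth-first traversal of a thread's flow graph. The sketch below is ours, not the paper's; the adjacency-list encoding is a hypothetical stand-in for a thread's intra-thread edges.

```python
def reverse_postorder(succ, entry):
    """Assign each node a reverse-postorder number, starting at 1.

    succ: dict mapping each node to its list of successors (the
    intra-thread control-flow edges of one thread);
    entry: the thread's begin node.
    """
    order, seen = [], set()

    def dfs(n):
        seen.add(n)
        for m in succ.get(n, []):
            if m not in seen:
                dfs(m)
        order.append(n)          # postorder position

    dfs(entry)
    order.reverse()              # reverse postorder
    return {n: i + 1 for i, n in enumerate(order)}

# A tiny thread flow graph: begin -> loop -> {body -> loop, end}
succ = {"begin": ["loop"], "loop": ["body", "end"], "body": ["loop"]}
rpo = reverse_postorder(succ, "begin")
```

With this successor ordering the numbering is begin=1, loop=2, end=3, body=4; loop back edges are simply skipped because the loop header is already marked as seen.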
Figure 1 illustrates a PPFG. All solid edges are intra-thread edges.
Edges in E_S and E_T are represented by dotted edges. This diagram
also shows a define node d of variable m, i.e., m=x+y, and a use
node u of m, i.e., z=3*m. The sequence begin - ... is a path.
A du-pair is a triplet (var, n_u^i, n_v^j), where n_u^i is the u-th node in
thread T_i in the unique numbering of the nodes in thread T_i, and the
program variable var is defined in the statement represented by node
n_u^i, while the program variable var is referenced in the v-th node, n_v^j, in
the unique ordering of nodes in thread T_j.
In a sequential program or a single thread T_i of a parallel program,
we say that a node n is covered by a path, denoted n ∈_p P, if there
exists a node n_s in the path such that n = n_s. We say that a node
in a parallel program is covered by a set of paths PATH = {P_1, ..., P_k}
in threads T_1, ..., T_k, respectively, or simply n ∈_p PATH, if it is
covered by some path in the set.
We represent the set of matching posts of a wait node w as
MP(w) = {p | (p, w) ∈ E_S}, and the set of matching waits of a post node p as
MW(p) = {w | (p, w) ∈ E_S}. We use the symbol "≺" to represent the
relation between the completion times of instances of two statement
nodes. We say a ≺ b if an instance of the node a completes execution
before an instance of the node b.
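The matching-post and matching-wait sets can be read directly off the synchronization edges E_S. A minimal sketch (ours; the edge labels are hypothetical):

```python
from collections import defaultdict

def matching_sets(sync_edges):
    """Given E_S as (post, wait) pairs, build MP(w) and MW(p)."""
    MP, MW = defaultdict(set), defaultdict(set)
    for post, wait in sync_edges:
        MP[wait].add(post)   # posts that can release this wait
        MW[post].add(wait)   # waits that this post can release
    return MP, MW

# Hypothetical synchronization edges between two threads
E_S = [("post6", "wait13"), ("post7", "wait13"), ("post7", "wait15")]
MP, MW = matching_sets(E_S)
```

Since E_S is a relation rather than a bijection, a single wait may have several matching posts and vice versa, which is exactly why the coverage conditions below quantify over these sets.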
Finally, the problem of finding all-du-path coverage for testing a
shared memory parallel program can be stated as: Given a shared
memory parallel program PROG and a du-pair (var, n_u^i, n_v^j) in PROG,
find a set of paths PATH in threads T_1, ..., T_k
that covers the du-pair (var, n_u^i, n_v^j), such
that n_u ≺ n_v.^1
3 Nondeterminism and the Testing Process
Nondeterminism is demonstrated by running the same program with
the same input and observing different behaviors, i.e., different
sequences of statements being executed. Nondeterminism makes it
difficult to reproduce a test run, or replay an execution for debugging.
It also implies that a given test data set may not actually force the
intended path to be covered during a particular testing run.
One way to deal with nondeterminism is to perform a controlled
execution of the program, by having a separate execution control
mechanism that ensures a given sequence of execution. We advocate
controlled execution for reproducing a test when unexpected results
are produced from a test, but we have not taken this approach to
the problem of automatically generating and executing test cases to
expose errors. Instead, we advocate temporal testing for this stage
of testing.

^1 We focus on finding du-pairs with the define and use in different
threads; du-pairs within the same thread are a subcase.
We briefly describe our temporal testing paradigm here, and refer
the reader to [19] for a more detailed description. Temporal testing
alters the scheduled execution time of program segments in order to
detect synchronization errors. Formally, a program test case TC is a
2-tuple (PROG, I), where I is the input data to the program PROG,
whereas a temporal test case TTC is a 3-tuple (PROG, I, D), where
the third component, referred to as timing changes, is a parameter
for altering the execution time of program segments. Based on D,
the scheduled execution time of certain synchronization instructions
n, represented as t(n), will be changed for each temporal test, and
the behavior of the program PROG will be observed.
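As a data structure, the two kinds of test cases differ only in the third component. The sketch below shows how a testing tool might represent them; the field names and the delay encoding are our own assumptions, not the paper's.

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:                        # TC = (PROG, I)
    prog: str                          # program under test
    inputs: tuple                      # input data I

@dataclass
class TemporalTestCase(TestCase):      # TTC = (PROG, I, D)
    # D: timing changes -- here, a hypothetical map from a
    # synchronization node n to a delay (ms) that alters t(n)
    delays: dict = field(default_factory=dict)

ttc = TemporalTestCase("prog.c", (3, 5), {"post6": 20, "wait13": 0})
```

Representing D per synchronization node makes it straightforward to instrument the program with the dummy-computation delays described below.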
Temporal testing is used in conjunction with path testing. For ex-
ample, temporal all-du-path testing can be implemented by locating
delay points along the du-paths being tested. The goal is to alter the
scheduled execution time of all process creation and synchronization
events along the du-paths. Delayed execution at these delay
points is achieved by instrumenting the program with dummy computation
statements. A testing tool is used to automatically generate
and execute the temporal test cases. Similarly, new temporal testing
criteria can be created by extending other structural testing criteria.
With the temporal testing approach, the testing process is viewed as
occurring as follows:
(1) Generate all-du-paths statically.
(2) Execute the program multiple times without considering any
possible timing changes.
(3) Examine the trace results. If the trace results indicate that different
paths were in fact executed, it is a strong indication that a
synchronization error has occurred and the du-path expected to be
covered may provide some clue about the probable cause. Controlled
execution may be used to reproduce the test. However, even
if the same du-path was covered in multiple execution runs, temporal
testing should still be performed.
(4) Generate temporal test cases with respect to the du-paths.
(5) Perform temporal testing automatically.
(6) Examine the results.
In this paper, we focus on the first step, i.e., developing an algorithm
to find all-du-paths for shared memory parallel programs. The
results of this paper can be used for generating temporal test cases
with respect to the all-du-path coverage criterion. It should be
noted that it is possible that the path we want to cover is not executed
during a testing run due to nondeterminism, because we are not using
controlled execution; instead, we use automatic multiple executions
with different temporal tests to decrease the chance that the
intended path will not be covered.
4 All-du-path Coverage
In this section, we use some simplified examples to demonstrate
some of the inherent problems to be addressed in finding all-du-
paths in parallel programs. This list is not necessarily exhaustive,
but instead meant to illustrate the complexity of the problem of
automatically generating all-du-paths for parallel programs.
[Figure 2: Du-pair coverage may cause an infinite wait. A manager thread (begin, pcreate, loop, if, posts 6 and 7, end) and a worker thread (begin, loop, waits 13 and 15, y=x, if, end) are shown with the generated PATH COVERAGE highlighted.]

Figure 2 contains two threads, the manager thread and a worker
thread. This figure demonstrates a path coverage that indeed covers
the du-pair, but does not cover both the post and the wait of a
matching post and wait. If the post is covered and not a matching
wait, the program will execute to completion, despite the fact that
the synchronization is not covered completely. However, if the wait
is covered and not a matching post, then the program will hang
with the particular test case. In this example, the worker thread
may not complete execution, whereas the manager thread will
terminate successfully. The generated path will cause the loop in the
manager thread to iterate only once, while the loop in the worker
thread will iterate twice. This shows how the inconsistency in the
number of loop iterations may cause one thread to wait infinitely. In
addition, branch selection at an if node can also influence whether
or not all threads will terminate successfully.
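The mismatch can be reproduced concretely. In the sketch below (our own, not from the paper) a counting semaphore stands in for post/wait, and timeouts stand in for the infinite wait: the manager posts once but the worker waits twice, so the second wait never completes.

```python
import threading

# Minimal simulation of the Figure 2 scenario:
# post = sem.release(), wait = sem.acquire().
sem = threading.Semaphore(0)

def manager():
    sem.release()                     # one post (manager loop runs once)

def worker(result):
    ok1 = sem.acquire(timeout=1)      # first wait: released by the post
    ok2 = sem.acquire(timeout=1)      # second wait: no post, times out
    result.extend([ok1, ok2])

result = []
t = threading.Thread(target=worker, args=(result,))
t.start(); manager(); t.join()
```

In a real run the second `acquire` would block forever; the timeout only makes the hang observable, so `result` ends up `[True, False]`.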
[Figure 3: Du-pair is incorrectly covered. A manager thread and a worker thread are shown with the generated PATH COVERAGE highlighted; the define is node 14 in the worker thread and the use is node 5 (y=x+3) in the manager thread.]
In Figure 3, the generated paths cover both the define (14) and the use
(5) nodes, but the use node will be reached before the define node;
that is, define ⊀ use. If the data flow information reveals that the
definition of x in the worker thread should indeed be able to reach
the use of x in the manager thread, then we should attempt to find
a path coverage that will test this pair. The current path coverage
does not accomplish this.
4.1 Test Coverage Classification
The examples motivate a classification of all-du-path coverage. In
particular, we classify each du-path coverage generated by an algorithm
for producing all-du-path coverage of a parallel program as
acceptable or unacceptable, and w-runnable or non-w-runnable.
4.1.1 Acceptability of a du-path coverage
We call a set of paths PATH an acceptable du-path coverage, denoted
as PATH_a, for the du-pair (define, use) in a parallel program
free of infeasible paths of the sequential programming kind (see the
later section on infeasible paths), if all of the following conditions
are satisfied:

1. define ∈_p PATH; use ∈_p PATH,
2. ∀ wait nodes w ∈_p PATH, ∃ a post node p ∈ MP(w), such that p ∈_p PATH,
3. if ∃ (post, wait) ∈ E_S such that define ≺ post ≺ wait ≺ use,
then post, wait ∈_p PATH,
4. ∀ n ∈_p PATH where (m, n) ∈ E_T, m ∈_p PATH.
These conditions ensure that the definition and use are included in
the path, and that any (post,wait) edge between the threads containing
the definition and use, and involved in the data flow from the
definition to the use are included in the path. Moreover, for each
sink of a thread creation edge, the associated source of the thread
creation edge is also included in the path. If any of these conditions
is violated, then the path coverage is considered to be unacceptable.
For instance, if only the wait is covered in a path coverage and a
matching post is not, the path coverage is not a PATH_a. Figure 3,
where the define and use are covered in reverse order, shows another
instance that only satisfies the first two conditions, but fails to satisfy
the third condition.
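The purely structural conditions are mechanically checkable once the paths are in hand. The sketch below (all names ours) tests conditions 1, 2, and 4; condition 3 additionally needs the ≺ ordering, which is not purely structural and is omitted here.

```python
def is_acceptable(paths, define, use, MP, thread_creation_edges):
    """Check structural PATH_a conditions 1, 2, and 4 for a du-pair.

    paths: iterable of per-thread node sequences;
    MP: wait node -> set of matching posts;
    thread_creation_edges: E_T as (source, sink) pairs.
    """
    covered = {n for p in paths for n in p}          # n in_p PATH
    if define not in covered or use not in covered:  # condition 1
        return False
    for w in MP:                                     # condition 2
        if w in covered and not (MP[w] & covered):
            return False
    for src, sink in thread_creation_edges:          # condition 4
        if sink in covered and src not in covered:
            return False
    return True

# The wait "w" is covered but no matching post is, so this
# coverage is unacceptable (condition 2 fails):
MP = {"w": {"p"}}
E_T = [("pcreate", "t2")]
bad = is_acceptable([["begin", "pcreate", "d"], ["t2", "w", "u"]],
                    "d", "u", MP, E_T)
```

Adding the post `p` to the first thread's path makes the same coverage acceptable under these three conditions.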
4.1.2 W-runnability of a du-path coverage
We have seen through the examples that a parallel program may
cause infinite wait under a given path coverage, even when the
du-path coverage is acceptable. If a path coverage can be used to
generate a test case that does not cause an infinite wait in any thread,
we call the path coverage a w-runnable du-path coverage. When
a PATH is w-runnable, we represent it as PATH_w. Although we
call a PATH w-runnable, we are not claiming that a PATH_w is free
of errors, such as race conditions, or synchronization errors. More
formally, a PATH_a is w-runnable if all of the following additional
conditions are satisfied:

1. For each instance of a wait, w_t ∈_p PATH (possibly represented
by the same node n_t ∈_p PATH), ∃ an instance of a post, p_s, such
that p_s ≺ w_t. (An instance of a wait or post
is one execution of the wait or post; there may be multiple
instances of the same wait or post in the program.)

2. There exist no post nodes in PATH whose execution can be
blocked indefinitely; that is, the generated path is free of deadlock.
The first condition ensures that, for each instance of a wait in the
PATH, there is a matching instance of a post. However, it is not
required that for every instance of post, a matching wait is covered.
In other words, the following condition is not required: ∀ post nodes
p ∈_p PATH, ∃ a wait node w ∈ MW(p), such that w ∈_p PATH. The
second condition ensures that the generated path is free of deadlock.
We can develop algorithms to find PATH_a automatically. However,
we utilize user interaction in determining PATH_w in the more
difficult cases, and sometimes have to indicate to the user that we
cannot guarantee that the execution will terminate on a given test
case (i.e., path coverage). In this case, the program can still be run,
but may not terminate. We will find a PATH_a and report that this
path coverage may cause an infinite wait.
4.2 Infeasible Paths
Infeasible paths in a graph representation of a program are paths that
will never be executed given any input data. In control flow graphs
for a single thread, infeasible paths are due to data dependencies and
conditionals. In interprocedural graph structures, infeasible paths
are due to calling a function from multiple points. These kinds
of infeasible paths can occur in sequential programs, and can also
occur in parallel programs. In a parallel program, another kind of
infeasible path can also occur due to synchronization dependencies.
Infeasible paths due to synchronization dependencies can cause
deadlock or infinite wait at run-time.
Like most path finding algorithms, we assume that the paths we
identify are feasible with respect to the first causes. With regard to
infeasible paths due to synchronizations, our work uses a slightly
different characterization of paths, PATH_a and PATH_w. Our du-path
coverage algorithm finds paths in a way to guarantee that we will
have matching synchronizations included in the final paths; that is,
it finds paths that are PATH_a. However, a deadlock situation could
occur for a path coverage that is a PATH_a, but not PATH_w. To
guarantee finding matching synchronizations, we currently assume
that matching post and wait operations both appear in a program. If
a program contains a post and no matching wait or vice versa, we
expect that the compiler will report a warning message prior to the
execution of our algorithm.
5 Related Work
In the context of sequential programs, several researchers have examined
the problems of generating test cases using path finding as
well as finding minimum path coverage [3, 11, 1]. All of these
methods for finding actual paths focus on programs without parallel
programming features and, therefore, cannot be applied directly to
finding all-du-path coverage for parallel programs. However, we
have found that the depth-first search approach and the approach of
using dominator and post-dominator trees can be used together with
extension to provide all-du-path coverage for parallel programs. We
first look at their limitations for providing all-du-path coverage for
parallel programs when used in isolation.
Gabow, Maheshwari, and Osterweil [3] showed how to use depth-first
search (DFS) to find actual paths that connect two nodes in a
sequential program. When applying DFS alone to parallel programs,
we claim that it is not appropriate even for finding PATH_a, not to
mention PATH_w. The reason is that although DFS can be applied
to find a set of paths for covering a du-pair, this approach does
not cope well with providing coverage for any intervening waits,
and the corresponding coverage of their matching posts as required
to find PATH_a. For example, consider a situation where there are
more wait nodes to be included while completing the partial path
for covering the use node. Since the first path is completed and a
matching post is not included in the original path, the first path must
be modified to include the post. This is not a straightforward task,
and becomes a downfall of using DFS in isolation for providing
all-du-path coverage for parallel programs.
Bertolino and Marrè have developed an algorithm (which we call
DT-IT) that uses dominator trees (DT) and implied trees (IT) (i.e.,
post-dominator trees) to find a path coverage for all branches in a
sequential program [1]. A dominator tree is a tree that represents
the dominator relationship between nodes (or edges) in a control
flow graph, where a node n dominates a node m in a control flow
graph if every path from the entry node of the control flow graph to
must pass through n. Similarly, a node m postdominates a node p
if every path from p to the exit node of the control flow graph passes
through m.
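Dominator sets (and, by running the same computation on the reversed graph from the exit node, postdominator sets) can be computed with the classic iterative dataflow algorithm; the tree is then read off from the immediate dominators. A compact sketch, with a hypothetical graph encoding:

```python
def dominators(succ, entry):
    """Iteratively compute dom(n): the set of nodes dominating n.

    succ: node -> list of successors; entry: the entry node.
    Running this on the reversed graph from the exit node yields
    postdominators instead.
    """
    nodes = set(succ) | {m for ms in succ.values() for m in ms}
    pred = {n: set() for n in nodes}
    for n, ms in succ.items():
        for m in ms:
            pred[m].add(n)
    dom = {n: set(nodes) for n in nodes}   # start from "everything"
    dom[entry] = {entry}
    changed = True
    while changed:
        changed = False
        for n in nodes - {entry}:
            # a node is dominated by itself plus whatever dominates
            # every one of its predecessors
            new = {n} | (set.intersection(*(dom[p] for p in pred[n]))
                         if pred[n] else set())
            if new != dom[n]:
                dom[n], changed = new, True
    return dom

# Diamond: entry -> a -> c, entry -> b -> c
succ = {"entry": ["a", "b"], "a": ["c"], "b": ["c"], "c": []}
dom = dominators(succ, "entry")
```

In the diamond, neither a nor b dominates c, so only the entry node (and c itself) ends up in dom(c), which is the property DT-IT exploits when it picks unconstrained edges.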
The DT-IT approach finds all-branches coverage for sequential programs
as follows. First, a DT and an IT are built for each sequential
program. Edges in the intersection of the set of all leaves in DT and
IT, defined as unconstrained edges, are used to find the minimum
path coverage based on the claim that if all unconstrained edges are
covered by at least one path, all edges are covered. The algorithm
finds one path to cover each unconstrained edge. When one edge
is selected, one sub-path is found in DT as well as in IT. When one
node and its parent node in DT or IT are not adjacent to each other
in the control flow graph of the program, users are allowed to define
their own criteria for connecting these two nodes to make the path.
The two sub-paths, one built using DT and the other built using IT,
are then concatenated together to derive the final path coverage.
If we try to run this algorithm to find all-du-path coverage for parallel
programs, we need to find a path coverage for all du-pairs instead of
all-edges, which is a minor modification. However, this approach
will also run into the same problem as in DFS. That is, if some post
or wait is reached when we are completing a path, we need to adjust
the path just found to include the matching nodes. In addition, we
will run into another problem regarding the order in which the define
and use nodes are covered in the final path. For instance, in Figure
3, an incorrect path coverage will be generated using the DT-IT
approach alone. The final path will have define ⊀ use. Thus, using
this method alone cannot guarantee that we find a PATH_a.
Yang and Chung [20] proposed a model to represent the execution
behavior of a concurrent program, and described a test execution
strategy, testing process and a formal analysis of the effectiveness
of applying path analysis to detect various faults in a concurrent
program. An execution is viewed as involving a concurrent path
(C-path), which contains the flow graph paths of all concurrent
tasks. The synchronizations of the tasks are modelled as a concurrent
route (C-route) to traverse the concurrent path in the execution,
by building a rendezvous graph to represent the possible rendezvous
conditions. The testing process examines the correctness of each
concurrent route along all concurrent paths of concurrent programs.
Their paper acknowledges the difficulty of C-path generation; how-
ever, the actual methodologies for the selection of C-paths and
C-routes are not presented in the paper.
6 A Hybrid Approach
In this section, we describe our extended "hybrid" approach to find
the actual path coverage of a particular du-pair in a parallel program.
There are actually two disjoint sets of nodes in a path used to cover
a du-pair in a parallel program: required nodes and optional nodes.
The set of required nodes includes the pthread_create() calls as well
as the define node and use node to be covered, and the associated post
and wait with which the partial order define ≺ use is guaranteed.
All other nodes on the path are optional nodes for which partial
orders among them are not set by the requirements for a PATH_a.
However, if a wait is covered by the path, a matching post must
be covered. For instance, in Figure 6, the nodes 2, 4, 7, 25, and
26 are required nodes, whereas all other synchronization nodes are
optional. Among the required nodes, the partial orders are uniquely
identified, whereas the partial orders among the optional nodes are
not. For example, it is acceptable to include either post_3 or post_4
first in a path coverage. We can even include wait_1 later than post_4
in a PATH_a.
The DFS approach is most useful for finding a path that connects two
nodes whose partial order is known. The DT-IT approach is most
appropriate for covering nodes whose partial order is not known in
advance. Therefore, DFS is most useful for finding a path between
the required nodes, whereas the DT-IT approach is most useful for
ensuring that the optional nodes are covered.
Our algorithm consists of two phases. During the first phase, called
the annotate phase, the depth-first search (DFS) approach is employed
to cover the required nodes in the PPFG. Then, the DT-IT
approach is used to cover the optional nodes. After a path to cover
a node is found, all nodes in the path are annotated with a traversal
control number (TRN). In the second phase, called the path generation
phase, the actual path coverage is generated using the traversal
control annotations. We first describe the data structures utilized in
the du-pair path finding algorithm, and then present the details of
the algorithm.
The algorithm assumes that the individual du-pairs of the parallel
program have been found. Previous work computing reaching
definitions for shared memory parallel programs has been done by
Grunwald and Srinivasan [5].
6.1 Data Structures
The main data structures used in the hybrid algorithm are: (1)
a PPFG, (2) a working queue per thread to store the post nodes
that are required in the final path coverage, (3) a traversal control
number (TRN) associated with every node, used to decide which
nodes must be included in the final path coverage and how many
iterations are required for a path through a loop, (4) a reverse post-order
number (RPO) for each node in the PPFG, used in selecting
a path at loop nodes, (5) a decision queue per if-node, and (6) one
path queue per thread to store the resulting path.
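A concrete realization of these per-node and per-thread structures might look as follows; this is a sketch with field names of our own choosing, not the tool's actual implementation.

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str
    kind: str                 # 'post', 'wait', 'if', 'loop', 'stmt', ...
    trn: int = 0              # (3) traversal control number
    rpo: int = 0              # (4) reverse post-order number
    decision_queue: deque = field(default_factory=deque)  # (5) if-nodes

@dataclass
class Thread:
    working_queue: deque = field(default_factory=deque)   # (2) posts
    path_queue: list = field(default_factory=list)        # (6) result

# Phase 1 annotates nodes like this:
n = Node("4:if", "if")
n.trn += 1                    # node must be traversed once in phase 2
n.decision_queue.append(5)    # take the branch leading to node 5
```

Keeping the decision queue on the if-node itself means phase 2 never has to re-derive branch choices; it simply consumes what phase 1 recorded.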
6.2 The Du-path Finding Algorithm
We describe the du-path finding algorithm with respect to finding
du-pairs in which the define and use are located in different threads.
The handling of du-pairs with the define and use in the same thread
is a simplification of this algorithm. Figure 4 contains the
annotate_the_graph() algorithm, which accomplishes the annotate phase.
The traverse_the_graph() algorithm, shown in Figure 5, traverses the
PPFG and generates the final du-path coverage. We describe each
step of these algorithms in more detail here.
Phase 1: Annotating the PPFG.
Step 1. Initialize the working and decision queues to empty, and set
TRN of each node to zero.
Step 2. Use DFS to find a path from the pthread_create of the thread
containing the define node to the define node, and then from the
Algorithm annotate_the_graph()
Input: A DU-pair, and a PPFG
Output: Annotated PPFG

1. Initialize TRN's, decision queues, and working queues;
2. Find a path to cover pthread_create and define nodes using dfs;
   From the define node, search for the use node using dfs;
3. Complete the two sub-paths using DT-IT.
4. For each node in the complete paths:
     Increment TRN by one;
     If node is a WAIT,
       Add matching nodes into appropriate working queues;
     If node is an if-node,
       Add the successor node in the path into decision queue;
5. /* process the synchronization nodes */
   while ( any working queue not empty )
   { For each thread, if working queue not empty
     {
       Remove one node from the working queue;
       if the node's TRN is zero
       {
         Find a path to cover this node;
         For each node in the complete path:
           Increment TRN by one;
           If node is a WAIT,
             Add matching nodes into appropriate working queues;
           If node is an if-node,
             Add the successor node in the path into decision queue;
       }
     }
   }

Figure 4: Phase 1: Annotate the graph.
define node to the use node. When a post node is found in the path,
a matching wait is placed as the next node to be traversed, and the
search for the use node continues. Upon returning from each DFS()
call after a wait is traversed, return to the matching post before continuing
the search for the use node if not yet found.
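One way to realize this step is a DFS over the PPFG in which a post node additionally offers its matching waits as successors, so the search can cross into the thread containing the use node. The sketch below is a simplification of our own (it returns only the found path; the full algorithm also resumes at the post as described above):

```python
def find_path(succ, sync, start, target, path=None, seen=None):
    """DFS from start to target over intra-thread edges (succ),
    crossing post->wait synchronization edges (sync) when present."""
    path = (path or []) + [start]
    seen = seen or set()
    if start == target:
        return path
    seen.add(start)
    # matching waits are tried before the post's ordinary successors
    for nxt in sync.get(start, []) + succ.get(start, []):
        if nxt not in seen:
            found = find_path(succ, sync, nxt, target, path, seen)
            if found:
                return found
    return None

# Two thread fragments: define d -> post p in one thread,
# wait w -> use u in the other; p matches w.
succ = {"d": ["p"], "p": ["end1"], "w": ["u"]}
sync = {"p": ["w"]}
path = find_path(succ, sync, "d", "u")
```

Here the search reaches the use only by taking the synchronization edge p → w, which is exactly the jump Step 2 prescribes.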
Step 3. Apply DT-IT to complete the sub-paths found in Step 2. To
complete the sub-path in the thread containing the define node, use
the dominator tree of the define node and the post-dominator tree of
the post node that occurs after the define node in the sub-path just
found. Similarly, to complete the sub-path in the thread containing
the use node, use the dominator tree of a matching wait of the post
node and the post-dominator tree of the use node.
Step 4. For each node covered by either of these two paths,
(1) increment the node's TRN by one to indicate that the node should
be traversed at least once.
(2) If the node is a wait, add a matching post into the working queue
of the thread where the post is located.
(3) If the node is an if-node, add the Reverse Post-Order Number
(RPO) of the successor node within the path into the if-node's
decision queue to ensure the correct branch selection in phase 2.
Step 5. While any working queue is not empty, remove one post
node from a thread's working queue, and find a path to cover the
node. Increment the TRN of the nodes in that path. In this way,
the TRN identifies the instances of each node to be covered. This
is particularly important in finding a path coverage for nodes inside
loops, where it might be necessary to traverse some loop body
nodes several times to ensure that branches inside the loop are covered
appropriately. Process wait and if-nodes in this path as in Step
4.
Algorithm traverse_the_graph()
Input: An annotated PPFG
Output: A DU-path

For all threads
{
  current = begin node of the thread;
  while ( current node's TRN > 0 and
          current is not the end node )
  {
    add the current node to the result DU-path;
    decrement TRN of current node by one;
    if ( current is an if-node )
    {
      current = first node from decision queue;
      delete the first node in the queue;
    }
    else if ( current is a loop node )
      current = successor with smallest non-zero TRN,
                ties broken by smallest RPO;
    else
      current = successor node of current;
  }
}

Figure 5: Phase 2: Generate the du-path coverage
Phase 2: Generating a du-path. For each thread, perform the
following steps:
Step 1. Let n be the begin node of the thread.
Step 2. While n's TRN > 0 and n is not the end node,
Add n to the path queue, which contains the resulting path coverage,
and decrement n's TRN. If n is an if-node, then let the new n be the
node removed from n's decision queue. Otherwise, if n is a loop
node, the successor with the smallest non-zero TRN is chosen to be
the new n. If the children have the same TRN, then the child with
the smallest RPO is chosen. Otherwise, if n is not an if-node or loop
node, let new n be the successor of n.
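Phase 2 as described above can be sketched directly in code; the graph and annotation encoding below mirrors the data structures of Section 6.1 but is otherwise our own assumption.

```python
def generate_path(succ, begin, end, trn, rpo, decision, kind):
    """Walk one thread's annotated flow graph, consuming TRN's.

    trn/rpo: per-node annotations from phase 1; decision: if-node ->
    list of recorded successors; kind: node -> 'if' | 'loop' | other.
    """
    path, n = [], begin
    while trn[n] > 0 and n != end:
        path.append(n)
        trn[n] -= 1
        if kind.get(n) == "if":
            n = decision[n].pop(0)   # replay the branch chosen in phase 1
        elif kind.get(n) == "loop":
            live = [s for s in succ[n] if trn[s] > 0]
            # smallest non-zero TRN, ties broken by smallest RPO
            n = min(live, key=lambda s: (trn[s], rpo[s]))
        else:
            n = succ[n][0]
    if n == end:
        path.append(n)
    return path

# A thread: b -> if i -> {x | y} -> e, with the branch to x
# recorded in the if-node's decision queue during phase 1.
succ = {"b": ["i"], "i": ["x", "y"], "x": ["e"], "y": ["e"], "e": []}
trn = {"b": 1, "i": 1, "x": 1, "y": 0, "e": 1}
rpo = {"b": 1, "i": 2, "x": 3, "y": 4, "e": 5}
path = generate_path(succ, "b", "e", trn, rpo, {"i": ["x"]}, {"i": "if"})
```

Because the untaken branch y has TRN 0, the walk deterministically reproduces the path recorded in phase 1.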
6.3 Examples
In this section, we use two examples to illustrate the hybrid ap-
proach. The first example illustrates generating a PATHw , while the
second example illustrates generating a (non-PATHw ) PATH a . Both
examples cover the du-pair with the define of X at node 4 and the
use of X at node 26 in Figure 6.
Example 1: Generating a PATH_w.
During the second step of the first phase, the required nodes, including
the pthread_create, define, post_2, wait_2, and the use nodes, are
included in a partial path. The identified partial path is 2-3-4-5-7-
25-26. During the third step of the first phase, the two sub-paths are
completed, using the DT-IT approach. The two identified complete
paths are 1-2-3-4-5-7-8-9-3-11 for manager and 21-22-23-25-26-...
for worker_1. The TRN for every node along the two
paths equals 1 after step 4 except the loop node 22, for which the
TRN is 2. When node 9 was reached during this traversal, nodes 28
and 35 were put into the working queues for worker_1 and worker_2,
respectively. When node 28 is taken out of the working queue
in step 5, it is found to have a nonzero TRN, and thus no more
paths are added. When node 35 is taken out of the working queue,
the TRN is zero. Hence, the path 31-32-33-34-35-32-36 is found
to cover node 35. With the annotated PPFG as input, the second
phase finds a final path of 1-2-3-4-5-7-8-9-3-11 for manager, 21-...
for worker_1, and 31-32-33-34-35-32-36 for worker_2.
[Figure 6: Example of the path finding algorithm. A manager thread and two worker threads (worker_1, worker_2) are shown, with pthread_create, POST, and WAIT nodes and the final TRN labels on each node.]
Example 2: Generating a non-PATH_w.
During the second step of the first phase, the required nodes, including
the pthread_create, define, post_2, wait_2, and the use nodes, are
included in a partial path identified as 2-3-4-5-7-25-26. During the
third step of the first phase, the two sub-paths are completed, and
found to be 1-2-3-4-5-7-8-9-3-11 for manager and 21-22-23-25-26-...
for worker_1. The TRN for every node along the two
paths equals 1 after step 4 except for the loop node 22; the TRN for
node 22 equals 2. When node 9 was reached during this traversal,
nodes 28 and 35 were put into the working queues. Similarly, during
the fifth step of the first phase, two paths, 21-22-23-25-26-27-28-22-...
and 31-32-33-34-35-32-36, are found to cover nodes 28 and 35,
respectively. The final TRN's for this example label each node in
Figure 6. The second phase finds final paths of 1-2-3-4-5-7-8-9-3-11
for manager, 21-22-23-25-26-27-29-22-23-25-26-27-28-22-30 for
worker_1 and 31-32-33-34-35-32-36 for worker_2. This set of paths
is not w-runnable because worker_1 has an infinite wait (at node 25).
It should be noted that regardless of the path constructed, the user
will have to validate that the w property holds.
6.4 Correctness and Complexity
Given a du-pair in a parallel program where the define node and
the use node are located in two different threads, we show that
this algorithm indeed will terminate and find a PATH_a. We first
introduce some lemmas before we give the final proof.
Lemma 1: TRN preserves the number of required traversals of each
node within a loop body.
During the first phase, the TRN of a node is incremented by one
each time a path is generated that includes that node. Therefore,
the number of traversals of each node in paths found during the first
phase is preserved by the TRN. Although the number of traversals
during the first phase is preserved, we are not claiming that these
nodes will indeed be traversed during the second phase that same
number of times. For nodes outside of a loop body, each node will
be traversed at most as many times as its TRN. But a node may not
need to be traversed that many times because the path generation
phase may reach the End node before the TRN of all nodes becomes
zero. Moreover, if there is no loop node in a program, only required
nodes will be traversed as many times as the TRN indicates.
Lemma 2: The decision queue and TRN of an if-node guarantee
that the same sequence of branches selected during the first phase
will be selected during the second phase.
When an if-node is found in a path during the first phase, one branch
is stored into the decision queue at that time. Hence, the number
of branches in the decision queue of a given if-node is equal to the
TRN of that if-node. Each time the if-node is traversed during the
second phase, one branch is taken out of the decision queue and the
TRN of the if-node is decremented by one. Therefore, the sequence
in which the branches are selected is preserved.
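The record-and-replay discipline of the decision queue and TRN can be sketched in a few lines of Python. This is a hypothetical minimal model for illustration only; the field names trn and queue are ours, not della pasta's actual data structures.

```python
from collections import deque

def annotate(branch, node):
    """Phase 1 (sketch): each time a generated path picks a branch at an
    if-node, record the branch in the node's decision queue and bump its TRN."""
    node["trn"] += 1
    node["queue"].append(branch)

def replay(node):
    """Phase 2 (sketch): each traversal takes the oldest recorded branch and
    decrements the TRN, so the phase-1 branch sequence is reproduced."""
    node["trn"] -= 1
    return node["queue"].popleft()

# a hypothetical if-node traversed three times during phase 1
n = {"trn": 0, "queue": deque()}
for b in ["then", "else", "then"]:
    annotate(b, n)
assert n["trn"] == len(n["queue"]) == 3          # queue length equals TRN
assert [replay(n) for _ in range(3)] == ["then", "else", "then"]
assert n["trn"] == 0
```

The invariant that the queue length always equals the TRN is exactly the observation on which Lemma 2's argument rests.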
Lemma 3: DFS used during the first phase ensures the order
define → post → wait → use in the final generated path.
During the first phase, the required nodes will be marked by DFS
prior to any other nodes in the graph. This ensures that necessary
branches are stored in the decision queues first. By Lemma 2, these
branches will be traversed first during the second phase. Hence,
these nodes will be traversed in the correct order as given by the
relationships above. Therefore, Lemma 3 is valid.
Lemma 4: The working queues and TRN together guarantee the
termination of the Du-path Finding Algorithm.
We must show that both phases terminate.
Phase 1 Termination: We use mathematical induction on m, where
m represents the total number of pairs of synchronization nodes
covered in a path coverage.
Base case (m = 1): there is only one pair of synchronization
calls, the required ones; they will be included in the path generated
by the DFS. The completion of the two partial paths will automatically
terminate since there are no extra post or wait's involved.
Induction step: assume that Lemma 4 is true when m = j, where j is an integer
greater than 1. We need to show that Lemma 4 is also true when
m = j + 1. If the post and wait have been traversed previously, the
TRN of these nodes will be greater than zero. Hence, they will
not be included again during the first phase. When we generate a
new path to cover this pair of post and wait nodes, if they currently
have TRN=0, all other pairs of synchronization nodes will have
been covered (by the induction hypothesis). Hence, this new pair of synchronization
calls will not trigger an unlimited number of actions.
Therefore, the annotation phase will terminate.
Phase 2 Termination: Since the TRN of each node, i.e., the number
of times the node must be traversed, is a finite integer, and the TRN
is decremented each time the node is traversed during phase 2, the
traversal during phase 2 will not
iterate forever. Whenever a node with zero TRN or the End node is
reached, the path generation phase terminates.
Finally, we show the proof of the following theorem.
Theorem 1: Given a du-pair in a shared memory parallel program,
the hybrid approach terminates and finds a PATH_a.
Proof: (1) By Lemma 4, the hybrid approach terminates. (2) To
show that a PATH_a is generated, we must show that the conditions
described in the definition of PATH_a are satisfied. By Lemma 1,
Lemma 2, and Lemma 3, we can conclude that the define, use, the
required post, and wait nodes will be covered in the correct order.
Step 1 of Phase 1 ensures that all appropriate pthread_create calls
are covered. Step 5 ensures that a matching post node for
each wait node included in the path is also covered. Therefore, all
conditions for a PATH_a are satisfied. Q.E.D.
The running time of the hybrid approach includes the time spent
searching for the required nodes and time spent generating the final
path coverage. We assume that the dominator/implied trees and the
du-pairs have been provided by an optimizing compiler.
Theorem 2: For a given G = (V, E) and a du-pair (d, u), the total
running time of the du-path finding algorithm is O(2^k · (|V| + |E|)),
where k denotes the total number of post or wait calls.
Proof: The running time for searching for the required nodes is
O(|V| + |E|). To complete the two partial paths, the running
time is O(2^k · (|V| + |E|)), where k denotes the total number of post
or wait calls. Finally, the second phase takes
time O(2^k · (|V| + |E|)) to finish. Hence, the total running time is
O(2^k · (|V| + |E|)). For a given graph, the number
of edges is usually greater than the number of nodes; then the running time is
O(2^k · |E|). Q.E.D.
7 Other Parallel Paradigms
7.1 Rendezvous communication
Among other researchers, Long and Clarke developed a data flow
analysis technique for concurrent programs[9]. After their data
flow analysis is performed, we can apply a modified version of our
algorithm to find all-du-path coverage for a concurrent program
with rendezvous communication. In particular, we need to modify
the following: (1) construction of the PPFG, (2) definition of path
acceptability, and (3) the all-du-path finding algorithm.
First, to accommodate the request and accept operations in concurrent
programs to achieve rendezvous communication, the PPFG
needs to include a directed edge from a request to an accept node.
Secondly, since the execution of a request is synchronous, the second
condition in the definition of PATH_a must be replaced by the
following two conditions:
2a. ∀ accept nodes a ∈ p_PATH, ∃ a request node r ∈ MR(a), such
that r ∈ p_PATH.
2b. ∀ request nodes r ∈ p_PATH, ∃ an accept node a ∈ MA(r), such
that a ∈ p_PATH.
The set MR(a) is defined as the set of matching requests for the
accept node a; the set MA(r) is defined as the set of matching
accepts for the request node r.
Finally, during the first phase of the algorithm, whenever a request
or an accept is found, the matching node must be added into the
working queue.
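Conditions 2a and 2b can be checked mechanically once the matching sets are known. The following Python sketch is an illustrative rendering only; the MR/MA dictionaries and the node names are hypothetical.

```python
def rendezvous_acceptable(path, MR, MA):
    """Sketch of the modified acceptability conditions: every accept node on
    the path needs a matching request on the path (2a), and every request
    node needs a matching accept on the path (2b). MR maps an accept node
    to its set of matching requests, MA maps a request to its accepts."""
    on_path = set(path)
    ok_2a = all(MR[a] & on_path for a in on_path if a in MR)
    ok_2b = all(MA[r] & on_path for r in on_path if r in MA)
    return ok_2a and ok_2b

# hypothetical graph: request r1 matches accept a1
MR = {"a1": {"r1"}}
MA = {"r1": {"a1"}}
assert rendezvous_acceptable(["r1", "x", "a1"], MR, MA)
assert not rendezvous_acceptable(["a1", "x"], MR, MA)  # accept without its request
```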
7.2 Message Passing Programs
For analyzing message passing programs, a data flow analysis similar
to interprocedural analysis for sequential programs is needed to
compute the define-use pairs across processes. Several researchers
have developed interprocedural reaching definitions data flow analysis
techniques, even in the presence of aliasing in C programs [12, 6].
Although this analysis may find define-use pairs that may not actually
occur during each execution of the program, the reaching
definition information is sufficient for program testing. After this
information is computed, we can apply our algorithm to find all-du-
path coverage for a given C program with message passing library
calls. The Message Passing Interface (MPI) [4] standard is a library
of routines to achieve various types of inter-process communication,
i.e., synchronous or asynchronous send/receive operations.
To find all-du-path coverage for message passing programs, we
need to identify the type of send or receive operations first, i.e.,
synchronous or asynchronous. If the send operation is synchronous,
the definition of a PATH_a must be modified to include both the send
and the matching receive in the path coverage, similar to the change
made for supporting rendezvous-communication parallel programs.
If the send is asynchronous, we only need to replace post by send
throughout this paper. For each synchronous receive operation, we need to
replace the wait by a receive in our algorithms and definitions.
8 The della pasta Tool
The algorithm described in this paper has been incorporated into
della pasta, the prototype tool that we are building for parallel software
testing. The objective is to demonstrate that the process of test
data generation can be partially automated, and that the same tool
can provide valuable information in response to programmer queries
regarding testing. The current major functions of this tool are: (1)
finding all du-pairs in the parallel program, (2) finding all-du-path
coverage to cover du-pairs specified by the user, (3) displaying all-
du-path coverage in the graphic or text mode as specified by the
user, and (4) adjusting a path coverage when desired by the user.
della pasta consists of two major components: the static analyzer
which accepts a file name and finds all du-pairs as well as the
all-du-path coverage for each du-pair, and the path handler which
interacts with the user to display the PPFG, a path coverage, and
accept commands for displaying individual du-pair coverages and
for modifying a path. The static analyzer uses a modified version of
the Grunwald and Srinivasan algorithm [5] to find du-pairs in parallel
programs of this model, and is implemented using the compiler
optimizer generating tool Sharlit, which is part of the SUIF
compiler infrastructure [8]. The path handler is built on top of dflo
which is a data-flow equation visualizing tool developed at Oregon
Graduate Institute. 2
The user interface of della pasta is illustrated in figure 7. On the left
of the screen, the PPFG is illustrated; on the right, the corresponding
textual source code is shown. A user can resize the data flow graph
as desired. The currently selected def-use pair is shown at the top of
the screen. The corresponding du-pair path coverage is depicted in
the PPFG as well as in the text as highlighted nodes and statements,
respectively. Clicking on any node in the PPFG will pop up an extra
window with some information about the node, and allow the user
to modify a path coverage.
In this example, a reader/writer program is illustrated in which
the main thread creates three additional threads: two readers and
one writer. The main thread then acts as one writer itself and
communicates with one of the two readers just created. These two
pairs of readers/writers will work independently in parallel. The
du-pair coverage shown in this example only involves two of the 4
threads in the program.
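The post/wait pattern underlying such cross-thread du-pairs can be mimicked with Python's threading module. This is an illustrative stand-in, not the pthreads code analyzed by della pasta: the writer thread defines the shared variable and posts an event, the reader waits on it before the use.

```python
import threading

shared = {}
ready = threading.Event()          # plays the role of the event in post/wait

def writer():
    shared["x"] = 42               # define node
    ready.set()                    # post node

def reader(out):
    ready.wait()                   # wait node
    out.append(shared["x"])        # use node: the du-pair crosses two threads

result = []
threads = [threading.Thread(target=writer),
           threading.Thread(target=reader, args=(result,))]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert result == [42]
```

Covering this du-pair requires a path on which the post is reached before the wait; dropping the `ready.set()` call would produce exactly the infinite-wait situation a non-w-runnable path coverage exhibits.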
(The dflo tool can be downloaded from the Internet; refer to the
web site http://www.cse.ogi.edu:80/Sparse/dflo.html for details.)

Figure 7: della pasta user interface

We are currently extending della pasta to use the du-pair coverage
information already available through our static analyzer to answer
queries of the following kind: Will the test case execute successfully
without infinite wait caused by the path coverage? What other
du-pairs does a particular path coverage cover? We are also incorporating
our temporal testing techniques [17] into the tool in
order to provide testing aid for delayed execution in addition to the
traditional all-du-path testing.
9 Summary and Future Work
To our knowledge, this is the first effort to apply a sequential testing
criterion to shared memory or message passing parallel programs.
Our contributions include sorting out the problems of providing
all-du-path coverage for parallel programs, classifying coverages,
identifying the limitations of current path coverage techniques in
the realm of parallel programs, developing an algorithm that successfully
finds all-du-path coverage for shared memory parallel
programs, showing that it can be modified for message passing
and rendezvous communication, and demonstrating its effectiveness
through implementation of a testing tool.
The all-du-path coverage algorithm presented in this paper has some
limitations. The all-du-path algorithm requires that a PPFG be
constructed statically. If a PPFG cannot be constructed statically
to represent the execution model of a program, the analysis that
constructs the du-pairs may not produce meaningful du-pairs. Thus,
the number of worker threads is currently assumed to be known
at static analysis time. In the case where a clear operation is used
to clear an event before or after the wait is issued, our analysis will
report more du-pairs than needed. In testing, this only implies that
we indicate more test cases than really needed.
We are in the process of examining these limitations, while experimentally
analyzing the effectiveness of fault detection for parallel
programs using the all-du-paths criterion with della pasta, and
investigating other structural testing criteria for testing parallel programs
Acknowledgements
We would like to thank Barbara Ryder for her helpful comments in
preparing the final paper.
"The views and conclusions contained in this document are those
of the authors and should not be interpreted as representing the
official policies, either expr essed or implied, of the Army Research
Laboratory or the U.S. Government."
References
A system to generate test data and symbolically execute programs.
On two problems in the generation of program test paths.
Using MPI: Portable Parallel Programming with the Message Passing Interface.
Data flow equations for explicitly parallel programs.
Efficient computation of interprocedural definition-use chains
Automated software test data generation.
Introduction to the SUIF compiler system.
Data flow analysis of concurrent systems that use the rendezvous model of synchronization.
On path cover problems in digraphs and applications to program testing.
Interprocedural def-use associations for C systems with single level pointers
Testing of concurrent software.
Structural testing of concurrent programs.
A formal framework for studying concurrent program testing.
The evaluation of program-based software test data adequacy criteria
The challenges in automated testing of multithreaded programs.
An algorithm for all-du-path testing coverage of shared memory parallel programs
Path analysis testing of concurrent programs.
The Static Parallelization of Loops and Recursions

Abstract: We demonstrate approaches to the static parallelization of loops and recursions on the example of the polynomial product. Phrased as a loop nest, the polynomial product can be parallelized automatically by applying a space-time mapping technique based on linear algebra and linear programming. One can choose a parallel program that is optimal with respect to some objective function like the number of execution steps, processors, channels, etc. However, at best, linear execution time complexity can be attained. Through phrasing the polynomial product as a divide-and-conquer recursion, one can obtain a parallel program with sublinear execution time. In this case, the target program is not derived by an automatic search but given as a program skeleton, which can be deduced by a sequence of equational program transformations. We discuss the use of such skeletons, compare and assess the models in which loops and divide-and-conquer recursions are parallelized, and comment on the performance properties of the resulting parallel implementations.

1 Introduction
We give an overview of several approaches to the static parallelization of loops and recursions 1 ,
which we pursue at the University of Passau. Our emphasis in this paper is on divide-and-
conquer recursions.
Static parallelization has the following benefits:
1. Efficiency of the target code. One avoids the overhead caused by discovering parallelism
at run time and minimizes the overhead caused by administering parallelism at run
time.
2. Precise performance analysis. Because the question of parallelism is settled at compile
time, one can predict the performance of the program more accurately.
3. Optimizing compilation. One can compile for specific parallel architectures.
One limitation of static parallelization is that methods which identify large amounts of
parallelism usually must exploit some regular structure in the source program. Mainly, this
structure concerns the dependences between program steps, because the dependences impose
an execution order. Still, after a source program has been "adapted" to satisfy the requirements
of the parallelization method, the programmer need think no more about parallelism
but may simply state his/her priorities in resource consumption and let the method make all
the choices.

1 We can equate loops with tail recursions, as is done in systolic design [28, 36].
We illustrate static parallelization methods for recursive programs on the example of the
polynomial product. We proceed in four steps:
Sect. 2. We provide a specification of the polynomial product. This specification can be
executed with dynamic parallelism. The drawback is that we have no explicit control
over the use of resources.
Sect. 3. We refine the specification to a double loop nest with additional dependences. We
parallelize this loop nest with the space-time mapping method based on the polytope
model [29]. This method searches automatically a large number of possible parallel
implementations, optimizing with respect to some objective function like the number of
execution steps, processors, communication channels, etc.
Sect. 4. We refine the specification to a divide-and-conquer (D&C) algorithm which has
fewer dependences than the loop nest. This is the central section of the paper. For
D&C, parallelization methods are not as well understood as for nested loops. Thus, one
derives parallel implementations by hand, albeit formally, with equational reasoning.
However, most of the parallelization process is problem-independent. The starting point
is a program schema called a skeleton [9]. We discuss two D&C skeletons, instantiated
to the polynomial product, and their parallelizations:
Subsect. 4.1. The first is a skeleton for call-balanced fixed-degree D&C, which we
parallelize with an adapted space-time mapping method based on the method for
nested loops [25]. The target is again a parallel loop nest, which can also be
represented as an SPMD program.
Subsect. 4.2. The second skeleton is a bit less general. It is parallelized based on the
algebraic properties of its constituents [20]. It is used to generate coarser-grained
parallelism in the form of an SPMD program.
In this paper, we are mainly comparing and evaluating. The references cited in the individual
sections contain the full details of the respective technical development. Our comparison is
concerned with the models and methods used in the parallelization and with the asymptotic
performance of the respective parallel implementations.
2 The polynomial product
Our illustrating example is the product of two polynomials A and B of degree n, specified in
the quantifier notation of Dijkstra [14]:

    A · B = ⟨Σ k : 0 ≤ k ≤ 2·n : ⟨Σ i, j : i + j = k ∧ 0 ≤ i ≤ n ∧ 0 ≤ j ≤ n : a_i · b_j⟩ · x^k⟩    (1)

Let us name the product polynomial C:

    ⟨∀ k : 0 ≤ k ≤ 2·n : c_k = ⟨Σ i, j : i + j = k ∧ 0 ≤ i ≤ n ∧ 0 ≤ j ≤ n : a_i · b_j⟩⟩    (2)
Note that this specification does not prescribe a particular order of computation for the
cumulative sums which define the coefficients of the product polynomial.
We can make this specification executable without having to think any further. A simple
switch of syntax to the programming language Haskell [40] yields:

c a b = [ sum [ a!!i * b!!(k-i) | i <- [max 0 (k-n) .. min n k] ] | k <- [0 .. 2*n] ]
        where n = length a - 1
Haskell will reduce the sums in some total order, given by its sequential semantics, or in
some partial order, given by its parallel semantics. The programmer pays for the benefit of not
having to choose the order with a lack of control over the use of resources in the computation.
The main resources are time (the length of execution) and space (the number of processors),
others are the number of communication channels, the memory requirements, etc.
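The specification fixes only the multiset of summands per coefficient, not the order in which they are accumulated. A Python transcription (an illustrative sketch of specification (2), not the paper's Haskell) makes this concrete; here Python's built-in sum happens to fix one order.

```python
def polyprod(a, b):
    """Coefficients of the product of two degree-n polynomials, given as
    coefficient lists from low to high: c_k is the sum of a_i * b_j over
    all i + j = k with both indices in range."""
    n = len(a) - 1
    return [sum(a[i] * b[k - i]
                for i in range(max(0, k - n), min(n, k) + 1))
            for k in range(2 * n + 1)]

# (1 + x) * (-1 + x) = -1 + x^2
assert polyprod([1, 1], [-1, 1]) == [-1, 0, 1]
```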
3 A nested loop program
3.1 From the source program to the target program
To apply loop parallelization, one must first impose a total order on the reductions in the
specification. This means adding dependences to the dependence graph of the specification.
The choice of order may influence the potential for parallelism, so one has to be careful. We
choose to count the subscript of A up and that of B down; automatic methods can help in
exploring the search space for this choice [3]. Another change we make is that we convert the
updates of c to single-assignment form, which gives rise to a doubly indexed variable c̄; there
are also automatic techniques for this kind of conversion [15]. This leads to the following
program, in which the elements c̄[min(k, n), k − min(k, n)] contain the final values of the
coefficients c_k of the product polynomial:

for i := 0 to n do
  for j := n downto 0 do
    c̄[i, j] := (if i = 0 or j = n then 0 else c̄[i−1, j+1]) + a[i] · b[j]
The dependence graph of a loop nest with affine bounds and index expressions forms a
polytope, in which each loop represents one dimension. The vertices of graphs of this form
can be partitioned into temporal and spatial components. This is done by linear algebra and
integer linear programming. In our example, the choices are fairly obvious.
Consider Fig. 1 for the polynomial product. The upper left shows the polytope on the
integer lattice; each dot represents one loop step. The upper right is the dependence graph;
only the dependences on c̄ are shown, since only c̄ is updated. The lines on the lower left
represent "time slices": all points on a line can be executed in parallel. This choice is minimal
with respect to execution time. Note that dependence arrows must not be covered by these
lines! The lines in the lower right represent processors: a line contains the sequence of
loop steps executed by a fixed processor. These lines may not be parallel to the temporal
lines. We have chosen them to cover the dependences, i.e., we have minimized the number of
communication channels between processors (to zero).

Figure 1: The polytope and its partitionings (upper left: source polytope; upper right: data
dependences; lower left: temporal partitioning; lower right: spatial partitioning)

synchronous program:
seqfor t := 0 to n do
  parfor p := t to t + n do
    ...

asynchronous program:
parfor p := 0 to 2·n do
  seqfor t := max(0, p − n) to min(n, p) do
    ...

Figure 2: The target polytope and the target programs
The partitionings in time and space can be combined to a space-time mapping, an affine
transformation to a coordinate system in which each axis is exclusively devoted to time (i.e.,
represents a sequential loop) or space (i.e., represents a parallel loop). This is like adjusting a
"polarizing filter" on the source polytope to make time and space explicit. Fig. 2 depicts the
target polytope and two target programs derived from it. Again, we have a choice of order,
namely the order of the loops in the nest. If the outer loop is in time and the inner loop in
space, we have a synchronous program with a global clock, typical in the SIMD style. To
enforce the clock tick, we need a global synchronization after each step of the time loop. If we
order the loops vice versa, we have an asynchronous program, with a private clock for each
processor, typical in the SPMD style. Here, we need communication statements to enforce the
data dependences, but we have chosen the spatial partitioning such that no communications
are required, at the expense of twice the number of processors necessary. In both programs,
the code of the loop body is the same as in the source program; only the indices change,
because the inverse of the space-time mapping must be applied: (i, j) = (t, p − t).
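The effect of such a space-time mapping can be emulated sequentially. The sketch below assumes the mapping t = i, p = i + j, which is one valid choice consistent with the loop bounds of Fig. 2 (an assumption for illustration); its inverse is then (i, j) = (t, p − t). Within one time slice t all iterations are independent, so the parfor is safe.

```python
def polyprod_spacetime(a, b):
    """Emulation of the synchronous target program: the outer loop enumerates
    time slices, the inner loop enumerates processors (simulated sequentially
    here; on a real machine its iterations would run in parallel)."""
    n = len(a) - 1
    c = [0] * (2 * n + 1)
    for t in range(n + 1):             # seqfor t := 0 to n
        for p in range(t, t + n + 1):  # parfor p := t to t + n
            i, j = t, p - t            # inverse space-time mapping
            c[i + j] += a[i] * b[j]
    return c

# (1 + x) * (-1 + x) = -1 + x^2
assert polyprod_spacetime([1, 1], [-1, 1]) == [-1, 0, 1]
```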
3.2 Complexity considerations
The most interesting performance criteria-at least the ones we want to consider here-are
execution time, number of processors and overall cost. The cost is defined as the product of
execution time and number of processors. One is interested in cost maintenance, i.e., in the
property that the parallelization does not increase the cost complexity of the program.
With the polytope model, the best time complexity one can achieve is linearity in the
problem size: 2 at least one loop must enumerate time, i.e., be sequential. In the pure version
of the model, one can usually get away with just one sequential loop [5]. The remaining loops
enumerate space, i.e., are parallel. This requires a polynomial amount of processors since the
loops bounds are affine expressions.
The cost is not affected by the parallelization: one simply trades between time and space,
the product of time and space stays the same.
3.3 Evaluation
Let us sum up the essential properties of the polytope model for the purpose of a comparison:
1. The dependence graph can be embedded into a higher-dimensional space. The dimensionality
is fixed: it equals the number of loops in the loop nest.
2. The extent of the dependence graph in each dimension is usually variable: it depends
on the problem size. However, a very important property of the polytope model is that
the complexity of the optimizing search is independent of the problem size.
3. Each vertex in the dependence graph represents roughly the same amount of work.
More precisely, we can put a constant bound on the amount of work performed by any
one vertex.
4. There is a large choice of problem-dependent affine dependences. Thus, given a loop
nest with its individual dependences, an automatic optimizing search is performed to
maximize parallelism.

5. One can save processors by "trading" one spatial dimension to time, i.e., emulating a
set of processors by a single processor.

6. The end result is a nest of sequential and parallel loops with affine bounds and dependences.

7. If one loop is sequential and the rest are parallel (which can always be achieved) [5], one
obtains an execution time linear in the problem size. But, to save processors, one can
trade space to time at the price of an increased execution time complexity. In particular,
one can make the number of processors independent of the problem size by partitioning
the resulting processor array into fixed-size "tiles" [13, 39].

2 The only exception is the trivial case of no dependences at all, in which all iterations can be executed in
one common step.
4 A divide-and-conquer algorithm
Rather than enforcing a total order on the cumulative summation in specification (2) of
the coefficients of the product polynomial, we can accumulate the summands with a D&C
algorithm by exploiting the associativity of polynomial addition ⊕ on the left side of the
following equation:

    a · b = (a_h ⊕ a_l) · (b_h ⊕ b_l) = a_h·b_h ⊕ a_h·b_l ⊕ a_l·b_h ⊕ a_l·b_l

We can write this more explicitly, showing the degrees and the variable of the polynomials.
Let m = n div 2 and assume for the rest of this paper, for simplicity, that n is a power of 2:

    a(x) · b(x) = (a_h(x)·x^m ⊕ a_l(x)) · (b_h(x)·x^m ⊕ b_l(x))
                = (a_h(x)·b_h(x))·x^{2·m} ⊕ (a_h(x)·b_l(x) ⊕ a_l(x)·b_h(x))·x^m ⊕ a_l(x)·b_l(x)
The suffix l stands for "lower part", h for "higher part" of the polynomial; a and b are the
input polynomials.
The dependence graph of this algorithm is depicted in Fig. 3; c, d, e, and f are the resulting
polynomials of the four subproblems. This is our starting point for a parallelization which
will give us sublinear execution time.
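The four-way splitting can be sketched as a plain recursion in Python. This is an illustrative reconstruction of the scheme above, not the paper's program; in particular, padding the basic case to length 2 (so every result has length 2n) is our choice.

```python
def polyprod_dc(a, b):
    """Four-way divide-and-conquer product: split both coefficient lists
    (length n, a power of 2) at m = n // 2 into low and high halves,
    recurse on the four subproblems, and combine with shifted additions."""
    n = len(a)
    if n == 1:
        return [a[0] * b[0], 0]          # pad so results always have length 2n
    m = n // 2
    al, ah = a[:m], a[m:]
    bl, bh = b[:m], b[m:]
    c = polyprod_dc(ah, bh)              # high * high
    d = polyprod_dc(ah, bl)
    e = polyprod_dc(al, bh)
    f = polyprod_dc(al, bl)              # low * low
    out = [0] * (2 * n)
    for k in range(2 * m):
        out[k] += f[k]                   # f contributes at x^0
        out[k + m] += d[k] + e[k]        # (d ⊕ e) shifted by x^m
        out[k + 2 * m] += c[k]           # c shifted by x^{2m}
    return out

# (1 + x) * (1 + x) = 1 + 2x + x^2, with one padding zero
assert polyprod_dc([1, 1], [1, 1]) == [1, 2, 1, 0]
```

The four recursive calls are independent, which is the source of the parallelism exploited in the rest of this section.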
The fundamental difference in the parallelization of D&C as opposed to nested loops is
that there is no choice of dependences: the dependence graph is always the same as the call
graph. Since we have no problem-dependent dependences, we have no need for an automatic
parallelization based on them. Instead, we can take the skeleton approach [9]: we can provide
a program schema, a so-called algorithmic skeleton, for D&C which is to be filled in with
further program pieces-we call them customizing functions-in order to obtain a D&C ap-
plication. Then, our task is to offer for the D&C skeleton one or several high-quality parallel
implementations, we call these architectural skeletons. This may, again, involve a search, but
the search space is problem-independent and, thus, need not be redone for every application.
For the user at the application end, the only challenge that remains is to cast the problem in
the form of the algorithmic skeleton. Alternatively, the user might develop an architectural
skeleton with even better performance by exploiting problem-specific properties of his/her
application.
Figure 3: Call graph of the D&C polynomial product (phases: divide, recursion, combine;
the four subproblems are ah·bh, ah·bl, al·bh, al·bl, with results c, d, e, f)
Research on the parallelization of D&C is at an earlier stage than that of nested loops.
Many different algorithmic skeletons-and even more architectural skeletons-can be envi-
sioned. No common yardstick by which to evaluate them has been found as of yet.
We discuss two algorithmic skeletons and the respective approaches to their parallelization.
4.1 Space-time mapping D&C
4.1.1 From the Source Program to the Target Program
The call graph in Fig. 3 matches an algorithmic skeleton for call-balanced fixed-degree D&C
which we have developed. The skeleton is phrased as a higher-order function in Haskell [25]:
divcon :: ...
divcon k basic divide combine indata =
  if ... then solve indata else error "list length"
  where solve indata =
          if length indata == 1
            then map basic indata
            else let ...
                     y = ... solve (transpose x ...) ...
                 in (\l -> map fst l ++ map snd l) (map combine y)
        ...
Let us comment on a few functions which you will see again in this paper:
map op xs applies a unary operator op to a list xs of values and returns the list of the
results; map is one main source of parallelism in higher-order programs.
xs++ys returns the concatenation of list xs with list ys.
zip xs ys "zips" the lists xs and ys to a list of corresponding pairs of elements.
The further details of the body of divcon are irrelevant to the points we want to make in
this paper, and indeed irrelevant to the user of the skeleton. All that matters is that it is
a higher-order specification, which specifies a generalized version of the schema depicted in
Fig. 3, and whose parameters k, basic, divide, and combine the caller fills with the division
degree and with appropriate functions for computing in the basic case and for dividing and
combining in the recursive case.
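Stripped of the Haskell-specific plumbing, the skeleton idea is simply a higher-order function. The following Python sketch (the names and list shapes are illustrative assumptions, not the paper's divcon) shows the contract between the skeleton and its customizing functions:

```python
def divcon(k, basic, divide, combine, indata):
    """Call-balanced fixed-degree divide-and-conquer skeleton (sketch):
    division degree k, customizing functions basic, divide, combine."""
    if len(indata) == 1:
        return basic(indata[0])
    subproblems = divide(indata)      # must yield exactly k subproblems
    assert len(subproblems) == k
    # the k recursive calls are independent and could run in parallel
    return combine([divcon(k, basic, divide, combine, s) for s in subproblems])

# toy instance: summing a list by binary division (k = 2)
total = divcon(2,
               lambda x: x,
               lambda xs: [xs[:len(xs) // 2], xs[len(xs) // 2:]],
               lambda rs: rs[0] + rs[1],
               [1, 2, 3, 4])
assert total == 10
```

The skeleton itself is problem-independent; only basic, divide, and combine change from one application to the next, which is exactly what makes a one-time optimized parallel implementation of the skeleton reusable.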
The example below shows, using the names in Fig. 3, how to express the polynomial product
in terms of the divcon skeleton. The polynomials to be multiplied have to be represented
as lists of their coefficients in order:
prod a b = postadapt (divcon 4 basic divide combine (preadapt a b))
  where preadapt a b = zip a b
        postadapt z  = map fst z ++ map snd z
        divide (ah, bh) (al, bl) = ...
        combine [(ch, cl), (dh, dl), (eh, el), (fh, fl)] = ...
The skeleton takes as input and delivers as output lists of size n. The operands and
result of the polynomial product have to be formatted accordingly: function preadapt zips
both input polynomials to a single list, and function postadapt unpacks the zipped higher and
lower parts for the result again.
Given unlimited resources, it is clear without a search what the temporal and the spatial
partitioning should be: each horizontal layer of the call graph should be one time slice.
This seems to suggest a two-dimensional geometrical model with one temporal axis (pointing
down) and one spatial axis (pointing sideways). However, it pays to convert the call graph to
a higher-dimensional structure. The reason is that the vertices in the graph represent grossly
unequal amounts of work. In other words, the amount of work of any one vertex cannot be
capped by a constant: because of the binary division of data, a node in some fixed layer of
the divide phase represents double the amount of work as a node in the layer below, and the
reverse applies for the combine phase. This behaviour holds for all algorithms that fit into
this skeleton.
We obtain a graph in which the work a node represents is bounded by a constant if we
split a node which works on aggregate data into a set of nodes each of which works on atomic
data only. This fragmentation of nodes is spread across additional dimensions, yielding the
higher-dimensional graph of Fig. 4. Time now points into depth and, for the given size of
the call graph, each time slice is two-dimensional, not just one-dimensional. With increasing
problem size, further spatial dimensions are added.
Figure 4: Higher-dimensional call graph of the D&C polynomial product. (Node labels: reading input data; divide in dim 0; divide in dim 1; solve basic cases; combine in dim 0; combine in dim 1. Axes: depth/time and spatial dimension.)

A parallel loop program that scans this graph in a similar manner as it would scan a polytope can be derived [25]; here, r is the number of recursive calls (log n), the elements of AB are pairs of input coefficients, and the elements of C are pairs of output coefficients of the higher and lower result polynomial:
seqfor d = 1 to r do
    parfor q : divide in dimension d
parfor q : solve basic cases
seqfor d = r downto 1 do
    parfor q : combine in dimension d
The program consists of a sequence of three loop nests, for the divide, sequential, and
combine phase.
The loops on d enumerate the levels of the graph and are therefore sequential, whereas the
loops on q are parallel because they enumerate the spatial dimensions. The spatial dimensions
are indexed by the digits of q in radix k representation. This allows us to describe iterations
across an arbitrary number of dimensions by a single loop, which makes the text of the
program independent of the problem size. q^(k) denotes the vector of the digits of q in radix k, and q^(k)[d] selects the d-th digit. In accesses of the values of the points whose index differs only in dimension d, we use the notation q^(k)[d <- i] for the number one obtains from q by replacing the d-th digit by i. This representation differs from target programs in the polytope model, in which each dimension corresponds to a separate loop.
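The radix-k digit indexing just described can be made concrete with a small sketch (Python, for illustration; the helper names are ours):

```python
def digits(q, k, r):
    """The vector q^(k): digits of q in radix k, least-significant first."""
    return [(q // k**d) % k for d in range(r)]

def replace_digit(q, k, d, i):
    """q^(k)[d <- i]: the number obtained from q by replacing digit d with i."""
    return q + (i - (q // k**d) % k) * k**d
```

With k = 2 this is ordinary bit indexing: digits(11, 2, 4) is [1, 1, 0, 1], and replace_digit(11, 2, 2, 1) sets bit 2 of 11, giving 15.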
The data is indexed with a time component d and a space component q. The divide and combine functions are given the actual coordinate as a parameter in order to select the appropriate functionality: the particular subproblem for divide, resp. the data partition for combine.
Such loop programs can also be derived formally by equational reasoning [26].
This program is data-parallel. Therefore, it can be implemented directly on SIMD machines. Additionally, the program can be transformed easily to an SPMD program for parallel
machines with distributed memory using message passing. The two-dimensional arrays now
become one-dimensional, because the space-component has been projected out by selecting a
particular processor. The all-to-all communications are restricted to groups of k processors.
Processor q executes the following:
seqfor d = 1 to r do
    all-to-all within the group of k processors ;
    divide (list of values received by all-to-all) ;
seqfor d = r downto 1 do
    all-to-all within the group of k processors ;
    combine (list of values received by all-to-all)
One is interested in transforming the computation domain in Fig. 4 further in two ways:
1. As in loop parallelization, this approach has the potential of very fine-grained parallelism. As in loop parallelization, spatial dimensions can be moved to time to save
processors. This is more urgent here, since the demand for processors grows faster with
increasing problem size.
2. If the spatial part of the computation domain remains of higher dimensionality after
this, its dimensionality can be reduced as depicted in Fig. 5. This is done, e.g., if the
target processor topology is a mesh. It works because the extent of each dimension is
fixed.
Figure 5: Dimensionality reduction of the computation domain (axes x, y, z).
4.1.2 Complexity Considerations
For the implementation, it is not very efficient to assign each basic problem to a separate
physical processor. Instead, spatial dimensions are mapped to time. The result is a slightly
modified SPMD program. In brief, the operations on single elements are replaced by operations
on segments of data.
Let n be the size of the polynomials, and P the number of processors. In the basic phase on segments, the work is equally distributed among the processors, i.e., each processor is responsible for n/2^(log P/log 4) = n/sqrt(P) elements. The time for computing the basic phase on segments is therefore O(n^2/P). The computation in the divide and combine phases takes O(log n) steps. In each step, a segment of size n/sqrt(P) has to be divided or combined in parallel. The entire computation time for both phases is therefore in O(log n \Delta n/sqrt(P)). If the dimensionality of the target mesh equals the number of dimensions mapped to space, the total time is in:

    O(n^2/P + log n \Delta n/sqrt(P))

The execution time is sublinear if the number of processors is asymptotically greater than the problem size, and it maintains the cost of O(n^2) if the number of processors is asymptotically not greater than O(n^2/log^2 n). The best execution time that can be achieved under cost maintenance is O(log^2 n).
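As a quick numeric sanity check of these bounds, assume the total parallel time behaves like n^2/P + log n \Delta n/sqrt(P) (a reconstruction of the bound discussed above; the constants are illustrative). Choosing the largest cost-maintaining processor count P = n^2/log^2 n makes both terms equal to log^2 n:

```python
import math

def parallel_time(n, P):
    # basic phase O(n^2/P) plus divide/combine phases O(log n * n / sqrt(P))
    return n**2 / P + math.log2(n) * n / math.sqrt(P)

n = 2**16
P = n**2 // int(math.log2(n))**2     # P = n^2 / log^2 n = 2**24 here
T = parallel_time(n, P)              # each term equals log^2 n = 256
cost = P * T                         # within a constant factor of n^2
```

Here T comes out as 2 log^2 n = 512 and the cost P * T is exactly 2 n^2, i.e., the O(n^2) sequential work is maintained up to a factor of two.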
If the dimensionality of the target topology is taken into account, our calculations have
revealed the following:
1. Sublinear execution time can only be achieved if the dimensionality is at least 3.
2. The computation is sublinear and cost-maintaining for a cubic three-dimensional mesh if the number of processors is asymptotically between n and n^1.2.
There is an algorithm for polynomial product with a sequential time complexity of O(n^log 3), the so-called Karatsuba algorithm [1, Sect. 2.6]. It has a division degree of 3 and can be expressed with our skeleton; with its parallel implementation [25], the algorithm is cost-maintaining for reasonable problem sizes. Our experiments have shown that the sequential version of the Karatsuba algorithm beats the sequential version of our conventional algorithm if both polynomials have a size of at least 16. Since its subproblems are slightly more computation-intensive, the parallel version of the Karatsuba algorithm (with our skeleton) is a bit slower than the parallel version of the conventional algorithm, but one saves processors.
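For reference, here is a minimal sequential Karatsuba sketch in Python (an illustration, not the paper's implementation). It realizes the division degree of 3: one multiplication on the sums of the halves replaces two of the four subproducts, which gives the O(n^log 3) bound.

```python
def karatsuba(a, b):
    """Karatsuba polynomial product, 3 recursive subproblems instead of 4.
    a, b: coefficient lists of equal power-of-two length, low order first."""
    n = len(a)
    if n == 1:
        return [a[0] * b[0]]
    m = n // 2
    al, ah = a[:m], a[m:]
    bl, bh = b[:m], b[m:]
    low  = karatsuba(al, bl)                        # al * bl
    high = karatsuba(ah, bh)                        # ah * bh
    mid  = karatsuba([x + y for x, y in zip(al, ah)],
                     [x + y for x, y in zip(bl, bh)])
    # mid - low - high = al*bh + ah*bl (the cross terms)
    cross = [m_ - l_ - h_ for m_, l_, h_ in zip(mid, low, high)]
    res = [0] * (2 * n - 1)
    for i, c in enumerate(low):
        res[i] += c
    for i, c in enumerate(cross):
        res[i + m] += c
    for i, c in enumerate(high):
        res[i + 2 * m] += c
    return res
```

For small inputs the result agrees with the conventional product, e.g. karatsuba([1, 2], [3, 4]) gives [3, 10, 8].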
4.1.3 Evaluation
What are the properties of this space-time mapping model, compared with the polytope model
of the previous section?
1. The dimensionality of the call graph is variable: it equals the number of layers of the
divide phase, which depends on the problem size.
2. The extent of the dependence graph in each spatial dimension is fixed: it is the degree
of the problem division.
3. Each vertex in the call graph represents the same amount of work.
4. There is no choice of dependences and no search for parallelism is necessary.
5. The only variety in parallelism is given by the option to trade off spatial dimensions to
time.
6. The end result is, again, a nest of sequential and parallel loops.
7. The upper bound of the temporal loop is logarithmic in the problem size, and the upper
bound of the spatial loop is exponential in the upper bound of the temporal loop, i.e.,
polynomial in the problem size. When looking at the computation domain (Fig. 4),
the extent of each spatial dimension is constant, but the number of spatial dimensions
grows with the problem size.
Sublinear execution times (in a root of the problem size) are possible on mesh topologies, but the conditions for maintaining cost optimality in this case are very restrictive.
4.2 Homomorphisms
A very simple D&C skeleton is the homomorphism. It does not capture all D&C situations,
and it is defined most often for lists [7, 37], although it can also be defined for other data
structures, e.g., trees [17] and arrays [33].
4.2.1 From the source program to the target program
Unary function h is a list homomorphism [7] iff its value on a concatenation of two lists can be computed by combining the values of h on the two parts with some operation fi:

    h (xs ++ ys) = (h xs) fi (h ys)     (5)

The significance of homomorphisms for parallelization is given by the promotion property, a version of which is as follows:

    h = red (fi) ffi map h ffi dist     (6)
This equality is also proved by equational reasoning. In the literature, one has used the Bird-
Meertens formalism (BMF) [7], an equational theory for functional programs in which red
and map are the basic functions on lists: red reduces a list of values with a binary operator
(which, in our case, inherits associativity from list concatenation) and returns the result value,
and map we have seen in the previous subsection. Both red and map have a high potential
for parallelism: red can be performed in time logarithmic in the length of the list and map
can be performed in constant time, given as many processors as there are list elements.
The third function appearing in the promotion property, dist (for distribute), partitions
an argument list into a list of sublists; it is the right inverse of reduction with concatenation:

    red (++) ffi dist = id
Equality (6) reveals that every homomorphism h can be computed in three stages: (1)
an input list is distributed, (2) function h is computed on all sublists independently, (3) the
results are combined with operator fi . The efficiency of this parallel implementation depends
largely on the form of operation fi . E.g., there is a specialization of the homomorphism
skeleton, called DH (for distributable homomorphism), for which a family of practical, efficient
implementations exists [19, 21].
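The three-stage scheme of equality (6) is easy to state operationally. The sketch below (Python, with illustrative names) distributes a list, maps h over the sublists, and reduces with the combining operator; for any true homomorphism the result equals h applied to the whole list.

```python
from functools import reduce

def dist(xs, nparts):
    """Split xs into nparts contiguous sublists (right inverse of red(++))."""
    size = -(-len(xs) // nparts)            # ceiling division
    return [xs[i:i + size] for i in range(0, len(xs), size)]

def promoted(h, combine, xs, nparts):
    """Promotion property (6): h = red(combine) . map h . dist."""
    return reduce(combine, [h(part) for part in dist(xs, nparts)])
```

For instance, sum is a homomorphism with combine = +, and max with combine = max; both give the same answer independently of the chosen granularity nparts.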
The similarity of (3) and (5) is obvious: h should be the polynomial product \otimes, and operation fi should be polynomial addition \Phi. However, there are two mismatches:
1. Operations fi and \Phi are defined on polynomials of possibly different degree. Thus, the
list representation of a polynomial needs to be refined to a pair (int ; list ), where int is
the power of the polynomial and list is the list of the coefficients. For simplicity, we
ignore this subtlety in the remainder of this paper.
2. A more serious departure from the given homomorphism skeleton is that the polynomial product \otimes, our equivalent of h, requires two arguments, not one. To match this with the
skeleton, we might give h a list of coefficient pairs, but this destroys the homomorphism
property: in the format provided by h, the product of two polynomials cannot be
constructed from the products of the polynomials' halves.
It is just as well that we cannot fit the (binary) polynomial product into the (unary) homomorphism
skeleton. The second argument gives us an additional dimension of parallelism which
the unary homomorphism cannot offer: because we have two arguments, each of which is to
be divided, we obtain four partial results to be combined rather than two, as prescribed by
the skeleton. To exploit the quadratic parallelism, we use a different, binary homomorphism
skeleton:

    h2 (xs ++ ys) (us ++ vs) = (h2 xs us) fi (h2 xs vs) fi (h2 ys us) fi (h2 ys vs)     (7)
Now, the polynomial product fits perfectly. The resulting promoted, two-dimensional skeleton
does everything twice-once for each dimension: dividing (with dist ), computing in parallel
(with map) and combining (with red ); we write map 2 f for map (map f ) and zip 2 for the two-dimensional analogue of zip, where the two-dimensional distribution is

    distribute = (dist \Theta map (copy) ffi dist)     (8)
The complication of distribute is due to the fact that list portions must be spread out across
two dimensions now. Note also that we have not provided a definition of the two-dimensional
reduction: red 2 (fi ) can be computed in two orders: row-major or column-major. Actually,
each of the two choices leads to an equal amount of parallelism.
However, by a problem-specific optimization of the combine stage, we can do even better. Fig. 6 depicts this optimized solution on a virtual square of processors (we make no
assumptions about the physical topology). Exploiting the commutativity of the customizing
operator \Phi, we can reduce first along the diagonals-the corresponding polynomials have
equal power-and then reduce the partial results located at the northern and eastern borders. The latter step can be improved further to just pairwise exchange between neighbouring
processors if we allow for the result polynomial product to be distributed blockwise along the
border processors.
The three-stage format of the promoted homomorphism skeleton suggests an SPMD program
which also has three stages. For the binary homomorphism, optimized for the polynomial
product, every processor q executes the following MPI-like program:
Figure 6: Three stages of the two-dimensional promoted homomorphism skeleton: distribute, compute, combine.
broadcast (A) in row ;
broadcast (B) in column ;
C := PolyProd (A, B) ;
reduce (\Phi) in diagonal(q) ;
exchange-neighbours
This program does not show loops explicitly-but, of course, they are there. The outer,
spatial loops are implicit in the SPMD model and the inner, sequential loop is hidden in
the call of function PolyProd which is a sequential implementation of fi. With MPI-like
communications, this SPMD program could be the point where the programmer stops the
refinement and the machine vendor takes over. Alternatively, the user can him/herself program
broadcasts, reductions and exchanges with neighbours and define a suitable physical
processor topology, e.g., a mesh of trees [18].
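The square-of-processors scheme of Fig. 6 can be simulated sequentially to check the diagonal-reduction idea: processor (i, j) receives block i of A via the row broadcast and block j of B via the column broadcast, computes a local product, and all partial products with equal i + j contribute to the same powers of the result. The Python sketch below is only an illustration (names such as grid_poly_product are ours).

```python
def poly_mul(a, b):
    """Local sequential polynomial product (the compute stage, PolyProd)."""
    r = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            r[i + j] += x * y
    return r

def grid_poly_product(a, b, p):
    """Simulate the p x p virtual processor square."""
    s = len(a) // p
    A = [a[i*s:(i+1)*s] for i in range(p)]   # row broadcast: block i of A
    B = [b[j*s:(j+1)*s] for j in range(p)]   # column broadcast: block j of B
    result = [0] * (2 * len(a) - 1)
    for i in range(p):
        for j in range(p):
            # partial products with equal i + j have equal power:
            # they lie on the same diagonal and are reduced together
            for t, v in enumerate(poly_mul(A[i], B[j])):
                result[(i + j) * s + t] += v
    return result
```

Running it on a small example reproduces the sequential product, e.g. grid_poly_product([1, 2, 3, 4], [5, 6, 7, 8], 2) equals poly_mul([1, 2, 3, 4], [5, 6, 7, 8]).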
4.2.2 Complexity considerations
We consider multiplying two polynomials of degree n on a virtual square of p^2 processors. The time complexity with pipelined broadcasting and reduction is [18]:

    t(n, p) = O((n/p)^2 + log p \Delta m(n, p))

where m(n, p) is the time for a pipelined message, m(n, p) = O(n/(p log p) + 1). The value of p can be chosen between 1 and n.
If p = n/sqrt(log n) then t = O(log n), which yields the optimal, logarithmic time complexity on n^2/log n processors. The cost is O(n^2), and is thus maintained.
Other interesting cases are:
p = 1: t = O(n^2), so the parallelization has not worsened the sequential cost;
p = n: t = O(log n) on n^2 processors; this solution yields the optimal time but is clearly not cost-maintaining;
p = n/log n: a cost-maintaining solution with t = O(log^2 n) on (n/log n)^2 processors;
p = sqrt(n): t = O(n) on n processors: equal to the systolic solution of Sect. 3 and cost-maintaining.
In practice, the processor number is an arbitrary fixed value and the problem size n is relatively
large. Then the term (n/p)^2 dominates in the expression of the time complexity, which guarantees a so-called scaled linear speed-up [35]. This term can be improved to O((n/p)^log 3) or to O((n/p) \Delta log(n/p)) by applying the Karatsuba or the FFT-based algorithm, respectively, in the processors at the compute stage.
Whether the Karatsuba algorithm can be phrased as a homomorphism is an open question.
4.2.3 Evaluation
Let us review the parallelization process in the homomorphism approach. Actually, it is not
very different from the skeleton approach of the previous section. One just uses a different
skeleton and is led in the parallelization by algebra rather than by geometry.
1. The homomorphism skeleton is more restrictive than the Haskell skeleton in the previous
section, and also more restrictive than the earliest D&C skeleton [34], in which it corresponds
to a postmorphism (see [21] for a classification of D&C skeletons). The strength
of the homomorphism is its direct correspondence with a straight-forward three-stage
SPMD program. For the polynomial product, it yields a parallel implementation which
is both time-optimal and cost-maintaining.
Time optimality cannot be achieved on a mesh topology with constant dimensionality. In the homomorphism approach, we obtain a topology-independent program with
MPI-like communication primitives. The best implementations of these primitives on
topologies such as the hypercube and the mesh of trees are time-optimal [27].
2. As many other skeletons, the homomorphism skeleton can come in many different
varieties: for unary, binary, ternary operations, for lists and other data structures.
At the present state of our understanding, all these versions are developed separately.
3. Even if all these skeletons are available to the user, an adaptation problem remains.
This is true for the previous approaches as well. E.g., in loop parallelization, a dependence
which does not satisfy the restrictions of the model is replaced by a set of
dependences which do [29]. In the previous subsection, we format the input and the
output of polynomial product with adaptation functions to make it match with our
Haskell skeleton.
The homomorphic form of a problem may exist but not be immediately apparent. An example is scan [20]. Other algorithms can be turned into a D&C and, further, into a
homomorphic form with the aid of auxiliary functions [10, 38].
4. The application of the promotion property gives us a parametrized granularity of parallelism
which is controlled by the size of the chunks in which distribution function dist
splits the list. Depending on the available number of processors, input data can be distributed
more coarsely or finely, down to a single list element per processor. The only
requirement on the number of processors in the case of the two-dimensional homomorphism
is that it be a square, which is not as restrictive as in the skeleton of the previous
subsection, where a power of the division degree k is required. Moreover, the homomorphic solution is not restricted to polynomials whose length is a power of 2.
5. Note that the promotion property is only applied once-as opposed to the previous
subsection, in which we parallelize at each level of the call graph. This decreases the
amount of necessary communication. The number of parallelized levels depends on
the choice of granularity; all remaining levels are captured by a call of the sequential
implementation PolyProd of h2. This enables an additional optimization: processors
can call an optimal sequential algorithm for the given problem, rather than the algorithm
which was chosen for the parallelization. In our case, PolyProd can be the more efficient
Karatsuba or FFT-based algorithm for the polynomial product.
5 Summary
Let us summarize the present state of the art of the static parallelization of loops and recursions, as illustrated with the polynomial product.
Static parallelization works best for programs which exhibit some regular structure. The
structural requirements can be captured by restrictions on the form of the program, but
many applications will not satisfy these restrictions immediately. Thus, static parallelization
is definitely not for "dusty decks".
However, many algorithms can be put into the required form and parallelized. In particular, certain computation-intensive application domains like image, signal, text and speech
processing, numerical and graph algorithms are candidates for a static parallelization. Dense
data structures hold more promise of regular dependences, but sparse data structures might
also be amenable to processing with while loop nests or with less regular forms of parallel
D&C.
5.1 Loop parallelization
Loop parallelization is much better understood than the parallelization of divide-and-conquer.
The polytope model has been extended significantly recently:
1. Dependences and space-time mappings may be piecewise affine (the number of affine
pieces must be constant, i.e., independent of the problem size) [16].
2. Loop nests may be imperfect (i.e., not all computations must be in the innermost loop)
[16].
3. Upper loop bounds may be arbitrary and, indeed, unknown at compile time [23].
A consequence of (3) is that while loops can be handled [12, 22, 30]. This entails a serious
departure from the polytope model.
The space-time mapping of loops is becoming a viable component of parallelizing compilation
[31]. Loop parallelizers that are based on the polytope model include Bouclettes
[8], LooPo [24], OPERA [32], Feautrier's PAF, and PIPS [2]. However, recent sophisticated
techniques of space-time mapping have not yet filtered through to commercial compilers. In
particular, automatic methods for partitioning and projecting (i.e., trading space for time)
need to be carried through to the code generation stage. Most large academic parallelizing
compilation projects also involve loop parallelization techniques that are not phrased in (or
even based on) the polytope model. Links to some of them are provided in the Web pages
cited here.
A good, unhurried introduction to loop parallelization with an emphasis on the polytope
model is the book series of Banerjee [4, 5, 6].
5.2 Divide-and-conquer parallelization
For the parallelization of D&C, there is not yet a unified model, in which different choices
of parallelization can be evaluated with a common yardstick and compared with each other.
The empirical approach taken presently uses skeletons: algorithm patterns with a high potential
for parallelism are linked with semantically equivalent architectural patterns which
provide efficient implementations of these algorithms on available parallel machines. This
approach makes fewer demands on compiler technology. However, it expects the support of a
"systems programming" community which provides architectural skeletons for existing parallel
machines. The application programmer can then simply look for the schema in a given
skeleton library, and adapt his/her application to this schema.
In the last couple of years, the development and study of skeletons has received an increasing
amount of attention and a research community has been forming [11]. The skeleton
approach can become a viable paradigm for parallel programming if
1. the parallel programming community manages to agree on a set of algorithmic skeletons
which capture a large number of applications and are relatively easy to fill in, and
2. the parallel machine vendors community, or some application sector supporting it, succeeds
in providing efficient implementations of these skeletons on their products.
One can attempt to classify the best parallel implementations of some skeleton, which represents
a popular programming paradigm, by tabulating special cases. We have done this for
the paradigm of linear recursion [41]. The interesting special cases are copy , red and scan.
Compositions of these cases can be optimized further.
5.3 Conclusions
Ultimately, one will have to wait and see whether the static or some dynamic approach to parallelism will gain the upper hand. Since parallelism stands for performance, the lack of
overhead and the precision of the performance analysis are two things in favor of static as opposed
to dynamic parallelism-for problems which lend themselves to a static parallelization.
Acknowledgements
This work received financial support from the DFG (projects RecuR and RecuR2 ) and from
the DAAD (ARC and PROCOPE exchange programs). Thanks to J.-F. Collard for a reading and comments. The Parsytec GCel 1024 of the Paderborn Center for Parallel Computing was used for performance measurements.
References
The Design and Analysis of Computer Algorithms.
PIPS: A framework for building interprocedural compilers
Efficient exploration of nonuniform space-time transformations for optimal systolic array synthesis
Loop Transformations for Restructuring Compilers: The Foundations.
Loop Parallelization.
Dependence Analysis.
Lectures on constructive functional programming.
Reference manual of the Bouclettes parallelizer.
Algorithmic Skeletons: Structured Management of Parallel Computation.
Parallel programming with list homomorphisms.
Theory and practice of higher-order parallel programming
Automatic parallelization of while-loops using speculative execution
Regular partitioning for synthesizing fixed-size systolic arrays
Predicate Calculus and Program Semantics.
Array expansion.
Automatic parallelization in the polytope model.
Upwards and downwards accumulations on trees.
From transformations to methodology in parallel program development: a case study.
Systematic efficient parallelization of scan and other list homomorphisms.
Systematic extraction and implementation of divide-and-conquer paral- lelism
Formal derivation of divide-and-conquer programs: A case study in the multidimensional FFT's
The Mechanical Parallelization of Loop Nests Containing while Loops.
Classifying loops for space-time mapping
The loop parallelizer LooPo.
On the space-time mapping of a class of divide-and- conquer recursions
Parallelization of divide-and-conquer by translation to nested loops
Introduction to Parallel Computing: Design and Analysis of Algorithms.
A view of systolic design.
Loop parallelization in the polytope model.
On the parallelization of loop nests containing while loops.
Loop parallelization.
OPERA: A toolbox for loop parallelization.
A Constructive Theory of Multidimensional Arrays.
An algebraic model for divide-and-conquer algorithms and its parallelism
Parallel Computing.
Systolic Algorithms and Architectures.
Foundations of Parallel Programming.
Applications of a strategy for designing divide-and-conquer algorithms
Control generation in the design of processor arrays.
Parallel implementations of combinations of broadcast
Keywords: polytope model; divide-and-conquer; higher-order function; SPMD; loop nest; homomorphism; parallelization; skeletons
Approximate Inverse Techniques for Block-Partitioned Matrices

Abstract. This paper proposes some preconditioning options when the system matrix is in block-partitioned form. This form may arise naturally, for example, from the incompressible Navier-Stokes equations, or may be imposed after a domain decomposition reordering. Approximate inverse techniques are used to generate sparse approximate solutions whenever these are needed in forming the preconditioner. The storage requirements for these preconditioners may be much less than for incomplete LU factorization (ILU) preconditioners for tough, large-scale computational fluid dynamics (CFD) problems. The numerical experiments show that these preconditioners can help solve difficult linear systems whose coefficient matrices are highly indefinite.

1 Introduction
Consider the block partitioning of a matrix A, in the form

    A = ( B  F )
        ( E  C )     (1)

where the blocking naturally occurs due to the ordering of the equations and the variables. Matrices of this form arise in many applications, such as in the incompressible Navier-Stokes equations, where the scalar momentum equations and the continuity condition form separate blocks of equations. In the 2-D case, this is a system of the form

    ( B    0    F_u ) ( u )     ( f_u )
    ( 0    B    F_v ) ( v )  =  ( f_v )     (2)
    ( E_u  E_v  0   ) ( p )     ( f_p )
where u and v represent the velocity components, and p represents the pressure. Here, the
B submatrix is a convection-diffusion operator, the F submatrices are pressure gradient
operators, and the E submatrices are velocity divergence operators.
Traditional techniques such as the Uzawa algorithm have been used for these problems,
often because the linear systems that must be solved are much smaller, or because there
are zeros or small values on the diagonal of the fully-coupled system. These so-called
segregated approaches, however, suffer from slow convergence rates when compared to
aggregated, or fully-coupled solution techniques.
Another source of partitioned matrices of the form (1) is the class of domain decomposition
methods. In these methods the interior nodes of a subdomain are ordered
consecutively, subdomain after subdomain, followed by the interface nodes ordered at
the end. This ordering of the unknowns gives rise to matrices which have the following
structure:

    A = ( B_1                F_1 )
        (      B_2           F_2 )
        (           ...      ... )
        ( E_1  E_2  ...      C   )     (3)
Typically, the linear systems associated with the B matrix produced by this reordering
are easy to solve, being the result of restricting the original PDE problem into a set of
independent and similar PDE problems on much smaller meshes. One of the motivations
for this approach is parallelism. This approach ultimately requires solution methods for
the Schur complement S. There is a danger, however, that for general matrices, B may
be singular after the reordering.
Much work has been done on exploiting some form of blocking in conjunction with
preconditioning. In one of the earlier papers on the subject, Concus, Golub, and Meurant
introduce the idea of block preconditioning, designed for block-tridiagonal matrices
whose diagonal blocks are tridiagonal. The inverses of tridiagonal matrices encountered
in the approximations are themselves approximated by tridiagonal matrices, exploiting an
exact formula for the inverse of a tridiagonal matrix. This was later extended to the more
general case where the diagonal blocks are arbitrary [4, 17]. In many of these cases, the
incomplete block factorizations are developed for matrices arising from the discretization
of PDE's [2, 3, 7, 17, 19] and utilize approximate inverses when diagonal blocks need to
be inverted. More recently, Elman and Silvester [13] proposed a few techniques for the
specific case of the Stokes and Navier-Stokes problems. A number of variations of Block-
preconditioners have also been developed [1, 9]. In these techniques the off-block
diagonal terms are either neglected or an attempt is made to approximate their effect.
This paper explores some preconditioning options when the matrix is expressed in
block-partitioned form, either naturally or after some domain decomposition type re-
ordering. The iterative method acts on the fully-coupled system, but the preconditioning
has some similarity to segregated methods. This approach only requires preconditioning
or approximate solves with submatrices, where the submatrices correspond to any combination
of operators, such as reaction, diffusion, and convection. It is particularly advantageous
to use the block-partitioned form if we know enough about the submatrices to
apply specialized preconditioners, for example operator-splitting and semi-discretization,
as well as lower-order discretizations.
Block-partitioned techniques also require the sparse approximate solution to sparse
linear systems. These solutions need to be sparse because they form the rows or columns
of the preconditioner, or are used in further computations. Dense solutions here will
cause the construction or the application of the preconditioner to be too expensive. This
problem is ideally suited for sparse approximate inverse techniques. The approximate
solution to the sparse system is found by
using an iterative method implemented with sparse matrix-sparse vector and sparse
vector-sparse vector operations. The intermediate and final solutions are forced to be
sparse by numerically dropping elements in x with small magnitudes. If the right-hand side b and the initial guess for x are sparse, this is a very economical method for computing
a sparse approximate solution. We have used this technique to construct preconditioners
based on approximating the inverse of A directly [6].
This paper is organized as follows. In Section 2 we describe the sparse approximate
inverse algorithm and some techniques for finding sparse approximate solutions with the
Schur complement. Section 3 describes how block-partitioned factorizations may be used
as preconditioners. The most effective of these are the approximate block LU factorization
and the approximate block Gauss-Seidel preconditioner. Section 4 reports the results of
several numerical experiments, including the performance of the new preconditioners on
problems arising from the incompressible Navier-Stokes equations.
Sparse approximate inverses and their use
It is common when developing preconditioners based on block techniques to face the need
to compute an approximation to the inverse of a sparse matrix or an approximation to
columns of the solution of a system B x = f in which both B and f are sparse. This is particularly the
case for block preconditioners for block-tridiagonal matrices [7, 19]. For these algorithms
to be practical, they must provide approximations that are sparse.
A number of techniques have recently been developed to construct a sparse approximate
inverse of a matrix, to be used as a preconditioner [5, 6, 8, 10, 15, 17, 18]. Many
of these techniques approximate each row or column independently, focusing on (in the
column-oriented case) the individual minimizations

    min_x || e_j − A x ||_2 ,   j = 1, ..., n,   (4)

where e_j is the j-th column of the identity matrix. Such a preconditioner is distinctly
easier than most existing preconditioners to construct and apply on a massively parallel
computer. Because they do not rely on matrix factorizations, these preconditioners often
are complementary to ILU preconditioners [6, 22].
Previous approaches select a sparsity pattern for x and then minimize (4) in a least
squares sense. In our approach, we minimize (4) with a method that reduces the residual
norm at each step, such as Minimal Residual or FGMRES [20], beginning with a sparse
initial guess. Sparsity is preserved by dropping elements in the search direction or current
solution at each step based on their magnitude or criteria related to the residual norm
reduction. The final number of nonzeros in each column is guaranteed to be not more
than the parameter lfil. In the case of FGMRES, the Krylov basis is also kept sparse
by dropping small elements. To keep the iterations economical, all computations are
performed with sparse matrix-sparse vector or sparse vector-sparse vector operations.
For our application here, we point out that the approximate inverse technique for
each column may be generalized to find a sparse approximate solution to the sparse linear
problem A x = b by minimizing

    min_x || b − A x ||_2 ,   (5)

possibly with an existing preconditioner M for A.
2.1 Approximate inverse algorithm
We describe a modification of the technique reported in [6] that guarantees the reduction
of the residual norm at each minimal residual step. Starting with a sparse initial guess,
the fill-in is increased by one at each iteration. At the end of each iteration, it is possible
to use a second stage that exchanges entries in the solution with new entries if this causes
a reduction in the residual norm. Without the second stage, entries in the solution cannot
be annihilated once they have been introduced. For the problems in this paper, however,
this second stage has not been necessary.
In the first stage, the search direction d is derived by dropping entries from the residual
direction r. So that the sparsity pattern of the solution x is controlled, d is chosen to have
the same sparsity pattern as x, plus one new entry, the largest entry in absolute value.
Minimization is performed by choosing the steplength

    α = (r, Ad) / (Ad, Ad),

and thus the residual norm for the new solution is guaranteed to be not more than the
previous residual norm. The solution and the residual are updated at the end of this
stage. If A is indefinite, the normal equations residual direction A T r may be used as the
search direction, or simply to determine the location of the new fill-in. It is interesting
to note that the largest entry in A T r gives the greatest residual norm reduction in a
one-dimensional minimization. This explains why a transpose initial guess for the approximate
inverse combined with self-preconditioning (preconditioning r with the current
approximate inverse) is so effective for some problems [6].
There are many possibilities for the second stage. We choose to drop one entry in
x and introduce one new entry in d if this causes a decrease in the residual norm. The
candidate for dropping is the smallest absolute nonzero entry in x. The candidate to be
added is the largest absolute entry in the previous search direction (at the beginning of
stage 1) not already included in d. The previous direction is used so that the candidate
may be determined in stage 1, and an additional search is not required. The steplength
β is chosen by minimizing the new residual norm

    || b − A ( x − x_s e_s + β e_l ) ||_2 ,

where e_i is the i-th coordinate vector, x_s is the entry in x to be dropped at position s,
while β is the entry to be added at position l (largest), and we have generalized
the notation so that b is the right-hand-side vector, previously denoted e_j. Let A_j denote
the j-th column of A. Then the minimization gives

    β = ( r + x_s A_s , A_l ),

which just involves one sparse SAXPY, since b − Ax is already available as r, and one sparse
dot-product, since we may scale the columns of A to have unit 2-norm. It is guaranteed
that s ≠ l since l is chosen from among the entries not including s.
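Because the columns of A are scaled to unit 2-norm, the minimizing steplength reduces to a single sparse dot-product. The following is a quick numerical sanity check of this one-dimensional minimization; the three-vectors and the dropped entry are made up for illustration and are not the paper's code.

```python
# Check that beta = (r + x_s*A_s, A_l) minimizes ||r + x_s*A_s - beta*A_l||_2
# when the column A_l has unit 2-norm.  Plain lists serve as dense vectors.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return dot(u, u) ** 0.5

r   = [0.3, -0.2, 0.5]     # current residual (invented)
A_s = [0.6, 0.8, 0.0]      # column at the dropped position (unit 2-norm)
A_l = [0.0, 0.6, 0.8]      # column at the added position (unit 2-norm)
x_s = 0.4                  # entry of x being dropped

t = [ri + x_s * si for ri, si in zip(r, A_s)]   # t = r + x_s*A_s (one SAXPY)
beta = dot(t, A_l)                               # optimal steplength

def new_res_norm(b):
    return norm([ti - b * li for ti, li in zip(t, A_l)])

# beta beats nearby steplengths in either direction
assert new_res_norm(beta) <= new_res_norm(beta + 1e-3)
assert new_res_norm(beta) <= new_res_norm(beta - 1e-3)
```

With a unit-norm A_l, the new residual satisfies ||t − βA_l||² = (t, t) − β², which the test below also verifies.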
The preconditioned version of the algorithm for minimizing kb \Gamma Axk 2 with explicit
preconditioner M may be summarized as follows. A is assumed to be scaled so that its
columns all have unit 2-norm. The number of inner iterations is usually chosen to be lfil
or somewhat larger.
Algorithm 2.1 Approximate inverse algorithm
1. Starting with some initial guess x, r := b − Ax
2. For iter := 1, 2, . . . , niter do
3.    t := Mr
4.    Choose d to be t with the same pattern as x, plus one entry which is the
      largest remaining entry in absolute value
5.    q := Ad
6.    α := (r, q)/(q, q)
7.    r := r − αq
8.    x := x + αd
9.    s := index of smallest nonzero in abs(x)
10.   l := index of largest nonzero in abs(t − d)
11.   β := (r + x_s A_s , A_l)
12.   r̃ := r + x_s A_s − β A_l
13.   If ||r̃|| < ||r|| then
14.       Set x_s := 0 and x_l := β
15.       r := r̃
16.   End if
17. End do
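The first stage of Algorithm 2.1 can be sketched compactly using Python dictionaries as sparse vectors. The sketch below is a simplification, not the paper's FORTRAN code: it omits the preconditioner M (i.e., M = I), the exchange stage, and the truncation of the solution to lfil entries; the 3×3 matrix and column-wise storage are illustrative assumptions.

```python
# Stage 1 of the approximate inverse algorithm with sparse-sparse operations.
# Sparse vectors are dicts {index: value}; the matrix is stored by columns.
def spdot(u, v):
    if len(u) > len(v):
        u, v = v, u
    return sum(val * v[i] for i, val in u.items() if i in v)

def spaxpy(alpha, u, v):
    # v := v + alpha*u, touching only the nonzeros of u
    for i, val in u.items():
        v[i] = v.get(i, 0.0) + alpha * val

def matvec(cols, u):
    q = {}
    for j, val in u.items():
        spaxpy(val, cols[j], q)
    return q

def approx_solve(cols, b, niter):
    """Sparse approximate solution of A x = b by minimal residual steps,
    adding one new fill-in per iteration (no second stage, no lfil cap)."""
    x, r = {}, dict(b)
    for _ in range(niter):
        # search direction: residual restricted to pattern(x), plus the
        # largest remaining residual entry in absolute value
        d = {i: r[i] for i in x if i in r}
        new = max((i for i in r if i not in x),
                  key=lambda i: abs(r[i]), default=None)
        if new is not None:
            d[new] = r[new]
        q = matvec(cols, d)
        qq = spdot(q, q)
        if qq == 0.0:
            break
        alpha = spdot(r, q) / qq     # minimal residual steplength
        spaxpy(-alpha, q, r)
        spaxpy(alpha, d, x)
    return x

# Hypothetical 3x3 tridiagonal example: A given by its columns, b = e_0.
cols = {0: {0: 2.0, 1: -1.0},
        1: {0: -1.0, 1: 2.0, 2: -1.0},
        2: {1: -1.0, 2: 2.0}}
x = approx_solve(cols, {0: 1.0}, niter=4)
```

Each minimal residual step is guaranteed not to increase the residual norm, which is the property the algorithm relies on.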
2.2 Sparse solutions with the Schur complement
Sparse approximate solutions with the Schur complement are often
required in the preconditioning for block-partitioned matrices. We will briefly describe
three approaches in this section: (1) approximating S, (2) approximating S \Gamma1 , and (3)
exploiting a partial approximate inverse of A.
2.2.1 Approximating S
To approximate S with a sparse matrix, we can use

    S̃ = C − E Y,   (6)

where Y is a sparse approximation to B^{-1}F computed by the approximate inverse technique,
possibly preconditioned with whatever we are using to solve with B. Since Y is sparse, S̃
computed this way is also sparse. Moreover, since S is usually relatively dense, solving with S̃
is an economical approach. Typically, a zero initial guess is used for Y. We remark that it is
usually too expensive to form Y by solving with B accurately and then dropping small elements,
since it is rather costly to search for elements to drop. We also note that we can generate S̃
column-by-column, and if necessary, compute a factorization of S̃ on a column-by-column
basis as well. The linear systems with S̃ can be solved in any fashion, including with an
iterative process with or without preconditioning.
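Assuming the block partitioning A = [ B F ; E C ] with Y a sparse approximation to B^{-1}F, forming S̃ = C − E Y is just a sparse matrix product and a subtraction. A tiny dense illustration follows; the numbers, and the choice of which small entry of Y to drop, are invented (in practice Y would come from the approximate inverse algorithm, column by column).

```python
# Form the sparse Schur complement approximation S~ = C - E*Y and compare
# it with the exact Schur complement S = C - E*B^{-1}*F on a 2x2/1x1 example.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matsub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

B = [[4.0, 1.0], [1.0, 3.0]]
F = [[1.0], [2.0]]
E = [[1.0, 1.0]]
C = [[3.0]]

Y_exact  = [[1.0 / 11.0], [7.0 / 11.0]]   # B^{-1}F, for reference
Y_sparse = [[0.0], [7.0 / 11.0]]          # the small entry 1/11 was dropped

S_exact = matsub(C, matmul(E, Y_exact))   # exact Schur complement: 25/11
S_tilde = matsub(C, matmul(E, Y_sparse))  # sparse approximation:   26/11
```

Dropping the small entry of Y perturbs S̃ by only 1/11 here, which is the kind of trade-off the approximation makes.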
2.2.2 Approximating S \Gamma1
Another method is to compute an approximation to S^{-1} using the idea of induced
preconditioning. Since S^{-1} is the (2,2) block of

    A^{-1} = [ B^{-1} + B^{-1}F S^{-1}E B^{-1}   −B^{-1}F S^{-1} ; −S^{-1}E B^{-1}   S^{-1} ],   (7)

we can compute a sparse approximation to it by using the approximate inverse technique
applied to the last block-column of A and then throwing away the upper block. In practice,
the upper part of each column may be discarded before computing the next column. In
our experiments, since the approximate inverse algorithm is applied to A, an indefinite
matrix in most of the problems, the normal equations search direction A^T r is used in the
algorithm, with a scaled identity initial guess for the inverse.
2.2.3 Partial approximate inverse
A drawback of the above approach is that the top submatrix of the last block-column
is discarded, and that the resulting approximation of S \Gamma1 may actually contain very few
nonzeros. A related technique is to compute the partial approximate inverse of A in the
last block-row. This technique does not give an approximation to S \Gamma1 , but defines a
simple preconditioning method itself. Writing the inverse of A in the form,

    A^{-1} = [ M_1 ; M_2 ],   M_2 = [ M_21  M_22 ],

we can then get an approximate solution to A ( x ; y )^T = ( f ; g )^T with

    y = M_21 f + M_22 g,
    x = B^{-1}( f − F y ).

It is not necessary to solve accurately with B. Again, the normal equations search direction
is used for the approximate inverse algorithm in the numerical experiments. Some
results of this relatively inexpensive method will be given in Section 4.
Block-partitioned factorizations of A
We consider a sparse linear system A u = b, which is put in the block form,

    [ B  F ; E  C ] [ x ; y ] = [ f ; g ].   (11)

For now the only condition we require on this partitioning is that B be nonsingular. We
use extensively the following block LU factorization of A,

    A = [ I  0 ; E B^{-1}  I ] [ B  F ; 0  S ],   (12)

in which S is the Schur complement,

    S = C − E B^{-1} F.   (13)

As is well-known, we can solve (11) by solving the reduced system,

    S y = g − E B^{-1} f   (14)

to compute y, and then back-substitute in the first block-row of the system (11) to obtain
x, i.e., compute x by

    x = B^{-1}( f − F y ).   (15)
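With 1×1 blocks the reduced-system solve followed by back-substitution can be checked by hand; the block values and right-hand side below are purely illustrative.

```python
# Solve the block system via the Schur complement: S y = g - E B^{-1} f,
# then x = B^{-1}(f - F y).  Scalar blocks keep the arithmetic transparent.
B, F, E, C = 4.0, 1.0, 2.0, 3.0     # 1x1 "blocks" (illustrative)
f, g = 6.0, 7.0                      # right-hand side

S = C - E * F / B                    # Schur complement: 3 - 0.5 = 2.5
y = (g - E * f / B) / S              # reduced system solve
x = (f - F * y) / B                  # back-substitution

# verify against the original coupled 2x2 system
assert abs(B * x + F * y - f) < 1e-12
assert abs(E * x + C * y - g) < 1e-12
```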
The above block structure can be exploited in several different ways to define preconditioners
for A. Thus, the block preconditioners to be defined in this section combine
one of the preconditioners for S seen in Section 2.2 and a choice of a block factorization.
Next, we describe a few such options.
3.1 Solving the preconditioned reduced system
A method that is often used is to solve the reduced system (14), possibly with the help
of a certain preconditioner M S for the Schur complement matrix S. Although this does
not involve any of the block factorizations discussed above, it is indirectly related to it
and to other well-known algorithms. For example, the Uzawa method which is typically
formulated on the full system, can be viewed as a Richardson (or fixed point) iteration
applied to the reduced system. The matrix S need not be computed explicitly; instead,
one can perform the matrix-vector product w = S v with the matrix S via the following
sequence of operations:
1. Compute t := F v
2. Solve B z = t
3. Compute w := C v − E z
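One natural realization of the three steps is sketched below on a tiny dense example with an easily inverted B; the matrices are invented for illustration, and the B-solve of step 2 is done directly, where in practice an iterative solver would be used.

```python
# Matrix-vector product w = S v without ever forming S = C - E B^{-1} F.
def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

B = [[2.0, 0.0], [0.0, 4.0]]         # diagonal, so the solve is trivial
F = [[1.0], [2.0]]
E = [[1.0, 1.0]]
C = [[3.0]]

def schur_matvec(v):
    t = matvec(F, v)                          # step 1: t = F v
    z = [t[i] / B[i][i] for i in range(2)]    # step 2: solve B z = t
    Cv = matvec(C, v)
    Ez = matvec(E, z)
    return [c - e for c, e in zip(Cv, Ez)]    # step 3: w = C v - E z

w = schur_matvec([1.0])
# here S = 3 - (0.5 + 0.5) = 2.0, so w == [2.0]
```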
If we wish to use a Krylov subspace technique such as GMRES on the preconditioned
reduced system, we need to solve the systems in Step 2 exactly, i.e., by a direct solver or
an iterative solver requiring high accuracy. This is because S is the coefficient
matrix of the system to be solved, and it must remain constant throughout the GMRES
iteration. We have experimented with this approach and found that this is a serious
limitation. Convergence is reached in a number of steps which is typically comparable
with that obtained with methods based on the full matrix. However, each step costs much
more, unless a direct solution technique is used, in which case the initial LU factorization
may be very expensive. Alternatively, a highly accurate ILU factorization can be employed
for B, to reduce the cost of the many systems that must be solved with it in the successive
outer steps.
3.2 Approximate block diagonal preconditioner
One of the simplest block preconditioners for a matrix A partitioned as in (1) is the
block-diagonal matrix

    M = [ B  0 ; 0  M_C ],

in which M_C is some preconditioning for the matrix C. If C = 0, as is the case for the
incompressible Navier-Stokes equations, then we can define M_C = I, for example. An
interesting particular case is when C is nonsingular and M_C = C. This corresponds to a
block-Jacobi iteration. In this case, we have

    I − M^{-1}A = − [ 0  B^{-1}F ; C^{-1}E  0 ],

the eigenvalues of which are the square roots of the eigenvalues of the matrix C^{-1}E B^{-1}F.
Convergence will be fast if all these eigenvalues are small.
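A scalar-block check of this square-root relation, with invented block values: for M = diag(B, C) the iteration matrix I − M^{-1}A is [ 0 −F/B ; −E/C 0 ], and the eigenvalues of any matrix [ 0 a ; b 0 ] are ±sqrt(a·b).

```python
# Verify that the Jacobi iteration-matrix eigenvalues are the +/- square
# roots of the eigenvalue of C^{-1} E B^{-1} F, in the 1x1-block case.
import math

B, F, E, C = 4.0, 1.0, 2.0, 3.0      # illustrative 1x1 blocks

mu = (E * F / B) / C                  # eigenvalue of C^{-1} E B^{-1} F (1/6)
lam = math.sqrt(mu)                   # +/- lam are the iteration eigenvalues

a, b = -F / B, -E / C                 # off-diagonal entries of I - M^{-1}A
assert abs(math.sqrt(a * b) - lam) < 1e-12
```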
3.3 Approximate block LU factorization
The block factorization (12) suggests using preconditioners based on the block LU
factorization

    M = L U,

in which

    L = [ B  0 ; E  M_S ]

and

    U = [ I  B^{-1}F ; 0  I ],

to precondition A. Here M_S is some preconditioner to the Schur complement matrix S. If
we had a sparse approximation S̃ to the Schur complement S, we could compute a preconditioning
matrix M_S for S̃, for example, in the form of an approximate LU factorization.
We must point out here that any preconditioner for S will induce a preconditioner for A.
As was discussed in Section 3.1 a notable disadvantage of an approach based on solving
the reduced system (14) by an iterative process is that the action of S on a vector must
be computed very accurately in the Krylov acceleration part. In an approach based on
the larger system (11) this is not necessary. In fact any iterative process can be used for
solving with M S and B provided we use a flexible variant of GMRES such as FGMRES
[20].
Systems involving B may be solved in many ways, depending on their difficulty and
what we know about B. If B is known to be well-conditioned, then triangular solves with
incomplete LU factors may be sufficient. For more difficult B matrices, the incomplete
factors may be used as a preconditioner for an inner iterative process for B. Further, if
the incomplete factors are unstable (see Section 4.2), an approximate inverse for B may
be used, either directly or as a preconditioner. If B is an operator, an approximation to it
may be used; its factors may again be used either directly or as a preconditioner. This kind
of flexibility is typical of what is available for using iterative methods on block-partitioned
matrices.
An important observation is that if we solve exactly with B then the error in this
block ILU factorization lies entirely in the (2,2) block since,

    A − M = [ 0  0 ; 0  S − M_S ].   (16)
One can raise the question as to whether this approach is any better than one based on
solving the reduced system (14) preconditioned with M S . It is known that in fact the
two approaches are mathematically equivalent if we start with the proper initial guesses.
Specifically, the initial guess should make the x-part of the residual vector equal to 0 for
the original system (11), i.e., the initial guess is

    u_0 = [ x_0 ; y_0 ]   with   x_0 = B^{-1}( f − F y_0 ).

This result, due to Eisenstat and reported in [16], immediately follows from (16) which
shows that the preconditioned matrix has the particular form,

    M^{-1}A = [ I   B^{-1}F( I − M_S^{-1}S ) ; 0   M_S^{-1}S ].   (17)

Thus, if the initial residual has its x-component equal to zero then all iterates will be
vectors with y components only, and a GMRES iteration on the system will reduce to a
GMRES iteration with the matrix M_S^{-1}S involving only the y variable.
There are many possible options for choosing the matrix M_S. Among these we consider
the following ones.
- M_S = I: no preconditioning on S.
- M_S = C: precondition with the C matrix if it is nonsingular. Alternatively we can
precondition with an ILU factorization of C.
- M_S = S̃: construct a sparse approximation to S and use it as a preconditioner. In
general, we only need to approximate the action of S on a vector, for example, with
the methods described in Sections 2.2.1 and 2.2.2.
The following algorithm applies one preconditioning step to a residual ( r_x ; r_y )^T
to get ( x ; y )^T.

Algorithm 3.1 Approximate block LU preconditioning
1. x := B^{-1} r_x
2. y := M_S^{-1} ( r_y − E x )
3. x := x − B^{-1} F y

We have experimented with a number of options for solving systems with M_S in step 2
of the algorithm above. For example, M_S may be approximated with S̃ = C − E Y
computed by the approximate inverse technique. If this approximation is
used, it is possible to also use Y in place of B^{-1}F in step 3.
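Reading the three steps of Algorithm 3.1 as x := B^{-1}r_x, then y := M_S^{-1}(r_y − E x), then x := x − B^{-1}F y (an interpretation consistent with the L and U factors above), a scalar-block application with M_S taken equal to S reproduces the exact solve; the block values are illustrative.

```python
# One application of the approximate block LU preconditioner, 1x1 blocks.
# With M_S = S exactly (and exact B solves), the step solves the block
# system exactly, illustrating the induced-preconditioner idea.
B, F, E, C = 4.0, 1.0, 2.0, 3.0
r_x, r_y = 6.0, 7.0                  # incoming residual

M_S = C - E * F / B                  # exact Schur complement here (2.5)

x = r_x / B                          # step 1: x := B^{-1} r_x
y = (r_y - E * x) / M_S              # step 2: y := M_S^{-1}(r_y - E x)
x = x - F * y / B                    # step 3: x := x - B^{-1} F y

assert abs(B * x + F * y - r_x) < 1e-12
assert abs(E * x + C * y - r_y) < 1e-12
```

Note that (x, y) coincides with the solution obtained through the reduced system, illustrating the equivalence discussed in this section.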
3.4 Approximate block Gauss-Seidel
By ignoring the U factor of the approximate block LU factorization, we are led to a form
of block Gauss-Seidel preconditioning, defined by

    M = [ B  0 ; E  M_S ].   (18)

The same remarks on the ways to solve systems with B and ways to define the preconditioning
matrix M_S apply here. The algorithm for this preconditioner is the same as
Algorithm 3.1 without step 3.
To analyze the preconditioner, we start by observing that

    M^{-1}A = [ I   B^{-1}F ; 0   M_S^{-1}S ],   (19)

showing that the only difference with the preconditioned matrix (17) is the additional
term B^{-1}F M_S^{-1}S in the (1,2) position. The iterates associated with the block form and those
of the associated Schur complement approach M_S^{-1}S are no longer simply related.
However there are a few connections between (17) and (19). First, the spectra of the two
matrices are identical. This does not mean, however, that the two matrices will require
the same number of iterations to converge in general.
Consider a GMRES iteration to solve the preconditioned system M^{-1}A u = M^{-1}b.
Here, we take an initial guess of the form

    u_0 = [ x_0 ; y_0 ],

in which x_0 is arbitrary. With this we denote the preconditioned initial residual by
r_0 = M^{-1}( b − A u_0 ). Then GMRES will find a vector u of the form u = u_0 + δ, with δ
belonging to the Krylov subspace

    K_m = span{ r_0, Z r_0, ..., Z^{m−1} r_0 },   Z = M^{-1}A,

which will minimize ||M^{-1}( b − A u )||_2. For an arbitrary u in the affine space u_0 + K_m
the preconditioned residual is of the form r = p(Z) r_0, where p is a polynomial of degree m
with p(0) = 1, and by (19) its y-part is s = p(G) s_0 with G = M_S^{-1}S, i.e., a preconditioned
residual for the reduced system. As a result,

    || r ||_2^2 = || s ||_2^2 + || ρ ||_2^2 ,   (21)

where ρ denotes the x-part of r. Note that || s ||_2 represents the preconditioned residual
norm for the reduced system for the y obtained from the approximation of the large system.
We have

    || s ||_2 ≤ || r ||_2 ,

which implies that if the residual for the bigger system is less than ε, then the residual
obtained by using a full GMRES on the associated preconditioned reduced system
will also be less than ε. We observe in passing that the second term in
the right-hand side of (21) can always be reduced to zero by a post-processing step which
consists of forcing the x-part of the residual to be zero by changing the x-component of δ (only).
Equivalently, once the current pair x, y is obtained, x can be recomputed by satisfying
the first block equation, i.e.,

    x = B^{-1}( f − F y ).

This post-processing step requires only one additional B solve.
Assume now that we know something about the residual vector associated with m
steps of GMRES applied to the preconditioned reduced system. Can we say something
about the residual norm associated with the preconditioned unreduced system? We begin
by establishing a simple lemma.
Lemma 3.1 Let

    Z = [ I  Y ; 0  G ].   (22)

Then, for any j ≥ 0, the following equality holds:

    ( I − Z ) Z^j = [ 0  −Y G^j ; 0  ( I − G ) G^j ].

Proof. First, it is easy to prove that

    Z^j = [ I  Y_j ; 0  G^j ],

in which Y_j = Y ( I + G + ··· + G^{j−1} ). We now multiply both members of the above equality
by I − Z to obtain the result. 2

We now state the main result concerning the comparison between the two approaches.
Theorem 3.1 Assume that the reduced system (14) is solved with GMRES using the
preconditioner M_S starting with an arbitrary initial guess y_0, and let s_m denote
the preconditioned residual obtained at the m-th step. Then the preconditioned residual
vector r_{m+1} obtained at the (m+1)-st step of GMRES for solving the block system (11)
preconditioned with the matrix M of (18), and with an initial guess u_0 = [ x_0 ; y_0 ] in which x_0
is arbitrary, satisfies the inequality

    || r_{m+1} ||_2 ≤ || I − Z ||_2 || s_m ||_2 .

In particular if s_m = 0, then r_{m+1} = 0.
Proof. The preconditioned matrix for the unreduced system is of the form (22) with
Y = B^{-1}F and G = M_S^{-1}S. The residual vector s_m of the m-th GMRES approximation
associated with the reduced system is of the form,

    s_m = ρ_m(G) s_0,

in which ρ_m is the m-th residual polynomial, which minimizes ||p(G) s_0||_2 among all polynomials
p of degree m satisfying the constraint: p(0) = 1.
Consider the polynomial of degree m+1 defined by

    q(t) = ( 1 − t ) ρ_m(t).

It is clear that q(0) = 1.
The residual of u_{m+1}, the (m+1)-st approximate solution obtained by the GMRES algorithm
for solving the preconditioned unreduced system, minimizes ||p(Z) r_0||_2 over all polynomials p
of degree m+1 which are consistent, i.e., such that p(0) = 1. Therefore,

    || r_{m+1} ||_2 ≤ || q(Z) r_0 ||_2 = || ( I − Z ) ρ_m(Z) r_0 ||_2 .

Using the equality established in the lemma, we now observe that

    ( I − Z ) ρ_m(Z) r_0 = [ 0  −Y ρ_m(G) ; 0  ( I − G ) ρ_m(G) ] r_0 = [ 0  −Y ; 0  I − G ] [ 0 ; s_m ].

The first matrix in the right-hand side of the last equality is nothing but I − Z. Hence,
the residual vector r_{m+1} is such that

    || r_{m+1} ||_2 ≤ || I − Z ||_2 || s_m ||_2 ,

which completes the proof. 2
It is also interesting to relate the convergence of this algorithm to that of the block-diagonal
approach in the particular case when M_S = C. This case corresponds to a block
Gauss-Seidel iteration. We can exploit Young and Frankel's theory for 2-cyclic matrices
to compare the convergence rates of this and the block Jacobi approach. Indeed, in this
case, we have from (19) that

    I − M^{-1}A = [ 0  −B^{-1}F ; 0  C^{-1}E B^{-1}F ].

Therefore, the nonzero eigenvalues of this matrix are the squares of those of the matrix
I − M^{-1}A associated with the block-Jacobi preconditioner of Section 3.2.
4 Numerical Experiments
This section is organized as follows. In Section 4.1 we describe the test problems and
list the methods that we use. In Section 4.2, we illustrate for comparison purposes the
difficulty of incomplete LU factorizations for solving these problems in a fully-coupled
manner. In Section 4.3, we make some comments in regard to domain decomposition
types of reorderings. In Section 4.4 we show some results of the new preconditioners on a
simple PDE problem. Finally, in Sections 4.5 and 4.6, we present the results of the new
preconditioners on more realistic problems arising from the incompressible Navier-Stokes
equations.
Linear systems were constructed so that the solution is a vector of all ones. A zero
initial guess for right-preconditioned FGMRES [20] restarted every 20 iterations was used
to solve the systems. The tables show the number of iterations required to reduce the
residual norm by a factor of 10^{-7}. The iterations were stopped when 300 matrix-vector multiplications
were reached, indicated by a dagger (†). The codes were written in FORTRAN 77
using many routines from SPARSKIT [23], and run in single precision on a Cray C90
supercomputer.
4.1 Test problems and methods
The first set of test problems is a finite difference Laplace equation with Dirichlet boundary
conditions. Three different sized grids were used. The matrices were reordered using a
domain decomposition reordering with 4 subdomains. In the following tables, n is the
order of the matrix, nnz is the number of nonzero entries, nB is the order of the B
submatrix, and nC is the order of the C submatrix.
The second set of test matrices were extracted from the example incompressible Navier-Stokes
problems in the FIDAP [14] package. All problems with zero C submatrix were
tested. In the case of transient problems, the matrices are the Jacobians when the Newton
iterations had converged. The matrices are reordered so that the continuity equations are
Grid       n     nnz    nB    nC
32 by 32   961   4681   900   61
48 by 48   2209  10857  2116  93
64 by 64   3969  19593  3844  125

Table 1: Laplacian test problems.
ordered last. The scaling of many of the matrices is poor, since each matrix contains
different types of equations. Thus, we scale each row to have unit 2-norm, and then scale
each column the same way. The problems are all originally nonsymmetric except 4, 12,
14 and 32.
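The two-pass scaling described above can be sketched as follows; the 2×2 matrix is invented. Note that after the column pass the rows are, in general, no longer of unit norm, while every column is.

```python
# Scale each row to unit 2-norm, then each column the same way.
def scale_rows(A):
    out = []
    for row in A:
        n = sum(v * v for v in row) ** 0.5
        out.append([v / n for v in row])
    return out

def transpose(A):
    return [list(col) for col in zip(*A)]

A = [[3.0, 4.0], [0.0, 5.0]]
A = scale_rows(A)                          # rows now have unit 2-norm
A = transpose(scale_rows(transpose(A)))    # columns scaled the same way

# every column of the result has unit 2-norm
for col in transpose(A):
    assert abs(sum(v * v for v in col) ** 0.5 - 1.0) < 1e-12
```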
Matrix   n     nnz    nB    nC    Problem
—        —     —      —     —     Hamel flow
EX12     3973  79078  2839  1134  Stokes flow
—        —     —      —     —     Surface disturbance attenuation
EX23     1409  42761  1008  401   Fountain flow
—        —     —      —     —     Coating
EX26     2163  74465  1706  457   Driven thermal convection
EX28     2603  77031  1853  750   Two merging liquids
—        —     —      —     —     Species deposition
—        —     —      —     —     Radiation heat transfer
EX36     3079  53099  2575  504   Chemical vapor deposition

Table 2: FIDAP example matrices.
The third set of test problems is from a finite-element discretization of the square lid-
driven cavity problem. Rectangular elements were used, with biquadratic basis functions
for velocities, and linear discontinuous basis functions for pressure. We will show our
results for problems with Reynolds number 0, 500, and 1000. All matrices arise from a
mesh of 20 by 20 elements, leading to matrices of size n = 4562 having nnz = 138,187
nonzero entries. These matrices have 3363 velocity unknowns and 1199 pressure
unknowns. The matrices are scaled the same way as for the FIDAP matrices; the problems
are otherwise very difficult to solve.
We will use the following names to denote the methods that we tested.
ILUT(nfil) and ILUTP(nfil) Incomplete LU factorization with threshold of nfil nonzeros
per row in each of the L and U factors. This preconditioner will be described in
Section 4.2.
PAR(lfil) Partial approximate inverse preconditioner described in Section 2.2.3, using
lfil nonzeros per row in M 2 .
ABJ Approximate block-Jacobi preconditioner described in Section 3.2. This preconditioner
only applies when C ≠ 0.
ABLU(lfil) Approximate block LU factorization preconditioner described in Section 3.3.
The approximation (6) to S with lfil nonzeros per column of Y was used.
ABLU y(lfil) Same as above, but using Y whenever B^{-1}F needs to be applied in step
3 of Algorithm 3.1.
ABLU s(lfil ) Approximate block LU factorization preconditioner, using (7) to approximate
S \Gamma1 with lfil nonzeros per column when approximating the last block column
of the inverse of A.
ABGS(lfil) Approximate block Gauss-Seidel preconditioner described in Section 3.4.
The approximation (6) to S with lfil nonzeros per column of Y was used.
The storage requirements for each preconditioner are given in Table 3. The ILUT
preconditioner to be described in the next subsection requires considerably more storage
than the approximate block-partitioned factorizations, since its storage depends on n
rather than nC. Because the approximation to S^{-1} discards the upper block, the storage
for it is less than lfil × nC. The storage required for S̃ is more difficult to estimate since it
involves the product of two sparse matrices. It is generally less than 2 × lfil × nC; Table
11 in Section 4.5 gives the exact number of nonzeros in S̃ for the FIDAP problems.
4.2 ILU for the fully-coupled system
We wish to compare our new preconditioners with the most general, and in our experi-
ence, one of the most effective general-purpose preconditioners for solving the fully-coupled
system. In particular, we show results for ILUT, a dual-threshold, incomplete LU factorization
preconditioner based on a drop-tolerance and the maximum number of new fill-in
elements allowed per row in each L and U factor. This latter threshold allows the storage
for the preconditioner to be known beforehand. Drop-tolerance ILU rather than level-fill
ILU is often more effective for indefinite problems where numerical values play a much
more important role. A variant that performs column pivoting, called ILUTP, is even
more suitable for highly indefinite problems.
Method         Matrices   Matrix locations
ABJ            none       none
ABLU(lfil)     S̃          less than 2 × lfil × nC
ABLU_y(lfil)   S̃, Y       less than 2 × lfil × nC, plus lfil × nC for Y
ABGS(lfil)     S̃          less than 2 × lfil × nC

Table 3: Storage requirements for each preconditioner.
We use a small modification that we have found to often perform better and rarely
worse on matrices that have a wide ranging number of elements per row or column. This
arises for various reasons, including the fact that the matrix contains the discretization of
different equations. Instead of counting the number of new fill-ins, we keep the nonzeros
in each row of L and U fixed at nfil , regardless of the number of original nonzeros in that
row. We also found better performance when keeping nfil constant rather than having it
increase or decrease as the factorization progresses.
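The modified fill-in rule amounts to keeping the nfil largest-magnitude entries of each computed row of L and U. A small sketch of such a dropping helper (illustrative, not the ILUT code itself):

```python
# Keep a fixed number nfil of largest-|value| entries per row, regardless
# of how many nonzeros the original row had.
def keep_largest(row, nfil):
    """row: dict {col: value}; returns the nfil largest-magnitude entries."""
    kept = sorted(row.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return dict(kept[:nfil])

row = {0: 0.01, 3: -2.5, 7: 0.4, 9: 1.1, 12: -0.05}
assert keep_largest(row, 3) == {3: -2.5, 9: 1.1, 7: 0.4}
```

Keeping the count fixed per row, rather than counting new fill-ins, makes the storage predictable even when rows have widely varying numbers of original nonzeros.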
If A is highly indefinite or has large nonsymmetric parts, an ILU factorization often
produces unstable L and U factors, i.e., ||(LU)^{-1}|| can be extremely large, caused by the
long recurrences in the forward and backward triangular solves [11]. To illustrate this
point, we computed for a number of factorizations the rough lower bound

    ||(LU)^{-1} e||_∞ ≤ ||(LU)^{-1}||_∞ ,

where e is a vector of all ones. For the FIDAP example matrix EX07, modeling natural
convection, with order 1633 and 46626 nonzeros, we see in Table 4 that the norm bound
increases dramatically as nfil is decreased in the incomplete factorization. GMRES could
not solve the linear systems with these factorizations as the preconditioner. The matrix
we chose is a striking example because it can be solved without preconditioning.

Table 4: Estimate of ||(LU)^{-1}||_∞ from ILUT factors for EX07.
To illustrate the difficulty of solving the FIDAP problems with ILUTP, we progressively
allowed more fill-in until the problem could be solved, incrementing nfil in multiples
of 10, with no drop tolerance. The results are shown in Table 5. For these types of prob-
lems, it is typical that very large amounts of fill-in must be used for the factorizations to
be successful. An iterative solution was not attempted if the LU condition lower bound
was greater than 10 . If a zero pivot must be used, ILUT and ILUTP attempt to complete
the factorization by using a small value proportional to the norm of the row. The
matrices were taken in their original banded ordering, where the degrees of freedom of a
node or element are numbered together. As discussed in the next subsection, this type of
ordering having low bandwidth is often essential for an ILU-type preconditioning-many
problems including these cannot be solved otherwise.
Matrix   nfil
EX06     50

Table 5: nfil required to solve FIDAP problems with ILUTP.
We should note that ILUTP is occasionally worse than ILUT. This can be alleviated
somewhat by using a low value of mbloc, a parameter in ILUTP that determines how
far to search for a pivot. In summary, indefinite problems such as these arising from the
incompressible Navier-Stokes equations may be very tough for ILU-type preconditioners.
4.3 Domain decomposition reordering considerations
Graph partitioners subdivide a domain into a number of pieces and can be used to give the
domain decomposition reordering described in Section 1. This is a technique to impose
a block-partitioned structure on the matrix, and adapts it for parallel processing, since
B is now a block-diagonal matrix. This technique is also useful if B is highly indefinite
and produces an unstable LU factorization; by limiting the size of the factorization, the
instability cannot grow beyond a point for which the factorization is not useful. For
general, nonsymmetric matrices, the partitioner may be applied to a symmetrized graph.
In Table 6 we show some results of ILUT(40) on the Driven cavity problem with
different matrix reorderings. We used the original unblocked ordering where the degrees
of freedom of the elements are ordered together, the blocked ordering where the continuity
equations are ordered last, and a domain decomposition reordering found using a simple
automatic recursive dissection procedure with four subdomains. This latter ordering produced
882 interface nodes, the remaining nodes being internal to the subdomains.

Re.    Unblocked   Blocked   DD ordered
1000   78          †         51

Table 6: Effect of ordering on ILUT for Cavity problems.
The poorer quality of the incomplete factorization for the Driven cavity problems in
block-partitioned form is due to the poor ordering rather than instability of the L and U
factors; in fact, zero pivots are not encountered. For the problem with Reynolds number
0, the unblocked format produces 745,187 nonzeros in the strictly lower-triangular part
during the incomplete factorization (which is then dropped down to less than n × nfil
nonzeros), while the block-partitioned format produces 2,195,688 nonzeros, almost
three times more.
The factorization for the domain decomposition reordered matrices encounters many
zero pivots when it reaches the (2,2) block. These latter orderings do not necessarily
cause ILUT to fill in zeros on the diagonal. Nevertheless, the substitution of a small pivot
described above seems to be effective here. The domain decomposition reordering also
reduces the amount of fill-in because of the shape of the matrix (a downward pointing
arrow). Combined with its tendency to limit the growth of instability, the results show
this reordering is advantageous even on serial computers.
In Table 7 we compare the difficulty of solving the B and S̃ subsystems for the blocked
and domain decomposition reorderings of the Driven cavity problems. S̃ was computed
as S̃ = C − E Y, with Y computed using the approximate inverse technique with lfil
of 30. Here we used ILUT(30) and only solved the linear systems to a tolerance of 10^{-5}.
Solves with these submatrices in the block-partitioned preconditioners usually need to be
much less accurate. In most of the experiments that follow, we used unpreconditioned
iterations to a tolerance of 10^{-1} or 100 matrix-vector multiplications to solve with B and
S̃. Other methods would be necessary depending on the difficulty of the problems. The
table gives an idea of how difficult it is to solve with B and S̃, and again shows the
advantage of using domain decomposition reorderings for hard problems.

Re.    Blocked   DD ordered
1000   †  †      7

Table 7: Solving with B and S̃ for different orderings of A.
4.4 Test results for the Laplacian problem
In Tables 8 and 9 we present the results for the Laplacian problem with three different grid
sizes, using no preconditioning, approximate block diagonal, partial approximate inverse,
approximate block LU, and approximate block Gauss-Seidel preconditioners. Note that
in Table 9, an lfil of zero for the approximate block LU and Gauss-Seidel preconditioners
indicates the preconditioners obtained with Y = 0, i.e., S̃ = C,

    M = [ B  0 ; E  C ] [ I  B^{-1}F ; 0  I ]

and

    M = [ B  0 ; E  C ],

respectively.
Table 8: Test results for the Laplacian problem.
Table 9: Test results for the Laplacian problem.
4.5 Test results for the FIDAP problems
For the block-partitioned factorization preconditioners, unpreconditioned GMRES, restarted every 20 iterations, was used to approximately solve the inner systems involving B and ~S, by reducing the initial residual norm by a factor of 0.1, or using up to 100 matrix-vector multiplications. Solves with the matrix B are usually not too difficult because, for most problems, it is positive definite. A zero initial guess for these solves was used. The results for a number of the preconditioners with various options are shown in Table 10. The best preconditioner appears to be ABLU; using Y for B^-1 F is better than solving a system with B very inaccurately. The number of nonzeros in ~S is small, as illustrated by Table 11 for two values of lfil.
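The inner-solve rule just described (unpreconditioned GMRES restarted every 20 iterations, stopped once the initial residual norm is reduced by a factor of 0.1 or after 100 matrix-vector products) can be sketched as follows. This is a minimal NumPy illustration, not the authors' code; the matrix A stands in for B or ~S, and all names are our own.

```python
import numpy as np

def gmres_restarted(A, b, x0=None, restart=20, rtol=0.1, max_matvec=100):
    """Unpreconditioned restarted GMRES: stop when ||r|| <= rtol*||r0||
    or after max_matvec matrix-vector products (the inner-solve rule above)."""
    n = b.size
    x = np.zeros(n) if x0 is None else x0.copy()
    r = b - A @ x
    target = rtol * np.linalg.norm(r)
    matvecs = 0
    while np.linalg.norm(r) > target and matvecs < max_matvec:
        m = min(restart, max_matvec - matvecs)
        Q = np.zeros((n, m + 1))
        H = np.zeros((m + 1, m))
        beta = np.linalg.norm(r)
        Q[:, 0] = r / beta
        k_used = 0
        for k in range(m):                   # Arnoldi process
            w = A @ Q[:, k]
            matvecs += 1
            for i in range(k + 1):           # modified Gram-Schmidt
                H[i, k] = Q[:, i] @ w
                w -= H[i, k] * Q[:, i]
            H[k + 1, k] = np.linalg.norm(w)
            k_used = k + 1
            if H[k + 1, k] < 1e-14:          # happy breakdown
                break
            Q[:, k + 1] = w / H[k + 1, k]
        # small least-squares problem min ||beta*e1 - H y||
        e1 = np.zeros(k_used + 1)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:k_used + 1, :k_used], e1, rcond=None)
        x += Q[:, :k_used] @ y
        r = b - A @ x
    return x, matvecs
```

A production code would use Givens rotations inside the Arnoldi loop instead of an explicit least-squares solve, but the stopping rule is the point here.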
Table 10: Test results for the FIDAP problems.
4.6 Test results for the Driven cavity problems
The driven cavity problems are much more challenging because the B block is no longer positive definite, and in fact acquires larger and larger negative eigenvalues as the Reynolds number increases. For these problems, the unpreconditioned GMRES iterations with B were done to a tolerance of 10^-3 or a maximum of 100 matrix-vector multiplications. Again, ABLU y appears to be the best preconditioner. The results are shown in Table 12.
lfil
EX26 13395 21468
EX36 13621 21063
Table 11: Number of nonzeros in ~S.
Table 12: Test results for the Driven cavity problems.
Conclusions
We have presented a few preconditioners which are defined by combining two ingredients: (1) a sparse approximate inverse technique for obtaining a preconditioner for the Schur complement or a part of the inverse of A, and (2) a block factorization for the full system. The Schur complement S which appears in the block factorization is approximated by its preconditioner. Approximate inverse techniques [6] are used in different ways to approximate either S directly or a part of A^-1.
As can be seen by comparing Tables 5 and 10, we can solve more problems with the
block approach than with a standard ILU factorization. In addition, this is typically
achieved with a far smaller memory requirement than ILUT or a direct solver. The better
robustness of these methods is due to the fact that solves are only performed for small
matrices. In effect, we are implicitly using the power of the divide-and-conquer strategy
which is characteristic of domain decomposition methods. The smaller matrices obtained
from the block partitioning can be preconditioned with a standard ILUT approach. The
larger matrices use a block-ILU, and the glue between the two is the preconditioning of
the Schur complement.
Acknowledgements
The authors wish to acknowledge the support of the Minnesota
Supercomputer Institute which provided the computer facilities and an excellent environment
to conduct this research.
--R
Incomplete block matrix factorization preconditioning methods.
Iterative Solution Methods.
On approximate factorization methods for block matrices suitable for vector and parallel processors.
Iterative solution of large sparse linear systems arising in certain multidimensional approximation problems.
Approximate inverse preconditioners for general sparse matrices.
Block preconditioning for the conjugate gradient method.
Approximate inverse preconditioning for sparse linear systems.
Parallelizable block diagonal preconditioners for the compressible Navier-Stokes equations
A new approach to parallel preconditioning with sparse approximate inverses.
A stability analysis of incomplete LU factorizations.
Multigrid and Krylov subspace methods for the discrete Stokes equations.
Fast nonsymmetric iterations and preconditioning for Navier-Stokes equations
FIDAP: Examples Manual
Parallel preconditioning and approximate inverses on the Connection Machine.
A comparison of domain decomposition techniques for elliptic partial differential equations and their parallel implementation.
On a family of two-level preconditionings of the incomplete block factorization type
Factorized sparse approximate inverse preconditionings I.
Incomplete block factorizations as preconditioners for sparse SPD matrices.
A flexible inner-outer preconditioned GMRES algorithm
ILUT: A dual threshold incomplete LU factorization.
Preconditioned Krylov subspace methods for CFD applications
SPARSKIT: a basic tool kit for sparse matrix computations
--TR
--CTR
N. Guessous , O. Souhar, Multilevel block ILU preconditioner for sparse nonsymmetric M-matrices, Journal of Computational and Applied Mathematics, v.162 n.1, p.231-246, 1 January 2004
Kai Wang , Jun Zhang, Multigrid treatment and robustness enhancement for factored sparse approximate inverse preconditioning, Applied Numerical Mathematics, v.43 n.4, p.483-500, December 2002
Prasanth B. Nair , Arindam Choudhury , Andy J. Keane, Some greedy learning algorithms for sparse regression and classification with mercer kernels, The Journal of Machine Learning Research, 3, 3/1/2003
Edmond Chow , Michael A. Heroux, An object-oriented framework for block preconditioning, ACM Transactions on Mathematical Software (TOMS), v.24 n.2, p.159-183, June 1998
Howard C. Elman , Victoria E. Howle , John N. Shadid , Ray S. Tuminaro, A parallel block multi-level preconditioner for the 3D incompressible Navier--Stokes equations, Journal of Computational Physics, v.187 n.2, p.504-523, 20 May
Michele Benzi, Preconditioning techniques for large linear systems: a survey, Journal of Computational Physics, v.182 n.2, p.418-477, November 2002 | sparse approximate inverse;block-partitioned matrix;navier-stokes;schur complement;preconditioning |
272879 | Circuit Retiming Applied to Decomposed Software Pipelining. | AbstractThis paper elaborates on a new view on software pipelining, called decomposed software pipelining, and introduced by Gasperoni and Schwiegelshohn, and by Wang, Eisenbeis, Jourdan, and Su. The approach is to decouple the problem into resource constraints and dependence constraints. Resource constraints management amounts to scheduling an acyclic graph subject to resource constraints for which an efficiency bound is known, resulting in a bound for loop scheduling. The acyclic graph is obtained by cutting some particular edges of the (cyclic) dependence graph. In this paper, we cut edges in a different way, using circuit retiming algorithms, so as to minimize both the longest dependence path in the acyclic graph, and the number of edges in the acyclic graph. With this technique, we improve the efficiency bound given for Gasperoni and Schwiegelshohn algorithm, and we reduce the constraints that remain for the acyclic problem. We believe this framework to be of interest because it brings a new insight into the software problem by establishing its deep link with the circuit retiming problem. | Introduction
SOFTWARE PIPELINING is an instruction-level loop
scheduling technique for achieving high performance on
processors such as superscalar or VLIW (Very Long Instruction
Word) architectures. The main problem is to
cope with both data dependences and resource constraints
which make the problem NP-complete in general. The software
pipelining problem has motivated a great amount of
research. Since the pioneering work of Rau and Glaeser [1],
several authors have proposed various heuristics [2], [3], [4],
[5], [6] in various frameworks. An extended survey on software
pipelining is provided in [7].
Recently, a novel approach for software pipelining, called
decomposed software pipelining, has been proposed simultaneously
by Gasperoni and Schwiegelshohn [8], and by
Wang, Eisenbeis, Jourdan, and Su [9]. The idea is to decompose
the NP-complete software pipelining problem into
two subproblems: a loop scheduling problem ignoring resource
constraints, and an acyclic graph scheduling problem, for which efficient techniques (such as list scheduling
for example) are well known. Although splitting the problem
in two subproblems is clearly not an optimal strategy, Wang, Eisenbeis, Jourdan and Su have demonstrated,
through an experimental evaluation on a few loops from
the Livermore Benchmark Kernels, that such an approach
P.-Y. Calland is supported by a grant of Région Rhône-Alpes. A. Darte and Y. Robert are supported by the CNRS-ENS Lyon-INRIA project ReMaP.
The authors are with Laboratoire LIP, URA CNRS 1398, École Normale Supérieure de Lyon, F-69364 LYON Cedex 07, e-mail: [Pierre-Yves.Calland,Alain.Darte,Yves.Robert]@ens-lyon.fr
is very promising with respect to time efficiency and space
efficiency.
In both approaches, the technique is to pre-process the
loop data dependence graph (which may include cycles) by
cutting some dependence edges. After the pre-processing,
the modified graph becomes acyclic and classical scheduling
techniques can be applied on it to generate the "pattern"
(or "kernel") of a software pipelining loop. However, the
way edges are cut is ad-hoc in both cases, and no general
framework is given that explains which edges could and/or
should be cut. The main contribution of this paper is to
establish that this pre-processing of the data dependence
graph has a deep link with the circuit retiming problem.
The paper is organized as follows: in Section II, we describe our software pipelining model more precisely. In
Section III, we recall the main idea of decomposed software
pipelining: we illustrate this novel technique through
Gasperoni and Schwiegelshohn algorithm, and we show
that decomposed software pipelining can be re-formulated
in terms of retiming algorithms that are exactly the tools
needed to perform the desired edges cut. Then, we demonstrate
the interest of our framework by addressing two optimization
problems:
• In Section IV, we show how to cut edges so that the
length of the longest path in the acyclic graph is minimized.
With this technique, we improve the performance bound
given for Gasperoni and Schwiegelshohn algorithm.
• In Section V, we show how to cut the maximal number
of edges, so as to minimize the number of constraints
remaining when processing the acyclic graph. This criteria
is not taken into account, neither in Gasperoni and
Schwiegelshohn algorithm, nor in Wang, Eisenbeis, Jourdan
and Su algorithm.
Finally, we discuss some extensions in Section VI. We
summarize our results and give some perspectives in Section
VII.
II. A simplified model for the software
pipelining problem
We first present our assumptions before discussing their
motivations.
A. Problem formulation
In this paper, we consider a loop composed of several
operations that are to be executed a large number of times
on a fine-grain architecture. We assume both resource
constraints and dependence constraints, but, compared to
more general frameworks, we make the following simplifying
hypotheses:
IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, VOL. XX, NO. Y, MONTH 1996
Resources. The architecture consists of p identical, non-pipelined resources. The constraint is that, in each cycle, the same resource cannot be used more than once.
Dependences. Dependences between operations are captured by a doubly weighted graph G = (V, E, δ, d). V is the set of vertices of G, one vertex per operation in the loop. E is the set of edges in G. For each edge e in E, d(e) is a nonnegative integer, called the dependence distance. For each vertex u in V, δ(u) is a nonnegative integer, called the delay. δ and d model the fact that, for each edge e = (u, v), the operation v at iteration i + d(e) has to be issued at least δ(u) cycles after the start of the operation u at iteration i. We assume that the sum of dependence distances along any cycle is positive.
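For illustration, the model above can be encoded directly, with delays attached to vertices and distances to edges. The class and the toy loop below are our own hypothetical sketch, not part of the paper:

```python
from dataclasses import dataclass

@dataclass
class DepGraph:
    """Dependence graph G = (V, E, delta, d): delta[u] is the delay of
    operation u; each edge is a triple (u, v, d) with distance d >= 0."""
    delta: dict
    edges: list

    def check(self):
        # delays and distances are nonnegative; endpoints are declared operations
        assert all(dl >= 0 for dl in self.delta.values())
        assert all(d >= 0 and u in self.delta and v in self.delta
                   for (u, v, d) in self.edges)

# A toy two-operation loop: B at iteration k uses A at iteration k,
# and A at iteration k+1 uses B at iteration k (hypothetical numbers).
G = DepGraph(delta={"A": 1, "B": 2},
             edges=[("A", "B", 0), ("B", "A", 1)])
G.check()
```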
These hypotheses are those given by Gasperoni and
Schwiegelshohn in the non pipelined case. Before discussing
the limitations of this simplified model, let us illustrate
the notion of operations, iterations, delays, and
dependence distance, with the following example. We will
work on this example throughout the paper.
The loop has 6 operations (A, B, C, D, E, and F ) and
N iterations, each operation is executed N times. N is
a parameter, of unknown value, possibly very large. The
associated graph G is given in Figure 1. Delays are depicted
in square boxes, for example the delay of operation F is 10
times greater than the delay of operation A. Dependence
distances express the fact that some computations must be
executed in a specified order so as to preserve the semantics
of the loop. For example, operation A at iteration k writes a(k), hence it must precede computation B at iteration k + 2, which reads this value. This constraint is captured by the label equal to 2 associated to the edge (A, B) in the
dependence graph of Figure 1.
Fig. 1. An example of dependence graph G.
The software pipelining problem is to find a schedule σ that assigns an issue time σ(u, k) to each operation instance (u, k) (operation u at iteration k). Each edge e = (u, v) in the graph gives rise to a constraint for scheduling:

σ(v, k + d(e)) ≥ σ(u, k) + δ(u)   (1)

Valid schedules are those schedules satisfying both dependence constraints (expressed by Equation 1) and resource
constraints. Because of the regular structure of the software pipelining problem, we usually search for a cyclic (or modulo) schedule σ: we aim at finding a nonnegative integer λ (called the initiation interval of σ) and constants c_u such that σ(u, k) = λk + c_u.
Because the input loop is supposed to execute many iterations (N is large), we focus on the asymptotic behavior of σ. The initiation interval is a natural performance estimator of σ, as 1/λ measures σ's throughput. Note that if the reduced dependence graph G is acyclic and if the target machine has enough processors, then λ can be zero (this type of schedule has infinite throughput).
A variant consists in searching for a nonnegative rational λ = a/b and letting σ(u, k) = λk + c_u (with rational constants c_u). This amounts to unrolling the input loop by a factor b. We come back to this variant in Section VI. Note also that rational cyclic schedules are dominant in the case of unlimited resources [4].
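For a cyclic schedule σ(u, k) = λk + c_u, constraint (1) reduces to λ·d(e) + c_v − c_u ≥ δ(u) for every edge, so validity can be checked edge by edge. A minimal sketch (the toy graph and numbers are ours, not the paper's example):

```python
def is_valid_cyclic(delta, edges, lam, c):
    """Check the dependence constraints (Equation 1) for a cyclic schedule
    sigma(u, k) = lam*k + c[u]: for every edge (u, v) with distance d,
    sigma(v, k + d) >= sigma(u, k) + delta(u), i.e.
    lam*d + c[v] - c[u] >= delta[u]."""
    return all(lam * d + c[v] - c[u] >= delta[u] for (u, v, d) in edges)

# Toy two-operation recurrence: B_k needs A_k; A_{k+1} needs B_k.
delta = {"A": 1, "B": 2}
edges = [("A", "B", 0), ("B", "A", 1)]
print(is_valid_cyclic(delta, edges, lam=3, c={"A": 0, "B": 1}))  # True
print(is_valid_cyclic(delta, edges, lam=2, c={"A": 0, "B": 1}))  # False
```

Here the cycle forces λ ≥ 3: the edge (B, A) requires λ·1 + c_A − c_B ≥ δ(B) = 2.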
B. Limitations of the model
Compared to more sophisticated models, where more
general programs may be handled (such as programs with
conditionals) and where more accurate architecture description
can be given, our framework is very simple and
may seem unrealistic. We (partly) agree but we would like
to raise the following arguments:
Delays. In many frameworks, the delay is defined on edges
and not on vertices as in our model. We chose the latter for
two reasons. First, we want to compare our technique to
Gasperoni and Schwiegelshohn algorithm which uses delays
on vertices. Second, we use graph retiming techniques that
are also commonly defined with delays on vertices. How-
ever, we point out that retiming in a more general model
is possible, but technically more complex. See for example
[10, Section 9].
Resources. Our architecture model, with non pipelined and
identical resources, is very simple. The main reason for
this restricted hypothesis is that we want to demonstrate,
from a theoretical point of view, that our technique allows
to derive an efficiency bound, as for Gasperoni and
Schwiegelshohn algorithm.
From a practical point of view, our technique can still be
used, even for more sophisticated resource models. Indeed,
the retiming technique that we use is independent of the
architecture model, and can be seen as a pre-loop transfor-
mation. Resource constraints are taken into account only
in the second phase of the algorithm: additional features
on the architecture can then be considered when scheduling
the acyclic graph obtained after retiming. In particular,
such a technique can be easily integrated, regardless of the
architecture details, in a compiler that has an instruction
scheduler.
Extensions of the model. Decomposed software pipelining
is still a recent approach for software pipelining. It has thus
been studied first from a theoretical point of view, and only
on restricted models. This is also the case in this paper.
The whole problem is (not yet) well understood enough
P.-Y. CALLAND, A. DARTE AND Y. ROBERT: CIRCUIT RETIMING APPLIED TO DECOMPOSED SOFTWARE PIPELINING 3
to allow more general architecture features be taken into
account. However, we believe that our new view on the
problem, in particular the use of retiming for controlling
the structure of the acyclic graph, will lead in the future to
more accurate heuristics on more sophisticated architecture
models.
III. Going from cyclic scheduling to acyclic
scheduling
Before going into the details of Gasperoni and
Schwiegelshohn heuristic (GS for short), we recall some
properties of cyclic schedules, and the main idea of decomposed
software pipelining, so as to make the rest of the
presentation clearer.
A. Some properties of cyclic scheduling
Consider a dependence graph G = (V, E, δ, d) and a cyclic schedule σ, σ(u, k) = λk + c_u, that satisfies both dependence constraints and resource constraints. Such a cyclic schedule is periodic, with period λ: the computation scheme is reproduced every λ units of time. More precisely, if instance (u, k) is assigned to begin at time t, then instance (u, k + 1) will begin at time t + λ. Therefore, we only need to study a slice of λ clock cycles to know the behavior of the whole cyclic schedule in steady state.
Let us observe such a slice, e.g. the slice SK from clock cycle Kλ up to clock cycle (K + 1)λ − 1, where K is large enough so that the steady state is reached. Figure 2 depicts the steady state of a schedule, for the graph of Figure 1, with initiation interval λ = 12.
Fig. 2. Successive slices of a schedule for graph G. Boxes in grey represent the operations initiated in the same slice SK.
Now, for each u ∈ V, perform the Euclidean division of c_u by λ: c_u = λq_u + r_u, with 0 ≤ r_u < λ. This means that one and only one instance of operation u is initiated within the slice SK: it is instance (u, K − q_u), issued r_u clock cycles after the beginning of the slice. The quantities r_u and q_u are similar to the row and column numbers introduced in [9].
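The row and column numbers come from a plain Euclidean division; in Python, divmod does exactly this (the offsets below are hypothetical):

```python
def slice_position(c, lam):
    """Decompose each offset c_u = lam*q_u + r_u with 0 <= r_u < lam.
    Within slice S_K, the single instance of u is (u, K - q_u), issued
    r_u cycles after the slice starts."""
    return {u: divmod(cu, lam) for u, cu in c.items()}   # u -> (q_u, r_u)

pos = slice_position({"A": 0, "B": 7, "C": 15}, lam=12)
print(pos)  # {'A': (0, 0), 'B': (0, 7), 'C': (1, 3)}
```

Python's floor division keeps 0 ≤ r_u < λ even for negative offsets, which matches the decomposition used here.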
If the schedule is valid, both resource constraints and dependence constraints are satisfied. Dependence constraints can be separated in two types depending on how they are satisfied: either two dependent operation instances are initiated in the same slice SK (type 1) or they are initiated in two different slices (type 2). Of course, the partial dependence graph induced by type 1 constraints is acyclic, because type 1 dependences impose a partial order on the operations, according to the order in which they appear within the slice. In Figure 2, arrows represent type 1 dependences. All other dependences (not depicted in the figure) are type 2 dependences.
The main idea of Gasperoni and Schwiegelshohn's algorithm (GS), and more generally of decomposed software pipelining, is the following. Assume that we have a valid cyclic schedule of period λ0 for a given number p0 of processors, and that we want to deduce a valid schedule for a smaller number p of processors. A way of building the new schedule is to keep the same slice structure, i.e. to keep the same operation instances within a given slice. Of course we might need to increase the slice length to cope with the reduction of resources. In other words, we have to stretch the rectangle of size λ0 × p0 to build a rectangle of size λ × p. Using this idea, type 2 dependences will still be satisfied if we choose λ large enough. Only type 1 dependences have to be taken into account for the internal reorganization of the slice (see Figure 3). But since the corresponding partial dependence graph is acyclic, we are brought back to a standard acyclic scheduling problem for which many theoretical results are known. In particular, a simple list scheduling technique provides a performance bound (and the shorter the longest path in the graph, the more accurate the performance bound).
Fig. 3. Two different allocations of a slice of graph G.
Once this main principle is settled, there remain several open questions:
1. How to choose the initial schedule? For which λ0?
2. How to choose the reference slice? There is no reason a priori to choose a slice beginning at a clock cycle congruent to 0 modulo λ0.
3. How to decide that an edge is of type 1, hence to be considered in the acyclic problem?
These three questions are of course linked together. Intuitively, it seems important to (try to) minimize both:
• the length of the longest path in the acyclic graph, which should be as small as possible as it is tightly linked to the performance bound obtained for list scheduling, and
• the number of edges in the acyclic graph, so as to reduce the dependence constraints for the acyclic scheduling problem.
We will give a precise formulation to these questions and
give a solution. Beforehand, we review the choices of GS.
B. The heuristic of Gasperoni and Schwiegelshohn
In this section we explain with full details the GS heuristic
[8]. The main idea is as outlined in the previous section.
The choice of GS for the initial schedule is to consider the optimal cyclic schedule for an infinite number of processors, i.e., without resource constraints.
B.1 Optimal schedule for unlimited resources
Consider the cyclic scheduling problem G = (V, E, δ, d) without resource constraints.
Let λ be a nonnegative integer. Define from G an edge-weighted graph G′λ = (V′, E′, d′) as follows:
• Vertices of G′λ: add to V a new vertex s: V′ = V ∪ {s}.
• Edges of G′λ: add to E an edge from s to all other vertices: E′ = E ∪ {(s, u), u ∈ V}.
• Weights of edges of G′λ: an edge e = (u, v) of E is given the weight d′(e) = δ(u) − λd(e); each edge (s, u) is given the weight 0.
We have the following well-known result:
Lemma 1: λ is a valid initiation interval ⇔ G′λ has no cycle of positive weight. Furthermore, if G′λ has no cycle of positive weight, and if t(s, u) denotes the length of the longest path, in G′λ, from s to u, then σ(u, k) = λk + t(s, u) is a valid cyclic schedule.
Lemma 1 has two important consequences:
• First, given an integer λ, it is easy to determine if λ is a valid initiation interval and, if yes, to build a corresponding cyclic schedule by applying Bellman-Ford algorithm [11] on G′λ.
• Second, the optimal initiation interval λ∞ is the smallest nonnegative integer λ such that G′λ has no positive cycle. Therefore, λ∞ = 0 if G is acyclic, and λ∞ = max{⌈δ(C)/d(C)⌉, C cycle of G} otherwise. Furthermore, a binary search combined with Bellman-Ford algorithm computes λ∞ in polynomial time.
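Both consequences can be sketched in a few lines: a Bellman-Ford-style longest-path computation on G′λ detects positive cycles, and a binary search on λ yields λ∞. The toy recurrence below is our own example:

```python
def longest_paths(delta, edges, lam):
    """Longest paths t(s, u) in G'_lam: every vertex u starts with t = 0
    (the edge s -> u of weight 0) and edge (u, v) has weight delta[u] - lam*d.
    Returns (has_positive_cycle, t), Bellman-Ford style."""
    t = {u: 0 for u in delta}
    for _ in range(len(delta) - 1):
        for (u, v, d) in edges:
            t[v] = max(t[v], t[u] + delta[u] - lam * d)
    # one more pass: any further improvement reveals a positive cycle
    cyc = any(t[u] + delta[u] - lam * d > t[v] for (u, v, d) in edges)
    return cyc, t

def lambda_infinity(delta, edges, hi):
    """Smallest nonnegative integer lam with no positive cycle in G'_lam,
    by binary search on [0, hi]; hi = sum of all delays is always safe
    because every cycle carries at least one unit of distance."""
    lo = 0
    while lo < hi:
        mid = (lo + hi) // 2
        if longest_paths(delta, edges, mid)[0]:
            lo = mid + 1
        else:
            hi = mid
    return lo

# Toy recurrence: cycle A -> B -> A with total delay 3 and total distance 1,
# hence lambda_infinity = ceil(3/1) = 3.
delta = {"A": 1, "B": 2}
edges = [("A", "B", 0), ("B", "A", 1)]
print(lambda_infinity(delta, edges, hi=sum(delta.values())))  # 3
```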
B.2 Algorithm GS for p resources
As said before, in the case of p identical processors, the algorithm consists in the conversion of the dependence graph G into an acyclic graph Ga. Ga is obtained by deleting some edges of G. As initial schedule, GS takes the optimal schedule with unlimited resources: σ∞(u, k) = λ∞k + t(s, u).
As reference slice, GS takes a slice starting at a clock cycle congruent to 0 modulo λ∞, i.e. a slice from clock cycle Kλ∞ up to clock cycle (K + 1)λ∞ − 1. This amounts to decomposing t(s, u) into t(s, u) = λ∞q_u + r_u, with 0 ≤ r_u < λ∞. In other words, r_u = t(s, u) mod λ∞. Consider an edge e = (u, v) such that r_v < r_u + δ(u). In the reference slice, the operation instance of u is (u, K − q_u), and the operation instance of v which is performed within the reference slice, namely (v, K − q_v), is started before the end of the operation (u, K − q_u). Hence this operation instance of v is not the one that depends upon completion of (u, K − q_u). In other words, K − q_v < (K − q_u) + d(e). The two operations in dependence through edge e are not initiated in the same slice. Edge e can be safely considered as a type 2 edge, and thus can be deleted from G. This is the way edges are cut in the GS heuristic¹. We are led to the following algorithm:
Algorithm 1: (Algorithm GS)
1. Compute the optimal cyclic schedule σ∞ for unlimited resources.
2. Let e = (u, v) be an edge of G. Then e will be deleted from G if and only if r_v < r_u + δ(u). This provides the acyclic graph Ga.
3. (a) Consider the acyclic graph Ga where vertices are weighted by δ and edges represent task dependences, and perform a list scheduling σa on the p processors.
(b) Let Λ = max_{u∈V}(σa(u) + δ(u)) be the makespan (i.e. the total execution time) of the schedule for Ga.
4. For all u ∈ V and k ≥ 0,
σ(u, k) = Λ(k + ⌊t(s, u)/λ∞⌋) + σa(u)
is a valid cyclic schedule.
The correctness of Algorithm GS can be found in [8]. It can also be deduced from the correctness of Algorithm CDR (see Section IV-B.1).
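Step 3 only requires a list scheduler for a delay-weighted DAG on p identical resources. A minimal greedy (Graham-style) sketch, with a hypothetical priority rule of our own choosing:

```python
def list_schedule(delta, dag_edges, p):
    """Greedy list scheduling of a delay-weighted DAG on p identical,
    non-pipelined resources. Returns (start, makespan), start[u] = sigma_a(u)."""
    succ = {u: [] for u in delta}
    indeg = {u: 0 for u in delta}
    for (u, v) in dag_edges:
        succ[u].append(v)
        indeg[v] += 1
    ready = [u for u in delta if indeg[u] == 0]
    earliest = {u: 0 for u in delta}     # dependence-ready time
    free_at = [0] * p                    # per-resource availability
    start = {}
    while ready:
        # hypothetical priority rule: pick the op that becomes ready first
        u = min(ready, key=lambda x: earliest[x])
        ready.remove(u)
        i = min(range(p), key=lambda j: free_at[j])   # earliest free resource
        start[u] = max(free_at[i], earliest[u])
        free_at[i] = start[u] + delta[u]
        for v in succ[u]:
            earliest[v] = max(earliest[v], start[u] + delta[u])
            indeg[v] -= 1
            if indeg[v] == 0:
                ready.append(v)
    makespan = max(start[u] + delta[u] for u in delta)
    return start, makespan

# A diamond-shaped DAG on two resources (hypothetical delays).
delta = {"A": 1, "B": 2, "C": 2, "D": 1}
dag = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")]
start, Lam = list_schedule(delta, dag, p=2)
print(Lam)  # 4: A at 0, then B and C in parallel, then D
```

Any list-scheduling priority works for the correctness argument; the priority only influences the constant-factor quality of Λ.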
B.3 Performances of Algorithm GS
GS gives an upper bound to the initiation interval λ obtained by Algorithm 1. Let λopt be the optimal (smallest) initiation interval with p processors. The following inequality is established:
λ ≤ λopt + (1 − 1/p)Φ   (2)
where Φ is the length of the longest path in Ga. Moreover, owing to the strategy for cutting edges, Φ ≤ λ∞ + δmax − 1, where δmax = max_{v∈V} δ(v) (see [8]). This implies Φ ≤ λopt + δmax − 1, which leads to
λ/λopt ≤ 2 − 1/p + (1 − 1/p)(δmax − 1)/λopt.
GS is the first guaranteed algorithm. We see from equation (2) that the bound directly depends upon Φ, the length of the longest path in Ga.
Example. We go back to our example. Assume now a machine with p processors. The graph G is the graph of Figure 1. We have λ∞ = 12. In Figure 4(a), we depict the graph G′12. The different values t(s, u) for all u ∈ V are given in circles on the figure. The schedule σ∞(u, k) = 12k + t(s, u) was already represented in Figure 2: 4 processors are needed.
1 However, this is not the best way to determine type 2 edges. See Section III-C.
Figure 4(b) shows the acyclic graph Ga obtained by cutting the edges e = (u, v) such that r_v < r_u + δ(u). Finally, Figure 4(c) shows a possible schedule of operations provided by a list scheduling; its makespan Λ gives the initiation interval λ = Λ of this solution.
Fig. 4. (a): The graph G′12. (b): The acyclic graph Ga. (c): A corresponding list scheduling allocation.
C. Cutting edges by retiming
Let us summarize Algorithm GS as follows: first compute the values t(s, u) in G′λ∞ to provide the optimal schedule without resource constraints σ∞. Then take a reference slice starting at a clock cycle congruent to 0 modulo λ∞. Finally, delete from G some edges that necessarily correspond to dependences between different slices: only those edges e = (u, v) such that r_v < r_u + δ(u) are removed by algorithm GS.
However, edges that correspond to dependences between different slices are exactly those such that q_v > q_u − d(e): within the reference slice, the scheduled computation instances are (u, K − q_u) and (v, K − q_v). Therefore, the computation (v, K − q_v) depends upon (u, K − q_u), performed in the same slice, iff q_v = q_u − d(e); otherwise, it is performed in a subsequent slice, and in this case q_v > q_u − d(e).
Let us check this mathematically, for an arbitrary slice. Consider a valid cyclic schedule σ(u, k) = λk + c_u (with λ ≠ 0), and let c_u = λq_u + r_u, with t_0 ≤ r_u < t_0 + λ, where t_0 is given. For each edge e = (u, v), the dependence constraint is satisfied, thus r_v + λq_v + λd(e) ≥ r_u + λq_u + δ(u). Finally, dividing by λ, we get q_v − q_u + d(e) ≥ (r_u − r_v + δ(u))/λ > −1, hence q_v − q_u + d(e) ≥ 0. Furthermore, if q_v − q_u + d(e) = 0, then the dependence constraint directly writes r_v ≥ r_u + δ(u). We thus have:
q_v − q_u + d(e) ≥ 0, and q_v − q_u + d(e) = 0 ⇒ r_v ≥ r_u + δ(u).
Therefore, the condition for cutting the edges corresponding to dependences between different slices (i.e. those we called type 2 edges) is q_v − q_u + d(e) > 0.
Furthermore, if an edge is cut by GS, then
it is also cut by our new rule. We are led to a modified
version of GS which we call mGS. Since we cut more edges
in mGS than in GS, the acyclic graph mG a obtained by
mGS contains a subset of the edges of the acyclic graph
Ga. See Figure 5 to illustrate this fact.
Fig. 5. (a): The acyclic graph provided by Algorithm mGS. (b): A corresponding list scheduling allocation.
Actually, we need neither an initial ordering nor a reference slice any longer. All we need is to determine a function q from V to the integers such that q(v) − q(u) + d(e) ≥ 0 for each edge e = (u, v). Then, we define the acyclic graph mGa as follows: an edge e = (u, v) of E is in mGa iff q(v) − q(u) + d(e) = 0. Clearly, mGa is acyclic (assume there is a cycle, and sum up the quantities q(v) − q(u) + d(e) = 0 on this cycle to get a contradiction, since the sum of the d(e) along the cycle is positive). Finally, given mGa, we list schedule it as a DAG whose vertices are weighted by the initial δ function.
of synchronous VLSI circuits [10]. Given a graph
d), q performs a transformation of G into a new
graph G d q ) where d q is defined as follows: if
is an edge of E then
d q
This transformation can be interpreted as follows: if d(e)
represents the number of "registers" on edge e, a retiming
q amounts to suppress q(u) registers to each edge leaving
u, and to add q(v) registers to each edge entering v. A
retiming is said valid if for each edge e of E, d q (e) 0 (at
least one register per edge in G q , see Equation 3). Edges
such that d q are edges "with no register". Note that
we assumed that the sum of the d(e) on any cycle of G is
positive: using VLSI terminology, we say G is synchronous.
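A retiming and the induced acyclic graph can be sketched directly from Equation 3 (the toy edges are our own example):

```python
def retime(edges, q):
    """Apply a retiming q: edge (u, v, d) gets d_q = d + q[v] - q[u] (Equation 3)."""
    return [(u, v, d + q[v] - q[u]) for (u, v, d) in edges]

def acyclic_part(edges, q):
    """Zero-register edges of the retimed graph: this is the edge set of mG_a.
    Fails if q is not a valid retiming (some d_q < 0)."""
    rq = retime(edges, q)
    assert all(d >= 0 for (_, _, d) in rq), "invalid retiming"
    return [(u, v) for (u, v, d) in rq if d == 0]

# Toy cycle A -> B -> A with one register on the back edge.
edges = [("A", "B", 0), ("B", "A", 1)]
print(acyclic_part(edges, {"A": 0, "B": 0}))  # [('A', 'B')]
print(acyclic_part(edges, {"A": 0, "B": 1}))  # [('B', 'A')]  (the register moved)
```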
We are now ready to formulate the problem. Recall that
our goal was to answer the following two questions:
• How to cut edges so as to obtain an acyclic graph Ga whose longest path has minimal length?
• How to cut as many edges as possible so that the number of dependence constraints to be satisfied by the list-scheduling of Ga is minimized?
Using our new formulation, we can state our objectives more precisely in terms of retiming:
Objective 1 Find a retiming q that minimizes the longest
path in mG a , i.e. in terms of retiming, that minimizes the
clock period \Phi of the retimed graph.
Objective 2 Find a retiming q so that the number of edges
in mG a is minimal, i.e. distribute registers so as to leave
as few edges with no register as possible.
In Section IV, we show how to achieve the first objective.
There are several possible solutions, and in Section V, we
show how to select the best one with respect to the second
objective, and we state our final algorithm. We improve
upon GS for two reasons: first we have a better bound,
and second we cut more edges, hence more freedom for the
list scheduling.
IV. Minimizing the longest path of the acyclic
graph
There are well-known retiming algorithms that can be
used to minimize the clock period of a VLSI circuit, i.e.
the maximal weight (in terms of delay) of a path with no
register. We first recall these algorithms, due to Leiserson
and Saxe [10], then we show how they can be applied to
decomposed software pipelining.
A. Retiming algorithms
We first need some definitions. We denote by u ⇝ v a path P of G from u to v, by d(P) = Σ_{e∈P} d(e) the sum of the dependences of the edges of P, and by δ(P) = Σ_{v∈P} δ(v) the sum of the delays of the vertices of P. We define D and Δ as follows:
D(u, v) = min{d(P), u ⇝ v}, Δ(u, v) = max{δ(P), u ⇝ v and d(P) = D(u, v)}.
D and Δ are computed by solving an all-pairs shortest-path algorithm on G where edge u → v is weighted with the pair (d(e), −δ(u)). Finally, let
Φ(G) = max{δ(P), P path of G, d(P) = 0}.
Φ(G) is the length of the longest path of null weight in G (and is called the clock period of G in VLSI terminology).
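As the text says, D and Δ come from an all-pairs shortest-path computation on lexicographically ordered pairs (in the spirit of Algorithm WD in [10]); Φ(G) then follows. A sketch on a hypothetical three-operation cycle of our own:

```python
def wd_matrices(delta, edges):
    """D(u,v) = min d(P) and Delta(u,v) = max{delta(P) : d(P) = D(u,v)} over
    paths u ~> v: Floyd-Warshall on pairs (d, -delta) compared lexicographically."""
    INF = float("inf")
    V = list(delta)
    best = {(u, v): (INF, 0) for u in V for v in V}
    for (u, v, d) in edges:
        best[u, v] = min(best[u, v], (d, -delta[u]))
    for k in V:
        for u in V:
            for v in V:
                duk, puk = best[u, k]
                dkv, pkv = best[k, v]
                cand = (duk + dkv, puk + pkv)
                if cand < best[u, v]:
                    best[u, v] = cand
    D = {uv: b[0] for uv, b in best.items() if b[0] < INF}
    # add the delay of the final vertex, which the edge weights leave out
    Delta = {uv: -best[uv][1] + delta[uv[1]] for uv in D}
    return D, Delta

# Hypothetical three-operation cycle with one register on the back edge.
delta = {"A": 1, "B": 2, "C": 3}
edges = [("A", "B", 0), ("B", "C", 0), ("C", "A", 1)]
D, Dl = wd_matrices(delta, edges)
# Clock period Phi(G): longest zero-register path (a single vertex counts too).
phi = max(max(Dl[uv] for uv in D if D[uv] == 0), max(delta.values()))
print(phi)  # 6: the zero-register path A -> B -> C has delay 1 + 2 + 3
```

Lexicographic order works here because the first component (distance) is minimized and, among minimum-distance paths, the negated delay is minimized, i.e. the delay is maximized.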
Theorem 1: (Theorem 7 in [10]) Let G = (V, E, δ, d) be a synchronous circuit, let φ be an arbitrary positive real number, and let q be a function from V to the integers. Then q is a legal retiming of G such that Φ(G_q) ≤ φ if and only if:
1. q(u) − q(v) ≤ d(e) for every edge e = (u, v) of E;
2. q(u) − q(v) ≤ D(u, v) − 1 for all vertices u, v of V such that Δ(u, v) > φ.
Theorem 1 provides the basic tool to establish the following
algorithm (Algorithm 2) that determines a retiming
such that the clock period of the retimed graph is minimized.
Algorithm 2: (Algorithm OPT1 in [10])
1. Compute D and Δ (see Algorithm WD in [10]).
2. Sort the elements in the range of Δ.
3. Binary search among the elements Δ(u, v) for the minimum
achievable clock period. To test whether each potential
clock period is feasible, apply the Bellman-Ford
algorithm to determine whether the conditions in Theorem
1 can be satisfied.
4. For the minimum achievable clock period found in step
3, use the values for the q(v) found by the Bellman-Ford
algorithm as the optimal retiming.
This algorithm runs in O(|V|³ log |V|), but there is a more efficient algorithm whose complexity is O(|V||E| log |V|), which is a significant improvement for sparse graphs. It runs as the previous algorithm except in
step 3 where the Bellman-Ford algorithm is replaced by the
following algorithm:
Algorithm 3: (Algorithm FEAS in [10]) Given a synchronous
d) and a desired clock period
, this algorithm produces a retiming q of G such that G q
is a synchronous circuit with clock period \Phi , if such a
retiming exists.
1. For each vertex v ∈ V, set q(v) to 0.
2. Repeat the following |V| − 1 times:
(a) Compute the graph G_q with the existing values for q.
(b) For each vertex v ∈ V, compute Δ₀(v), the maximum sum δ(P) of vertex delays along any zero-weight directed path P in G_q leading to v. This can be done in O(|E|) time.
(c) For each vertex v such that Δ₀(v) > c, set q(v) to q(v) + 1.
3. Run the same algorithm used for step 2(b) to compute Φ(G_q). If Φ(G_q) > c, then no feasible retiming exists. Otherwise, q is the desired retiming.
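A direct transcription of these steps might look as follows (an illustrative sketch with our own naming; it assumes integer edge weights d(e) and that the zero-weight subgraph of a legal retiming is acyclic, so Δ₀ can be computed by a topological pass):

```python
def retime_feas(vertices, edges, delta, c):
    """Sketch of FEAS: return a retiming q with clock period <= c, or None.
    edges are triples (u, v, d(e))."""
    q = {v: 0 for v in vertices}

    def longest_zero_weight_delay():
        # Delta0(v): max delay over zero-weight paths of G_q ending at v.
        zero = [(u, v) for (u, v, d) in edges if d + q[v] - q[u] == 0]
        succ = {v: [] for v in vertices}
        indeg = {v: 0 for v in vertices}
        for (u, v) in zero:
            succ[u].append(v)
            indeg[v] += 1
        order = [v for v in vertices if indeg[v] == 0]
        d0 = {v: delta[v] for v in vertices}
        for u in order:                       # Kahn-style traversal
            for v in succ[u]:
                d0[v] = max(d0[v], d0[u] + delta[v])
                indeg[v] -= 1
                if indeg[v] == 0:
                    order.append(v)
        return d0

    for _ in range(len(vertices) - 1):        # step 2: |V| - 1 rounds
        for v, d0v in longest_zero_weight_delay().items():
            if d0v > c:
                q[v] += 1                     # step 2(c)
    if max(longest_zero_weight_delay().values()) > c:
        return None                           # step 3: infeasible
    return q
```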
B. A new scheduling algorithm: Algorithm CDR
We can now give our new algorithm and prove that both
resource and dependence constraints are met.
Algorithm 4: (Algorithm CDR) Let G = (V, E, δ, d) be a dependence graph.
1. Find a retiming q that minimizes the length Φ of the longest path of null weight in G_q (use Algorithm 2 with the improved algorithm for step 3).
2. Delete the edges of positive weight or, equivalently, keep the edges e such that d_q(e) = 0 (i.e., the edges with no register). In this way, we obtain an acyclic graph G_a.
3. Perform a list scheduling σ_a on G_a and compute its makespan λ = max_{v∈V}(σ_a(v) + δ(v)).
4. Define the cyclic schedule σ by: σ(u, k) = σ_a(u) + λ(k + q(u)).
Note that the complexity of Algorithm CDR is determined by Step 1, whose complexity is O(|V||E| log |V|). In comparison, the complexity of Algorithm GS is O(|V||E| log(|V|δ_max)). The difference comes from the fact that Φ_opt can be searched among the |V|² values Δ(u, v), whereas λ_∞ is searched among all values between 0 and |V|δ_max. In practice, both algorithms have similar complexities.
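Steps 2-4 of Algorithm CDR can be sketched as follows (illustrative code; the greedy processor assignment is a simple list-scheduling variant of our own choosing, not the specific strategy of the paper, and the retiming q is assumed legal so that G_a is acyclic):

```python
def cdr_schedule(vertices, edges, delta, q, p):
    """Sketch of steps 2-4 of Algorithm CDR. Returns (sigma_a, lam); the
    cyclic schedule is then sigma(u, k) = sigma_a[u] + lam * (k + q[u])."""
    # Step 2: keep only the edges whose retimed weight d(e)+q(v)-q(u) is zero.
    ga = [(u, v) for (u, v, d) in edges if d + q[v] - q[u] == 0]
    preds = {v: [u for (u, w) in ga if w == v] for v in vertices}
    sigma_a, busy = {}, [0] * p        # busy[i]: time processor i frees up
    remaining = set(vertices)
    while remaining:
        ready = [v for v in remaining
                 if all(u in sigma_a for u in preds[v])]
        def release(v):                # earliest data-ready time of v
            return max([sigma_a[u] + delta[u] for u in preds[v]], default=0)
        v = min(ready, key=release)    # step 3: greedy list scheduling
        i = min(range(p), key=lambda i: busy[i])
        sigma_a[v] = max(release(v), busy[i])
        busy[i] = sigma_a[v] + delta[v]
        remaining.remove(v)
    lam = max(sigma_a[v] + delta[v] for v in vertices)   # makespan
    return sigma_a, lam
```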
P.-Y. CALLAND, A. DARTE AND Y. ROBERT: CIRCUIT RETIMING APPLIED TO DECOMPOSED SOFTWARE PIPELINING 7
B.1 Correctness of Algorithm CDR
Theorem 2: The schedule σ obtained with Algorithm CDR meets both dependence and resource constraints.
Proof: Resource constraints are obviously met because of the list scheduling and the definition of λ, which ensures that slices do not overlap. To show that the dependence constraints are satisfied for each edge e: u → v of E, we need to verify that σ(v, k + d(e)) ≥ σ(u, k) + δ(u), i.e.:

    σ_a(v) + λ·d_q(e) ≥ σ_a(u) + δ(u)    (4)

On one hand, suppose that e is not deleted, i.e. e ∈ G_a. It is equivalent to say that the weight of e after the retiming is equal to zero: d_q(e) = d(e) + q(v) − q(u) = 0. Then, since σ_a is a schedule for G_a:

    σ_a(v) ≥ σ_a(u) + δ(u)

Thus, inequality (4) is satisfied.
On the other hand, if e is deleted, then d_q(e) ≥ 1. But, by definition of λ, we have λ ≥ σ_a(u) + δ(u). Hence σ_a(v) + λ·d_q(e) ≥ λ ≥ σ_a(u) + δ(u), and inequality (4) is satisfied.
B.2 Performances of Algorithm CDR
Now, we use the same technique as in [8] in order to show that our algorithm is also guaranteed, and we give an upper bound for the initiation interval that is smaller than the bound given for Algorithm GS.
Theorem 3: Let G be a dependence graph, Φ_opt the minimum achievable clock period for G, λ the initiation interval of the schedule generated by Algorithm CDR when p processors are available, and λ_opt the best possible initiation interval for this case. Then:

    λ/λ_opt ≤ 1 + (1 − 1/p) · (Φ_opt/λ_opt)
Proof: By construction, Φ_opt is the length of the longest path in G_a; thus, with the same proof technique as in [8], i.e. a list scheduling argument, we can prove that λ ≤ λ_opt + (1 − 1/p)Φ_opt, which leads to the desired inequality.
Now we show that the bound obtained for Algorithm
CDR (Theorem 3) is always better than the bound for Algorithm
GS (see Equation 2). This is a consequence of the
following lemma:
Lemma 2: λ_∞ ≤ Φ_opt ≤ λ_∞ + δ_max − 1.
Proof: Let us apply Algorithm CDR with unlimited resources. For that, we define a retiming q such that Φ(G_q) = Φ_opt, and we define the graph G_a by deleting from G all edges e such that d_q(e) > 0. Then, we define a schedule for G_a with unlimited resources by σ_a(u) = max{δ(P) − δ(u) : P path of G_a leading to u}. The makespan of σ_a is Φ_opt by construction. Finally, we get a schedule for G by defining σ(u, k) = σ_a(u) + Φ_opt(k + q(u)); by construction, the smallest initiation interval for G thus satisfies λ_∞ ≤ Φ_opt.
Now, consider an optimal cyclic schedule σ for unlimited resources, σ(u, k) = r(u) + kλ_∞, as defined in Section III-B.1. Let q be the retiming defined from σ as in Section III-C. As proved in Section III-C, q defines a valid retiming for G, i.e. d_q(e) ≥ 0 for all edges e of G. Define G_a by deleting from G all edges e such that d_q(e) > 0 (as in Algorithm mGS). Let P be any path in G_a; each edge e: u → v of P has null retimed weight, so the dependence constraint gives r(u) + δ(u) ≤ r(v) (up to the retiming shift). Summing up these inequalities along P yields δ(P) ≤ max_v(r(v) + δ(v)) ≤ λ_∞ + δ_max − 1.
By construction, Φ(G_q) is the length of the longest path in G_a, thus Φ(G_q) ≤ λ_∞ + δ_max − 1. Finally, we have Φ_opt ≤ Φ(G_q), hence the result.
Theorem 4: The performance upper bound given for Algorithm
CDR is better than the performance upper bound
given for Algorithm GS.
Proof: This is easily derived from the fact that Φ_opt ≤ λ_∞ + δ_max − 1 ≤ λ_opt + δ_max − 1, as shown by Lemma 2.
Note: this bound is a worst-case upper bound for the initiation interval. It does not prove, however, that CDR is always better than GS.
Example. We can now apply Algorithm CDR to our key example (assume again p = 2 available processors). Φ_opt = 14, and the retiming q that achieves this clock period is obtained in two steps by Algorithm 3. Figures 6(a), 6(b) and 6(c) show the successive retimed graphs. Figure 6(d) shows the corresponding acyclic graph G_a and, finally, Figure 6(e) shows a possible schedule of operations provided by a list scheduling technique. The resulting initiation interval is better than what we found with Algorithm mGS (see Figure 5(b)) and with Algorithm GS (see Figure 4(c)).
B.3 Link between λ_∞ and Φ_opt
As shown in Lemma 2, λ_∞ and Φ_opt are very close. However, the retiming that can be derived from the schedule with initiation interval λ_∞ does not permit us to define an acyclic graph with longest path Φ_opt. In other words, looking for λ_∞ is not the right approach to minimizing the clock period of the graph. In this section, we investigate this fact more deeply, by recalling another formulation of the retiming problem given by Leiserson and Saxe [10].
Lemma 3: (Lemma 9 in [10]) Let d) be a
synchronous circuit, and let c be a positive real number.
8 IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, VOL. XX, NO. Y, MONTH 1996
Fig. 6. (a): Initial dependence graph G; (b) and (c): successive steps of retiming used in CDR; (d): corresponding acyclic graph G_a; (e): corresponding list scheduling allocation.
Then there exists a retiming q of G such that Φ(G_q) ≤ c if and only if there exists an assignment of a real value s(v) and an integer value q(v) to each vertex v ∈ V such that the following conditions are satisfied:

    −s(v) ≤ −δ(v) for every vertex v ∈ V,
    s(v) ≤ c for every vertex v ∈ V,                                          (5)
    q(u) − q(v) ≤ d(e) for every edge e: u → v of G,
    s(u) − s(v) ≤ −δ(v) for every edge e: u → v such that q(u) − q(v) = d(e).

By letting r(u) = s(u) − δ(u) for every vertex u, inequalities (5) are equivalent to:

    0 ≤ r(v) for every vertex v ∈ V,
    r(v) + δ(v) ≤ c for every vertex v ∈ V,                                   (6)
    q(u) − q(v) ≤ d(e) for every edge e: u → v of G,
    r(u) + δ(u) ≤ r(v) for every edge e: u → v such that q(u) − q(v) = d(e).
This last system permits a better understanding of all the techniques that we developed previously:
• Optimal schedule for unlimited resources. As seen in Lemma 2, the schedule σ(u, k) = r(u) + kλ_∞ satisfies System 6 with c = λ_∞, except for the second inequality: we do have r(v) ≤ λ_∞ − 1, but not necessarily r(v) + δ(v) ≤ λ_∞, and in this case Φ(G_q) ≤ λ_∞ + δ_max − 1 (see the proof of Lemma 2).
• Algorithm CDR for unlimited resources. By construction, with the retiming q such that Φ(G_q) = Φ_opt, System 6 is satisfied with c = Φ_opt, the smallest possible value for c. Therefore, this technique leads to the best cyclic schedule with unlimited resources for which the slices do not overlap (because of the second inequality). It is not always possible to find λ_∞ this way.
• Algorithms CDR and GS for p resources. The schedule obtained satisfies System 6 with c = λ, the makespan of σ_a. For CDR, q is the retiming that achieves the optimal period, whereas for GS, q is the retiming defined from the schedule with initiation interval λ_∞ (see Section III-C). For CDR, the fourth inequality is satisfied exactly for all edges such that q(u) − q(v) = d(e). However, for GS, σ_a is required to satisfy the fourth inequality for more edges than necessary (actually for all edges such that r(u) + δ(u) ≤ r(v)). Note that for both algorithms, there are additional conditions imposed by the resource constraints that do not appear in System 6.
V. Minimizing the number of edges of the
acyclic graph
Our purpose in this section is to find a retimed graph with the minimum number of null weight edges among all retimed graphs whose longest path has the best possible length Φ_opt. Removing the edges of non-null weight will then give an acyclic graph that matches both objectives stated at the end of Section III-C.
Example. Consider step 1 of Algorithm CDR, in which we use the retiming algorithm of Leiserson and Saxe [10]. The final retiming does minimize the length Φ of the longest path of null weight, but it does not necessarily minimize the number of null weight edges. See again our key example, Figure 6(c), for which Φ = 14. We can apply yet another retiming to obtain the graph of Figure 7(a).
Fig. 7. (a): The final retimed graph; (b): the corresponding acyclic graph; (c): the corresponding list scheduling allocation.
The length of the longest path of null weight is still Φ = 14, but the total number of null weight edges is smaller. This implies that the corresponding acyclic graph G_a (see Figure 7(b)) contains fewer edges than the acyclic graph of Figure 6(d) and, therefore, is likely to produce a smaller initiation interval.² That is the case in our example: we find an initiation interval equal to 19 (see Figure 7(c)). It turns out that λ = 19 is the best possible integer initiation interval with p = 2 processors: the sum of all operation delays is 37, and ⌈37/2⌉ = 19.

² List scheduling a graph which is a subset of another graph will not always produce a smaller execution time. But intuition shows that it will in most practical cases (the fewer constraints, the more freedom).
Recall that a retiming q such that Φ(G_q) ≤ Φ_opt is an integral solution to the following system (see the formulation of Theorem 1):

    q(u) − q(v) ≤ d(e) for every edge e: u → v of G,                          (7)
    q(u) − q(v) ≤ D(u, v) − 1 for all vertices u, v such that Δ(u, v) > Φ_opt.

Among these retimings, we want to select one particular retiming q for which the number of null weight edges in G_q is minimized. This can be done as follows:
Lemma 4: Let G = (V, E, δ, d) be a synchronous circuit. A retiming q such that Φ(G_q) ≤ Φ_opt and such that the number of null weight edges in G_q is minimized can be found in polynomial time by solving the following integer linear program:

    min Σ_{e∈E} v(e) subject to:
    q(v) − q(u) + d(e) + v(e) ≥ 1 for every edge e: u → v of G,               (8)
    q(u) − q(v) ≤ d(e) for every edge e: u → v of G,
    q(u) − q(v) ≤ D(u, v) − 1 for all vertices u, v such that Δ(u, v) > Φ_opt,
    v(e) ≥ 0 for every edge e of E.

Proof: Consider an optimal integer solution (q, v) to System 8. q defines a retiming for G with Φ(G_q) ≤ Φ_opt since System 7 is satisfied. Note that each v(e) is constrained by only one inequality: q(v) − q(u) + d(e) + v(e) ≥ 1. There are two cases:
• The weight of e in G_q is null, i.e. q(v) − q(u) + d(e) = 0; then v(e) = 1 is the only possibility.
• The weight of e in G_q is positive, i.e. q(v) − q(u) + d(e) ≥ 1 (recall that q and d are integers). In this case, the minimal value for v(e) is 0.
Therefore, given a retiming q, Σ_{e∈E} v(e) is minimal when it is equal to the number of null weight edges in G_q.
It remains to show that such an optimal integer solution can be found in polynomial time. For that, we write System 8 in matrix form as min{cx | Ax ≥ b}, where the constraint matrix A is built from the blocks C, C′ and I_d: C is the transpose of the |V| × |E| incidence matrix of G, C′ is the transpose of the incidence matrix of the graph whose edges are the pairs (u, v) such that Δ(u, v) > Φ_opt, and I_d is the |E| × |E| identity matrix.
An incidence matrix (such as C) is totally unimodular (see [12, page 274]). Then, it is easy to see that A is also totally unimodular. Therefore, solving the ILP Problem 8 is not NP-complete: System 8, considered as an LP problem, has an integral optimum solution (Corollary 19.1a in [12]), and such an integral solution can be found in polynomial time (Theorem 16.2 in [12]).
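The total-unimodularity argument can be checked experimentally on tiny instances by brute force. The following illustrative code (our own; exponential in the matrix size, so only usable for toy examples) verifies that every square submatrix has determinant in {−1, 0, 1}:

```python
from itertools import combinations

def is_totally_unimodular(A):
    """Brute-force total unimodularity check for a small integer matrix A."""
    def det(M):                        # integer-exact Laplace expansion
        if len(M) == 1:
            return M[0][0]
        return sum((-1) ** j * M[0][j]
                   * det([row[:j] + row[j + 1:] for row in M[1:]])
                   for j in range(len(M)))
    m, n = len(A), len(A[0])
    return all(det([[A[r][c] for c in cols] for r in rows]) in (-1, 0, 1)
               for k in range(1, min(m, n) + 1)
               for rows in combinations(range(m), k)
               for cols in combinations(range(n), k))

# Transposed incidence matrix of the directed triangle u -> v -> w -> u:
# one row per edge, -1 in the tail column and +1 in the head column.
C = [[-1, 1, 0],
     [0, -1, 1],
     [1, 0, -1]]
```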
Let us summarize how this refinement can be incorporated into our software pipelining heuristic: first, we compute Φ_opt, the minimum achievable clock period for G; then, we solve System 8 and obtain a retiming q. We define G_a as the acyclic graph whose edges have null weight in G_q: the length of the longest path in G_a is minimized and the number of edges in G_a is minimized. Finally, we schedule G_a as in Algorithm CDR. We call this heuristic the modified CDR (or simply mCDR).
Remark: Solving System 8 can be expensive, although polynomial. An optimization that permits reducing the complexity is to pre-compute the strongly connected components G_i of G and to solve the problem separately for each component G_i. Then, a retiming that minimizes the number of null weight edges in G_q is built by adding suitable constants to each retiming q_i so that all edges that link different components have positive weights. Future work will try to find a pure graph-theoretic approach to the resolution of System 8, so that the practical complexity of our software pipelining heuristic is decreased.
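The pre-computation of strongly connected components suggested in the remark can be done with any linear-time SCC algorithm; the sketch below uses Kosaraju's algorithm (our own illustrative code, not tied to the paper's implementation):

```python
def sccs(vertices, edges):
    """Kosaraju's algorithm: list of strongly connected components of G."""
    succ = {v: [] for v in vertices}
    pred = {v: [] for v in vertices}
    for (u, v) in edges:
        succ[u].append(v)
        pred[v].append(u)
    order, seen = [], set()
    def dfs(v):                        # first pass: record finish order
        seen.add(v)
        for w in succ[v]:
            if w not in seen:
                dfs(w)
        order.append(v)
    for v in vertices:
        if v not in seen:
            dfs(v)
    comps, assigned = [], set()
    for v in reversed(order):          # second pass: sweep the reverse graph
        if v in assigned:
            continue
        comp, stack = [], [v]
        assigned.add(v)
        while stack:
            u = stack.pop()
            comp.append(u)
            for w in pred[u]:
                if w not in assigned:
                    assigned.add(w)
                    stack.append(w)
        comps.append(comp)
    return comps
```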
VI. Load balancing
We have so far restricted initiation intervals to integer values. As mentioned in Section II, searching for rational initiation intervals might give better results, but at the price of an increase in complexity: searching for λ = a/b can be achieved by unrolling the original loop nest by a factor of b, thereby processing an extended dependence graph with many more vertices and edges.
In this section, we propose a simple heuristic to alleviate
potential load imbalance between processors, and for which
there is no need to unroll the graph.
Remember the principle of the four previously described heuristics (GS, mGS, CDR and mCDR). First, an acyclic graph G_a is built from G. Then, G_a is scheduled by a list scheduling technique. This defines the schedule σ_a inside each slice of length λ (the initiation interval). Finally, slices are concatenated, a slice being initiated just after the completion of the previous one.
The main weakness of this principle is that slices do not
overlap. Since the schedule in each slice has been defined
by an As-Soon-As-Possible (ASAP) list scheduling, what
usually happens is that many processors are idle during the
last time steps of the slice. The idea to remedy this problem
is to try to fill these "holes" in the schedule with the tasks of
the next slice. For that, instead of scheduling the next slice with the same schedule σ_a, we schedule it with an As-Late-As-Possible (ALAP) schedule, so that "holes" may appear in the first
time steps of the slice. Then, between two successive slices,
processors are permuted so that the computational load is
(nearly) equally balanced when concatenating both slices.
Of course dependences between both slices must now be
taken into account.
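Assuming an ASAP scheduler for the acyclic graph is available, an ALAP schedule can be obtained by the classical time-reversal trick: schedule the reversed DAG as soon as possible, then mirror the start times inside the slice. A minimal sketch (unlimited resources and our own naming, so a simplification of the resource-constrained case used in the heuristic):

```python
def asap(vertices, edges, delta):
    """ASAP start times on a DAG with unlimited processors (edge = (u, v))."""
    start = {v: 0 for v in vertices}
    for _ in vertices:                 # |V| relaxation passes suffice on a DAG
        for (u, v) in edges:
            start[v] = max(start[v], start[u] + delta[u])
    return start

def alap(vertices, edges, delta, lam):
    """ALAP by time reversal: ASAP-schedule the reversed DAG, then mirror the
    start times inside the slice [0, lam)."""
    rev = asap(vertices, [(v, u) for (u, v) in edges], delta)
    return {v: lam - rev[v] - delta[v] for v in vertices}
```

On a tight chain, ASAP and ALAP coincide; only tasks with slack move towards the end of the slice, which is exactly what creates the "holes" at the beginning of the slice.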
A precise formulation of this heuristic can be found
in [13]. Here, we only illustrate it on our key example.
Example. Figure 7(c) shows a possible allocation of an instance of G_a provided by an ASAP list scheduling. Figure 8(a) shows an allocation provided by an ALAP list scheduling, and Figure 8(b) the concatenation of these two instances. The initiation interval that we obtain is equal to 37 for two instances, i.e. λ = 37/2 = 18.5, which is better than the initiation interval obtained with Algorithm mCDR (see Figure 7(c)). This cannot be improved further: the two processors are always busy, as λ_opt = 37/2.
Fig. 8. (a): ALAP scheduling; (b): concatenation of two instances.
Another possibility is to schedule, in an acyclic manner, two (or more) slices instead of one, after the retiming has been chosen. This is equivalent to unrolling the loop after the retiming has been performed. In this case, once the first slice has been processed, the second slice can be allocated the same way, taking into account dependence constraints coming from the acyclic graph, plus the dependences between the two slices. In other words, the retiming can be seen as a pre-loop transformation that consists in changing the structure of the sub-graph induced by loop-independent edges. Once this retiming has been done, any software pipelining algorithm can still be applied.
VII. Conclusion
In this paper, we have presented a new heuristic for the
software pipelining problem. We have built upon results
of Gasperoni and Schwiegelshohn, and we have made clear
the link between software pipelining and retiming.
In the case of identical, non-pipelined resources, our
new heuristic is guaranteed, with a better bound than that
of [8]. Unfortunately, we cannot extend the guarantee to
the case of many different resources, because list scheduling
itself is not guaranteed in this case.
We point out that our CDR heuristic has a reasonable complexity, similar to that of classical software pipelining algorithms. As for mCDR, further work will be aimed at deriving an algorithmic implementation that does not require the use of Integer Linear Programming (even though the particular instance of ILP invoked in mCDR is polynomial).
Finally, note that all edge-cutting heuristics lead to cyclic
schedules where slices do not overlap (by construction).
Our final load-balancing technique is a first step to overcome
this limitation. It would be very interesting to derive
methods (more sophisticated than loop unrolling) to
synthesize resource-constrained schedules where slices can
overlap.
Acknowledgments
The authors would like to thank the anonymous referees
for their careful reading and fruitful comments.
--R
"Some scheduling techniques and an easily schedulable horizontal architecture for high performance scientific computing,"
"Software pipelining; an effective scheduling technique for VLIW machines,"
"Perfect pipelining; a new loop optimization technique,"
"Cyclic scheduling on parallel proces- sors: an overview,"
"Fine-grain scheduling under resource con- straints,"
"A frame-work for resource-constrained, rate-optimal software pipelining,"
"Software pipelining,"
"Generating close to optimum loop schedules on parallel processors,"
"Decomposed software pipelining,"
"Retiming synchronous circuitry,"
Introduction to Algorithms
Theory of Linear and Integer Program- ming
"A new guaranteed heuristic for the software pipelining problem,"
--TR
--CTR
Han-Saem Yun , Jihong Kim , Soo-Mook Moon, Time optimal software pipelining of loops with control flows, International Journal of Parallel Programming, v.31 n.5, p.339-391, October
Dongming Peng , Mi Lu, On exploring inter-iteration parallelism within rate-balanced multirate multidimensional DSP algorithms, IEEE Transactions on Very Large Scale Integration (VLSI) Systems, v.13 n.1, p.106-125, January 2005
Timothy W. O'Neil , Edwin H.-M. Sha, Combining Extended Retiming and Unfolding for Rate-Optimal Graph Transformation, Journal of VLSI Signal Processing Systems, v.39 n.3, p.273-293, March 2005
Timothy W. O'Neil , Edwin H.-M. Sha, Combining extended retiming and unfolding for rate-optimal graph transformation, Journal of VLSI Signal Processing Systems, v.39 n.3, p.273-293, March 2005
Alain Darte , Guillaume Huard, Loop Shifting for Loop Compaction, International Journal of Parallel Programming, v.28 n.5, p.499-534, Oct. 2000
Greg Snider, Performance-constrained pipelining of software loops onto reconfigurable hardware, Proceedings of the 2002 ACM/SIGDA tenth international symposium on Field-programmable gate arrays, February 24-26, 2002, Monterey, California, USA
Karam S. Chatha , Ranga Vemuri, Hardware-Software partitioning and pipelined scheduling of transformative applications, IEEE Transactions on Very Large Scale Integration (VLSI) Systems, v.10 n.3, p.193-208, June 2002
R. Govindarajan , Guang R. Gao , Palash Desai, Minimizing Buffer Requirements under Rate-Optimal Schedule in Regular Dataflow Networks, Journal of VLSI Signal Processing Systems, v.31 n.3, p.207-229, July 2002 | software pipelining;circuit retiming;list scheduling;cyclic scheduling;modulo scheduling |
272885 | Abstractions for Portable, Scalable Parallel Programming. | AbstractIn parallel programming, the need to manage communication, load imbalance, and irregularities in the computation puts substantial demands on the programmer. Key properties of the architecture, such as the number of processors and the cost of communication, must be exploited to achieve good performance, but coding these properties directly into a program compromises the portability and flexibility of the code because significant changes are then needed to port or enhance the program. We describe a parallel programming model that supports the concise, independent description of key aspects of a parallel programincluding data distribution, communication, and boundary conditionswithout reference to machine idiosyncrasies. The independence of such components improves portability by allowing the components of a program to be tuned independently, and encourages reuse by supporting the composition of existing components. The isolation of architecture-sensitive aspects of a computation simplifies the task of porting programs to new platforms. Moreover, the model is effective in exploiting both data parallelism and functional parallelism. This paper provides programming examples, compares this work to related languages, and presents performance results. | Introduction
The diversity of parallel architectures puts the goals of performance and portability in conflict. Programmers
are tempted to exploit machine details-such as the interconnection structure and the granularity of
parallelism-to maximize performance. Yet software portability is needed to reduce the high cost of software
development, so programmers are advised to avoid making machine-specific assumptions. The challenge,
then, is to provide parallel languages that minimize the tradeoff between performance and portability. 1
Such languages must allow a programmer to write code that assumes no particular architecture, allow a
compiler to optimize the resulting code in a machine-specific manner, and allow a programmer to perform
architecture-specific performance tuning without making extensive modifications to the source code.
In recent years, a parallel programming style has evolved that might be termed aggregate data-parallel
computing. This style is characterized by:
ffl Data parallelism. The program's parallelism comes from executing the same function on many elements
of a collection. Data parallelism is attractive because it allows parallelism to grow-or scale-with the
number of data elements and processors. SIMD architectures exploit this parallelism at a very fine
grain.
ffl Aggregate execution. The number of data elements typically exceeds the number of processors, so
multiple elements are placed on a processor and manipulated sequentially. This is attractive because
placing groups of interacting elements on the same processor vastly reduces communication costs.
Moreover, this approach uses good sequential algorithms locally, which is often more efficient than
simply multiplexing parallel algorithms. Another benefit is that data can be passed between processors
in batches to amortize communication overhead. Finally, when a computation on one data element is
delayed waiting for communication, other elements may be processed.
ffl Loose synchrony. Although strict data parallelism applies the "same" function to every element, local
variations in the nature or positioning of some elements can require different implementations of the
same conceptual function. For instance, data elements on the boundary of a computational domain
have no neighbors with which to communicate, but data parallelism normally assumes that interior
and exterior elements be treated the same. By executing a different function on the boundaries, these
exceptional cases can be easily handled.
These features make the aggregate data-parallel style of programming attractive because it can yield
efficient programs when executed on typical MIMD architectures. However, without linguistic support this
style of programming promotes inflexible programs through the embedding of performance-critical features
as constants, such as the number of processors, the number of data elements, boundary conditions, the
processor interconnection, and system-specific communication syntax. If the machine, its size, or the problem
size changes, significant program changes to these fixed quantities are generally required. As a consequence,
1 We consider a program to be portable with respect to a given machine if its performance is competitive with
machine-specific programs solving the same problem [2].
several languages have been introduced to support key aspects of this style. However, unless all aspects of
this style are supported, performance, scalability, portability, or development cost can suffer.
For instance, good locality of reference is an important aspect of this programming style. Low-level
approaches [25] allow programmers to hand-code data placement. The resulting code typically assumes
one particular data decomposition, so if the program is ported to a platform that favors some other de-
composition, extensive changes must be made or performance suffers. Other languages [4, 5, 15] give the
programmer no control over data decomposition, leaving these issues to the compiler or hardware. But
because good data decompositions can depend on characteristics of the application that are difficult to determine
statically, compilers can make poor data placement decisions. Many recent languages [6, 22] provide
support for data decompositions, but hide communication operations from the programmer and thus do
not encourage locality at the algorithmic level. Consequently, there is a reliance on automated means of
hiding latency-multithreaded hardware, multiple lightweight threads, caches, and compiler optimizations
that overlap communication and computation-which cannot always hide all latency. The trend towards
relatively faster processors and relatively slower memory access times only exacerbates the situation.
Other languages provide inadequate control over the granularity of parallelism, requiring either one
data point per process [21, 43], assuming some larger fixed granularity [14, 29], or including no notion of
granularity at all, forcing the compiler or runtime system to choose the best granularity [15]. Given the
diversity of parallel computers, no particular granularity can be best for all machines. Computers such as
the CM-5 prefer coarse granularities; those such as the J Machine prefer finer granularity; and those such as
the MIT Alewife and Tera computer benefit from having multiple threads per process. Also, few languages
provide sufficient control over the algorithm that is applied to aggregate data, preferring instead to multiplex
the parallel algorithm when there are multiple data points on a processor [43, 44].
Many language models do not adequately support loose synchrony. The boundaries of parallel computations
often introduce irregularities that require significant coding effort. When all processes execute the
same code, programs become riddled with conditionals, increasing code size and making them difficult to
understand, hard to modify, and potentially inefficient. Programming in a typical MIMD-style language is
not much cleaner. For instance, writing a slightly different function for each type of boundary process is
problematic because a change to the algorithm is likely to require all versions to be changed.
In this paper we describe language abstractions-a programmingmodel-that fully support the aggregate
data-parallel programming style. This model can serve as a foundation for portable, scalable MIMD languages
that preserve the performance available in the underlying machine. Our belief is that for many tasks
programmers-and not compilers or runtime systems-can best handle the performance-sensitive aspects of
a parallel program. This belief leads to three design principles.
First, we provide abstractions that are efficiently implementable on all MIMD architectures, along with
specific mechanisms to handle common types of parallelism, data distribution, and boundary conditions.
Our model is based on a practical MIMD computing model called the Candidate Type Architecture (CTA)
[45].
Second, the insignificant but diverse aspects of computer architectures are hidden. If exposed to the
programmer, assumptions based on these characteristics can be sprinkled throughout a program, making
portability difficult. Examples of characteristics that are hidden include the details of the machine's communication
style and the processor (or memory) interconnection topology. For instance, one machine might
provide shared memory and another message passing, but either can be implemented with the other in
software.
Third, architectural features that are essential to performance are exposed and parameterized in an
architecture-independent fashion. A key characteristic is the speed, latency, and per-message overhead of
communication relative to computation. As the cost of communication increases relative to computation,
communication costs must be reduced by aggregating more processing onto a smaller number of processors,
or by finding ways to increase the overlap of communication and computation.
The result is the Phase Abstractions parallel programming model, which provides control over granularity
of parallelism, control over data partitioning, and a hybrid data and function parallel construct that supports
concise description of boundary conditions. The core of our solution is the ensemble construct that allows
a global data structure to be defined and distributed over processes, and allows the granularity-and the
location of data elements-to be controlled by load-time parameters. The ensemble also has a code form for
describing what operations to execute on which elements and for handling boundary conditions. Likewise,
interprocessor connections are described with a port ensemble that provides similar flexibility. By using
ensembles for all three components of a global operation-data, code and communication-they can be
scaled together with the same parameters. Because the three parts of an ensemble and the boundary
conditions are specified independently, reusability is enhanced.
The remainder of this paper is organized as follows. We first present our solution to the problem by
describing our architectural model and the basic language model-the CTA and the Phase Abstractions.
Section 3 then gives a detailed illustration of our abstractions, using the Jacobi Iteration as an example. To
demonstrate the expressiveness and programmability of our abstractions, Section 4 shows how simple array
language primitives can be built on top of our model. Section 5 discusses the advantages of our programming
model with respect to performance and portability, and Section 6 presents experimental evidence that the
Phase Abstractions support portable parallel programming. Finally, we compare Phase Abstractions with
related languages and models, and close with a summary.
Phase Abstractions
In sequential computing, languages such as C, Pascal and Fortran have successfully combined efficiency with
portability. What do these languages have in common that make them successful? All are based on a
model where a sequence of operations manipulate some infinite random-access memory. This programming
model succeeds because it preserves the characteristics of the von Neumann machine model, which itself has
been a reasonably faithful representation of sequential computers. While these models are never literally
implemented-unit-cost access to infinite memory is an illusion provided by virtual memory, caches and
backing store-the model is accurate for the vast majority of programs. There are only rare cases, such
as programs that perform extreme amounts of disk I/O, where the deviations from the model are costly
to the programmer. It is critical that the von Neumann model capture machine features that are relevant
to performance: If some essential machine features were ignored, better algorithms could be developed
using a more accurate machine model. Together, the von Neumann machine model and its accompanying
programming model allow languages such as C and Fortran to be both portable and efficient.
In the parallel world, we propose that the Candidate Type Architecture (CTA) play the role of the von
Neumann model, 2 and the Phase Abstractions the role of the programming model. Finally, the sequential
languages are replaced by languages based on the Phase Abstractions, such as Orca C [32, 34].
The CTA. The CTA [45] is an asynchronous MIMD model. It consists of P von Neumann processors
that execute independently. Each processor has its own local memory, and the processors communicate
through some sparse but otherwise unspecified communication network. Here "sparse" means that the
network has a constant degree of connectivity. The network topology is intentionally left unbound to provide
maximum generality. Finally, the model includes a global controller that can communicate with all processors
through a low bandwidth network. Logically, the controller provides synchronization and low bandwidth
communication such as a broadcast of a single value.
Although it is premature to claim that the CTA is as effective a model as the von Neumann model, it
does appear to have the requisite characteristics: It is simple, makes minimal architectural assumptions, but
captures enough significant features that it is useful for developing efficient algorithms. For example, the
CTA's unbound topology does not bias the model towards any particular machine, and the topologies of
existing parallel computers are typically not significant to performance. On the other hand, the distinction
between global and local memory references is key, and this distinction is clear in the CTA model. Finally,
the assumption of a sparse topology is realistic for all existing medium and large scale parallel computers.
The Phase Abstractions extend the CTA in the same way that the sequential imperative programming
2 The more recent BSP [48] and LogP [8] models present a similar view of a parallel machine and for the most part
suggest a similar way of programming parallel computers.
model extends the von Neumann model. The main components of the Phase Abstractions are the XYZ levels
of programming and ensembles [1, 19, 46].
2.1 XYZ Programming Levels
A programmer's problem-solving abilities can be improved by dividing a problem into small, manageable
pieces-assuming the pieces are sufficiently independent to be considered separately. Additionally, these
pieces can often be reused in other programs, saving time on future problems. One way to build a parallel
program from smaller reusable pieces is to compose a sequence of independently implemented phases, each
executing some parallel algorithm that contributes to the overall solution. At the next conceptual level, each
such phase is comprised of a set of cooperating sequential processes that implements the desired parallel
algorithm. Each sequential process may be developed separately. These levels of problem solving (program,
phase, and process, also called the Z, Y, and X levels) have direct analogies in the CTA.
The X level corresponds to the individual von Neumann processors of the CTA, and an X level program
specifies the sequential code that executes in one process. Because the model is MIMD, each process can
execute different code.
The Y level is analogous to the set of von Neumann processors cooperating to compute a parallel
algorithm, forming a phase. The Y-level may specify how the X-level programs are connected to each other
for inter-process communication. Examples of phases include parallel implementations of the FFT, matrix
multiplication, matrix transposition, sort, and global maximum. A phase has a characteristic communication
structure induced by the data dependencies among the processes. For example, the FFT induces a butterfly,
while Batcher's sort induces a hypercube [1].
Finally, the Z level corresponds to the actions of the CTA's global controller, where sequences of parallel
phases are invoked and synchronized. A Z level program specifies the high level logic of the computation
and the sequential invocation of phases (although their execution may overlap) that are needed to solve
complex problems. For example, the Car-Parrinello molecular dynamics code simulates the behavior of a
collection of atoms by iteratively invoking a series of phases that perform FFT's, matrix products, and other
computations [49]. In Z-Y-X order, these three levels provide a top-down view of a parallel program.
Example: XYZ Levels of the Jacobi Iteration. Figure 1 illustrates the XYZ levels of programming
for the Jacobi Iteration. The Z level consists of a loop that invokes two phases, one called Jacobi(), which
performs the over-relaxation, the other called Max(), which computes the maximum difference between
iterations that is used to test for termination.
Each Y level phase consists of a collection of processes executing concurrently. Here, the two phases are graphically
depicted with squares representing processes and arcs representing communication between processes.
Z Level:
    program Jacobi
        data := Load();
        while (error > Tolerance)
        {
            Jacobi();
            error := Max();
        }

Y Level: (the two phases are depicted graphically; see text)

X Level (Jacobi phase):
    {
        for each (i,j) in local section
            new(i,j) = (old(i,j+1) + old(i,j-1) + old(i+1,j) + old(i-1,j)) / 4;
    }

X Level (Max phase):
    {
        local_max = Max(local_max, left_child);
        local_max = Max(local_max, right_child);
        Send local_max to parent;
    }

Figure 1: XYZ Illustration of the Jacobi Iteration
The Jacobi phase uses a mesh interconnection topology, and the Max phase uses a binary tree. Other details
of the Y level, such as the distribution of data, are not shown in this figure but will be explained in the next
subsection.
Finally, a sketch of the X level program for the two phases is shown at the right of Figure 1. The X level
code for the Jacobi phase assigns to each data point the average of its four neighbors. The Max phase finds,
for all data points, the largest difference between the current iteration and the previous iteration.
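To make the two phases concrete, the following Python sketch expresses what each phase of Figure 1 computes, written sequentially over a whole grid rather than over per-process sections. The function names and the driver loop are our own illustrative choices, not part of the Phase Abstractions notation.

```python
# Illustrative sketch of the Jacobi and Max phases of Figure 1.

def jacobi_phase(old):
    """Replace each interior point with the average of its four neighbors."""
    n, m = len(old), len(old[0])
    new = [row[:] for row in old]            # boundary values are preserved
    for i in range(1, n - 1):
        for j in range(1, m - 1):
            new[i][j] = (old[i][j + 1] + old[i][j - 1] +
                         old[i + 1][j] + old[i - 1][j]) / 4.0
    return new

def max_phase(old, new):
    """Largest per-point change between iterations (the convergence test)."""
    return max(abs(b - a) for ro, rn in zip(old, new) for a, b in zip(ro, rn))

def solve(grid, tolerance):
    """Z level loop: iterate until the maximum change falls below tolerance."""
    error = tolerance + 1.0
    while error > tolerance:
        new = jacobi_phase(grid)
        error = max_phase(grid, new)
        grid = new
    return grid
```

With zero boundary values, repeated invocation of the two phases drives the interior toward zero, mirroring the Z level loop of Figure 1.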
A Z level program is basically a sequential program that provides control flow for the overall computation.
An X level program, in its most primitive form, can also be viewed as a sequential program with additional
operations that allow it to communicate with other processes. Although parallelism is not explicitly specified
at the X and Z levels, these two levels may still contain parallelism. For example, phase invocation may
be pipelined, and the X level processes can execute on superscalar architectures to achieve instruction-level
parallelism.
It is the Y level that specifies scalable parallelism and most clearly departs from a sequential program.
Ensembles support the definition and manipulation of this parallelism.
2.2 Ensembles
The Phase Abstractions use the ensemble structure to describe data structures and their partitioning, process
placement, and process interconnection. In particular, an ensemble is a partitioning of a set of elements
(data, codes, or port connections) into disjoint sections. Each section represents a thread of execution, so the
section is a unit of concurrency and the degree of parallelism is modulated by increasing or decreasing the
number of sections. Because all three aspects of parallel computation-data, code and communication-are
unified in the ensemble structure, all three components can be reconfigured and scaled in a coherent, concise
fashion to provide flexibility and portability.
A data ensemble is a data structure with a partitioning. At the Z level the data ensemble provides a
logically global view of the data structure. At the X level a portion of the ensemble is mapped to each
section and is viewed as a locally defined structure with local indexing. For example, the 6 × 6 data ensemble
in Figure 2 has a global view with indices [0 : 5] × [0 : 5], and a local view of 3 × 3 subarrays with indices
[0 : 2] × [0 : 2]. The mapping of the global view to the local view is performed at the Y level and will be
described in Section 3. The use of local indexing schemes allows an X level process to refer to generic array
bounds rather than to global locations in the data space. Thus, the same X level source code can be used
for multiple processes.
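The global-to-local mapping behind Figure 2 can be sketched as follows, for a rows × cols array split into an r × c grid of equal sections. The helper name and its return convention are our own, not part of the Phase Abstractions notation.

```python
# Illustrative sketch: global index -> (owning section, local index)
# for a rows x cols array partitioned into an r x c grid of sections.

def to_local(gi, gj, rows, cols, r, c):
    """Map a global index (gi, gj) to (section coordinates, local index)."""
    s, t = rows // r, cols // c          # section dimensions (even division)
    section = (gi // s, gj // t)         # which section owns the element
    local = (gi % s, gj % t)             # index within that section
    return section, local
```

For the 6 × 6 array of Figure 2 with a 2 × 2 section grid, global element (4, 1) lands in section (1, 0) at local index (1, 1).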
Figure 2: A 6 × 6 Array (left) and its corresponding Data Ensemble for a 2 × 2 array of sections.
A code ensemble is a collection of procedures with a partitioning. The code ensemble gives a global view
of the processes performing the parallel computation. When the procedures in the ensemble differ the model
is MIMD; when the procedures are identical the model is SPMD. Figure 3 shows a code ensemble for the
Jacobi phase in which all processes execute the xJacobi() function.
Figure 3: Illustration of a Code Ensemble
Finally, a port ensemble defines a logical communication structure by specifying a collection of port name
pairs. Each pair of names represents a logical communication channel between two sections, and each of
these port names is bound to a local port name used at the X level. Figure 4 depicts a port ensemble for
the Jacobi phase. For example, the north port (N) of one process is bound to the south port (S) of its
neighboring process.
Figure 4: Illustration of a Port Ensemble
A Y level phase is composed of three components: a code ensemble, a port ensemble that connects the
code ensemble's processes, and data ensembles that provide arguments to the processes of the code ensemble.
The sections of each ensemble are ordered numerically so that the i th section of a code ensemble is bound to
the i th section of each data and port ensemble. This correspondence allows each section to be allocated to
a processor for normal sequential execution: The process executes on that processor, the data can be stored
in memory local to that processor, and the ports define connections for interprocessor communication.
Consequently, the i th sections of all ensembles are assigned to the same processor to maintain locality across
phases. If two phases share a data ensemble but require different partitionings for best performance, a
separate phase may be used to move the data.
The Z level logically stores ensembles in Z level variables, composes them into phases and stores their
results. The phase invocation interface between the Z and X levels encourages modularity because the same
level code can be invoked with different ensemble parameters in the same way that procedures are reused
in sequential languages.
The ensemble abstraction helps hide the diversity of parallel architectures. However, to map well to
individual architectures the abstraction must be parameterized, for example, by the number of processors
and the size of the problem. This parameterization is illustrated in the next section.
3 Ensemble Example: Jacobi
To provide a better understanding of the ensembles and the Phase Abstractions, we now complete the
description of the Jacobi program. We adopt notation from the proposed Orca C language [30, 32], but
other languages based on the Phase Abstractions are possible (see Section 4).
3.1
#define Rows 1 /* Constants to define the shape */
#define Cols 2 /* of the logical processor array */
#define TwoD 3

program Jacobi (shape, Processors)
    switch (shape) {                      /* Configuration Computation */
    case Rows: rows = Processors; cols = 1;
               break;
    case Cols: rows = 1; cols = Processors;
               break;
    case TwoD: Partition2D(&rows, &cols, Processors);
               break;
    }
    (rows, cols, Processors)              /* Configuration Parameter List */

    <data ensemble definitions>;          /* Y Level */
    <port ensemble definitions>;
    <code ensemble definitions>;

    <process definitions>;                /* X Level */

begin                                     /* Z Level */
    ...
    while (tolerance < delta)
    {
        ...
        /* swap p and newP to prepare for the next iteration */
    }

Figure 5: Overall Phase Abstraction Program Structure
As shown in Figure 5, a Phase Abstractions program consists of X, Y, and Z descriptions, plus a list of
configuration parameters that are used by the program to adapt to different execution environments. In this
case, two runtime parameters are accepted: Processors and shape. The first parameter is the number of
processors, while the second specifies the shape of the processor array. As will be discussed later, the program
uses a 2D data decomposition, so by setting shape to Rows (Cols) we choose a horizontal strips (vertical
strips) decomposition. (The function Partition2D() computes values of rows and cols such that (rows *
cols) = Processors and the difference between rows and cols is minimized.) With this configuration
computation this program can, through the use of different load time parameters, adapt to different numbers
of processors and assume three different data decompositions. The configuration computation is executed
once at load time.
3.2 Z Level of Jacobi
After the program is configured, the Z level program is executed, which initializes program variables, reads
the input data, and then iteratively invokes the Jacobi and Max phases until convergence is reached, at
which point an output phase is invoked. The data, processing, and communication components of the Jacobi
and Max phases are specified by defining and composing code, data and port ensembles as described below.
3.3 Y Level: Data Ensembles
This program uses two arrays to store floating point values at each point of a 2D grid. Parallelism is achieved
by partitioning these arrays into contiguous 2D blocks:
partition block[r][c] float p[rows][cols],
float newP[rows][cols];
This declaration states that p and newP have dimensions (rows * cols) and are partitioned onto an (r * c)
section array (process array). The keyword partition identifies p and newP as ensemble arrays, and block
names this partitioning so that it can be reused to define other ensembles. This partitioning corresponds to
the one in Figure 2 when rows=6, cols=6, r=2, and c=2, and this ensemble declaration belongs in the
<data ensembles> meta-code of Figure 5. (Section 5 shows how an alternate decomposition is declared.)
The values of r and c are assumed to be specified in the program's configuration parameter list. Each
section is implicitly defined to be of size (s * t), where s = rows/r and t = cols/c. (If r does not divide
rows evenly, some sections will have ⌈rows/r⌉ rows while others will have ⌊rows/r⌋.) Consequently, X level
processes contain no assumptions about the data decomposition except the dimension of the subarrays, so
these processes can scale in both the number of logical processors and in the problem size.
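The implicit sizing rule for uneven divisions can be sketched in a few lines of Python: when r does not divide rows evenly, the first (rows mod r) sections receive ⌈rows/r⌉ rows and the remainder receive ⌊rows/r⌋. The helper name is our own.

```python
# Illustrative sketch of section sizing when r need not divide rows evenly.

def section_rows(rows, r):
    """Number of rows assigned to each of r sections."""
    base, extra = divmod(rows, r)        # floor(rows/r) and the remainder
    return [base + 1 if i < extra else base for i in range(r)]
```

For example, 10 rows over 4 sections yields sizes 3, 3, 2, 2, and the sizes always sum back to rows.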
3.4 Jacobi Phase
Port Ensemble. The Jacobi phase computes for each point the average of its four nearest neighbors,
implying that each section will communicate with its four nearest neighbor sections (See Figure 4). The
following Y level ensemble declaration defines the appropriate port ensemble:
Jacobi.portnames <- N, E, W, S /* North, East, West, South */
Jacobi[i][j].port.N <-> Jacobi[i-1][j].port.S, where 1 <= i <= r-1, 0 <= j <= c-1
Jacobi[i][j].port.W <-> Jacobi[i][j-1].port.E, where 0 <= i <= r-1, 1 <= j <= c-1
The first line declares the phase's port names so the following bindings can be specified. The second and
third lines define a mesh connectivity between Y level port names. This port ensemble declaration does not
specify connections for the ports that lie on the boundaries. In this case these unbound ports are bound
to derivative functions, which compute boundary conditions using data local to the section. The following
binds derivative functions to ports on the edges of Jacobi.
Jacobi[0][i].port.N receive <- RowZero, where 0 <= i <= c-1
Jacobi[i][c-1].port.E receive <- ColZero, where 0 <= i <= r-1
Jacobi[i][0].port.W receive <- ColZero, where 0 <= i <= r-1
Jacobi[r-1][i].port.S receive <- RowZero, where 0 <= i <= c-1
RowZero and ColZero are defined as:
double RowZero()
    static double row[1:t]; /* default initialized to 0's */
    return row;

double ColZero()
    static double col[0][1:s]; /* default initialized to 0's */
    return col[0];
The values of s and t are determined by the process' X level function-in this case xJacobi().
In the absence of derivative functions, X level programs could check for the existence of neighbors, but
such tests complicate the source code and increase the chance of introducing errors. As Section 5 shows,
even modestly complicated boundary conditions can lead to a proliferation of special case code.
Code Ensemble. To define the code ensemble for Jacobi, each of the r * c sections is assigned an instance
of the xJacobi() code:
Jacobi[i][j].code <- xJacobi(); where 0 <= i <= r-1, 0 <= j <= c-1
Because Jacobi contains heterogeneity only on the boundaries, which in this program is handled by derivative
functions, all the functions are the same. In general, however, the only restriction is that the function's
argument types and return type must match those of the phase invocation.
X Level. The X level code for Jacobi is shown in Figure 6. It first sends edge values to its four neighbors,
it then receives boundary values from its neighbors, and finally it uses the four point stencil to compute the
average for each interior point. Several features of the X level code are noteworthy:
• parameters: The arguments to the X level code establish a correspondence between local variables and
the sections of the ensembles. In this case, the local value array is bound to a block of ensemble values.
• communication: Communication is specified using the transmit operator (<==), for which a port name
on the left specifies a send of the righthand side, and a port on the right indicates a receive into the
variable on the lefthand side. The semantics are that receive operations block, but sends do not.
• uniformity: Because derivative functions are used, the xJacobi() function contains no tests for boundary
conditions when sending or receiving neighbor values.
• border values: The values s and t, used to define the bounds of the value array, are parameters derived
from the section size of the data ensemble. To hold data from neighboring sections, value is declared to
be one element wider on each side than the incoming array argument. This extra storage is explicitly
specified by the difference between the local declaration, x[0:s+1][0:t+1], and the formal declaration,
x[1:s][1:t], where the upper bounds of these array declarations are inclusive.
• array slices: Slices provide a concise way to refer to an entire row (or in general, a d-dimensional
block) of data. When slices are used in conjunction with the transmit operator (<==), the entire block
is sent as a single message, thus reducing communication overhead.
The Complete Phase. To summarize, the data ensemble, the port ensemble, and the code ensemble
collectively define the Jacobi phase. Upon execution the sections declared by the configuration parameters
are logically connected in a nearest-neighbor mesh, and each section of data is manipulated by one xJacobi()
process. The end result is a parallel algorithm that computes one Jacobi iteration.
3.5 Max Phase
The Max phase finds the maximum change of all grid points, and uses the same data ensemble as the Jacobi
phase. The port ensemble is shown graphically in Figure 8 and is defined below.
Max.portnames <- P, L, R /* Parent, Left, Right */
xJacobi(value[1:s][1:t], new_value[1:s][1:t])
double value[0:s+1][0:t+1]; /* extra storage on all four sides */
double new_value[0:s+1][0:t+1];
port North, East, West, South;
int i, j;

/* Send neighbor values */
North <== value[1][1:t]; /* 1:t is an array slice */
East <== value[1:s][t];
West <== value[1:s][1];
South <== value[s][1:t];

/* Receive neighbor values */
value[0][1:t] <== North;
value[1:s][t+1] <== East;
value[1:s][0] <== West;
value[s+1][1:t] <== South;

/* Compute the average of the four nearest neighbors */
for (i=1; i<=s; i++)
    for (j=1; j<=t; j++)
        new_value[i][j] = (value[i][j+1] + value[i][j-1] +
                           value[i+1][j] + value[i-1][j]) / 4;

Figure 6: X Level Code for the Jacobi Phase
The derivative functions for this phase are bound so that a receive from a leaf section's Left or Right port
will return the value computed by the Smallest_Value() function, and a send from the root's unbound
Parent port will be a no-op.
Max[i].port.L receive <- Smallest_Value() where r*c/2 <= i < Processors
Max[i].port.R receive <- Smallest_Value() where r*c/2 <= i < Processors
Max[i].port.P send <- No_Op() where i = 0
The Smallest_Value() derivative function simply returns the smallest value that can be represented on the
architecture. The code ensemble for this phase is similar to the Jacobi phase, except that xMax() replaces
xJacobi(). (See Figure 7.)
xMax(value[1:s][1:t], new_value[1:s][1:t])
double value[1:s][1:t];
double new_value[1:s][1:t];
port Parent, Left, Right;
int i, j;
double local_max;
double temp;

/* Compute the local maximum */
local_max = 0;
for (i=1; i<=s; i++)
    for (j=1; j<=t; j++)
        local_max = Max(local_max, Abs(new_value[i][j] - value[i][j]));

/* Compute the global maximum */
temp <== Left; /* receive */
local_max = Max(local_max, temp);
temp <== Right; /* receive */
local_max = Max(local_max, temp);
Parent <== local_max; /* send */

Figure 7: X Level Code for the Max Phase
With applications that are more complicated than Jacobi, the benefit of using ensembles increases while
their cost is amortized over a larger program. The cost of using ensembles will also decrease as libraries of
Figure 8: Illustration of a Tree Port Ensemble
ensembles, phases, derivative functions and X level codes are built. For example, the Max phase of Jacobi
is common to many computations and would not normally be defined by the programmer.
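The tree combining that the Max phase performs (Figure 8) can be sketched sequentially in Python: each section contributes a local maximum, and values flow up a binary tree toward section 0, just as leaves would otherwise read the smallest representable value through the Smallest_Value() derivative function. The function name and the implicit-heap numbering are our own illustrative choices.

```python
# Illustrative sketch of combining per-section maxima up a binary tree
# rooted at section 0 (children of node p are 2p+1 and 2p+2).

def tree_max(local_maxes):
    """Combine per-section local maxima into a single global maximum."""
    n = len(local_maxes)
    vals = list(local_maxes)
    for i in range(n - 1, 0, -1):        # deepest sections send first
        parent = (i - 1) // 2
        vals[parent] = max(vals[parent], vals[i])
    return vals[0]                       # the root holds the global maximum
```

Processing nodes in decreasing index order guarantees that each child's accumulated value reaches its parent before the parent itself sends upward.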
4 High Level Programming with the Phase Abstractions
Phase Abstractions are not a programming language, but rather a foundation for the development of parallel
programming languages that support the creation of efficient, scalable, portable programs. Orca C, used in
the previous section, is a literal, textual instantiation of the Phase Abstractions. It clearly shows the power
of the Phase Abstractions, but some may find it too low-level and tedious.
In fact, a departure from the literal Orca C language is not required to achieve an elegant programming
style. By adopting certain conventions, it is possible to build reusable abstractions directly on top of Orca
C. By staying within the Orca C framework, this solution has the advantage that different sublanguages can
be used together for a single large problem that requires diverse abstractions for good performance. As an
example, consider the design of an APL-like array sublanguage for Orca C. 3
Recall that an X level procedure receives two kinds of parameters-global data passed as arguments and
port connections-that support two basic activities: computations on data and communication. However,
it is possible to constrain X level functions to perform just one of these two tasks-a local computation or a
communication operation. That is, there could be separate computation phases and communication phases.
For example, there can be X level computation functions for adding integers, computing the minimum of some
values, or sorting some elements. There can be X level communication functions for shifting data cyclically
in a ring, for broadcasting data, or for communicating up and down a tree structure. Reductions, which
naturally combine both communication and computation, are notable exceptions where the separation of
3 Since the submission of this paper, an array sublanguage known as ZPL has been developed to support data
parallel computations [35, 47, 31, 37]. While its syntax differs significantly from Orca C, ZPL remains true to the
Phase Abstractions model. It provides a powerful Z level language that hides all of the X and Y level details from
the user.
communication from computation is not desirable. For such operations it suffices to define a communication-oriented
phase that takes an additional function parameter for combining the results of communications.
To illustrate, reconsider the Jacobi example. Rather than specify the entire Jacobi iteration in one X
level process, each communication operation constitutes a separate phase and the results are combined by
Z level add and divide phases. The convergence test is computed at the Z level by subtracting the old
array from the new one and performing a maximum reduction on the differences. The program skeleton in
Figure 9 illustrates this method, providing examples of X level functions for addition (referred to as operator+
in the syntactic style of C++), shift, and reduce; the Z level code shows how data ensembles are declared
and how phase structures for add, left-shift and reduce are initialized. The divide and subtract phases are
analogous to add, and the other shift functions are analogous to the left-shift.
There are three consequences of this approach. First, the interface to a phase is substantially simplified.
Second, some problems are harder to describe because it is not possible to combine computation and communication
within a single X level function. Finally, X level functions (and the phases that they comprise)
are smaller and are more likely to perform just one task, increasing their composability and reusability.
Although the array sublanguage defined here is similar to APL, it has some salient differences. Most
significantly, the Orca C functions operate on subarrays, rather than individual elements, which means
that fast sequential algorithms can be applied to subarrays. So while this solution achieves some of the
conciseness and reusability of APL, it does not sacrifice control over data decompositions or lose the ability
to use separate global and local algorithms. This solution also has the advantage of embedding an array
language in Orca C, allowing other programming styles to be used as they are needed.
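The array-sublanguage style described above can be illustrated with small, single-purpose phases that a Z level composes. The following Python sketch uses our own names and plain lists; it shows the composition discipline, not the Orca C syntax.

```python
# Illustrative sketch of separate computation and communication "phases"
# composed at the Z level, in the array-sublanguage style.

def add(x, y):
    """Computation phase: elementwise addition over a (sub)array."""
    return [a + b for a, b in zip(x, y)]

def shift_left(x):
    """Communication phase: cyclic shift by one position."""
    return x[1:] + x[:1]

def reduce_max(x):
    """Reduction phase: combine values with max."""
    m = x[0]
    for v in x[1:]:
        m = max(m, v)
    return m

def max_neighbor_diff(x):
    """Z level composition: largest |x[i] - x[i+1 mod n]| from reusable phases."""
    diffs = add(x, [-v for v in shift_left(x)])
    return reduce_max([abs(d) for d in diffs])
```

Each phase does one task, so the same add, shift, and reduce pieces can be recombined for other computations.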
The power of the Phase Abstractions comes from the decomposition of parallel programs into X, Y and Z
levels, the encoding of key architectural properties as simple parameters, and the concept of ensembles, which
allows data, port and code decompositions to be specified and reused as individual components. The three
types of ensembles work together to allow the problem and machine size to be scaled. In addition, derivative
functions allow a single X level program to be used for multiple processes even in the presence of boundary
conditions. This section discusses the Phase Abstractions with respect to performance and expressiveness.
Portability and Scalability. When programs are moved from one platform to another they must adapt
to the characteristics of their host machine if they are to obtain good performance. If such adaptation is
automatic or requires only minor effort, portability is achieved. The Phase Abstractions support portability
and scalability by encoding key architectural characteristics as ensemble parameters and by separating phase
definitions into several independent components.
xproc TYPE[1:s][1:t] operator+(TYPE x[1:s][1:t], TYPE y[1:s][1:t])
    TYPE result[1:s][1:t];
    int i, j;
    for (i=1; i<=s; i++)
        for (j=1; j<=t; j++)
            result[i][j] = x[i][j] + y[i][j];
    return result;

xproc void shift(TYPE val[1:s][1:t])
    port write_neighbor, read_neighbor;
    int i;
    write_neighbor <== val[1:s][1];      /* send leftmost column */
    for (i=2; i<=t; i++)
        val[1:s][i-1] = val[1:s][i];     /* shift remaining columns left */
    val[1:s][t] <== read_neighbor;       /* receive new rightmost column */

xproc int reduce(TYPE val[1:k], TYPE (*op)())
    port Parent, Child[1:n];
    int i;
    TYPE accum, temp;
    accum = val[1];
    for (i=2; i<=k; i++)
        accum = op(accum, val[i]);
    for (i=1; i<=n; i++)
    {
        temp <== Child[i];
        accum = op(accum, temp);
    }
    Parent <== accum;

begin Z
    double X[1:J][1:K], OldX[1:J][1:K];
    phase operator+;
    phase Left;
    phase Reduce;
    ...
    do
        ...

Figure 9: Jacobi Written in an Array Style Using Orca C
Changes to either the problem size or the number of processors are encapsulated in the data ensemble
declaration. As in Section 3, we relate the size of a section (s * t), the overall problem size (rows * cols),
and the number of sections (r * c) as follows:

    rows = s * r,    cols = t * c

The problem size scales by changing the values of rows and cols, the machine size scales by changing the
values of r and c, and the granularity of parallelism is controlled by altering either the number of processors
or the number of sections in the ensemble declaration. This flexibility is an important aspect of portability
because different architectures favor different granularities.
While it is desirable to write programs without making assumptions about the underlying machine,
knowledge of machine details can often be used to optimize program performance. Therefore, tuning may
sometimes be necessary. For example, it may be beneficial for the logical communication graph to match
the machine's communication structure. Consider embedding the binary tree of the Max phase onto a mesh
architecture: Some logical edges must span multiple physical links. This edge dilation can be eliminated
with a connectivity that allows comparisons along each row of processors and then along a single column
(see Figure 10).
Figure 10: Rows and Columns to Compute the Global Maximum
To address the edge dilation problem the fixed binary tree presented in Section 3 can be replaced by a
new port ensemble that uses a tree of variable degree. Such a solution is shown in Figure 11, where the child
ports are represented by an array of ports. This new program can use either a binary tree or the "rows and
columns" approach. The port ensemble declaration for the latter approach is shown below.
/* Rows and Columns communication structure */
With the code suitably parameterized, this program can now execute efficiently on a variety of architectures
by selecting the proper port ensemble.
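Written sequentially over a grid of per-section values, the "rows and columns" reduction of Figure 10 amounts to two phases: a reduction along each processor row followed by a reduction down one column. The helper below is an illustrative sketch with our own naming.

```python
# Illustrative sketch of the "rows and columns" global maximum of Figure 10.

def rows_then_column_max(grid):
    """Reduce along each row of the processor grid, then down one column."""
    row_maxes = [max(row) for row in grid]   # phase 1: along each row
    return max(row_maxes)                    # phase 2: down a single column
```

On a mesh, both phases use only physically adjacent links, which is what eliminates the edge dilation of the embedded binary tree.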
xMax(value[1:s][1:t], new[1:s][1:t], numChildren)
double value[1:s][1:t];
double new-value[1:s][1:t];
port Parent, Child[numChildren];
int i, j;
double local-max;
double temp;
/* Compute the local maximum */
for (i=1; i!=s; i++)
for (j=1; i!=t; i++)
/* Compute the global maximum */
for (i=0; i!numChildren; i++)
temp !== Child[i]; /* receive */
Parent !== local-max; /* send */
Figure
Parameterized X Level Code for the Max Phase
Locality. The best data partitioning depends on factors such as the problem and machine size, the hard-
ware's communication and computation characteristics, and the application's communication patterns. In
the Phase Abstractions model, changes to the data partitioning are encapsulated by data ensembles. For
example, to define a 2D block partitioning on P processors, the configuration code can define the number
of sections to be √P * √P. If a 1D strip partitioning is desired, the number of sections can simply
be defined to be 1 * P. This strip decomposition requires that each process have only East-West
neighbors instead of the four neighbors used in the block decomposition. By using the port ensembles to
bind derivative functions to unused ports (in this case the North and South ports), the program can easily
accommodate this change in the number of neighbors. No other source level changes are required.
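The effect of a decomposition change on each section's neighbor set can be sketched directly: in a 2D block decomposition a section has up to four neighbors, while in a 1D strip decomposition the off-axis neighbors disappear, and it is exactly those missing neighbors to which derivative functions would be bound. The helper is illustrative, not Orca C.

```python
# Illustrative sketch: which neighbors a section (i, j) actually has
# in an r x c section grid; absent directions get derivative functions.

def neighbors(i, j, r, c):
    """Directions -> coordinates of existing neighbor sections of (i, j)."""
    cand = {"N": (i - 1, j), "S": (i + 1, j),
            "W": (i, j - 1), "E": (i, j + 1)}
    return {d: (x, y) for d, (x, y) in cand.items()
            if 0 <= x < r and 0 <= y < c}
```

With r = 1 (vertical strips) only East-West neighbors survive, so the same X level code runs unchanged once the North and South ports are bound to derivative functions.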
The explicit dichotomy between local and non-local access encourages the use of different algorithms
locally and globally. Batcher's sort, for example, benefits from this approach (see Section 1). This contrasts
with most approaches in which the programmer or compiler identifies as much fine-grained parallelism as
possible and the compiler aggregates this fine-grained parallelism to a granularity appropriate for the target
machine.
Boundary Conditions. Typically, processes on the edge of the problem space must be treated separately. 4
In the Jacobi Iteration, for example, a receive into the East port must be conditionally executed because
processes on the East edge have no eastern neighbors. Isolated occurrences of these conditionals pose little
problem, but in most realistic applications these lead to convoluted code. For example, SIMPLE can have up
to nine different cases-depending on which portions of the boundaries are contained within a process-and
these conditionals can lead to code that is dominated by the treatment of exceptional cases [18, 41].
For example, suppose a program with a block decomposition assumes in its conditional expression that
a process is either a NorthEast, East, or SouthEast section, as shown below:
if (NorthEast)
/* special case 1 */
else if (East)
/* special case 2 */
else if (SouthEast)
/* special case 3 */
A problem arises if the programmer then decides that a vertical strips decomposition would be more efficient.
4 Although we discuss this problem in the context of a message passing language, shared memory programs must
also deal with these special cases.
The above code assumes that exactly one of the three boundary conditions holds. But in the vertical strips
decomposition there is only one section on the Eastern edge, so all three conditions apply, not just one.
Therefore, the change in data decomposition forces the programmer to rewrite the above boundary condition
code.
Our model attempts to insulate the port and code ensembles from changes in the data decomposition:
Processes send and receive data through ports that in some cases involve interprocess communication and in
other cases invoke derivative functions. The handling of boundary conditions has thus been decoupled from
the X level source code. Instead of cluttering up the process code, special cases due to boundary conditions
are handled at the problem level where they naturally belong.
Reusability. The same characteristics that provide flexibility in the Phase Abstractions also encourage
reusability. For example, the Car-Parrinello molecular dynamics program [49] consists of several phases,
one of which is computed using the Modified Gram-Schmidt (MGS) method of solving QR factorization.
Empirical results have shown that the MGS method performs best with a 2D data decomposition [36].
However, other phases of the Car-Parrinello computation require a 1D decomposition, so in this case a 1D
decomposition for MGS yields the best performance since it avoids data movement between phases. This
illustrates that a reusable component is most effective if it is flexible enough to accommodate a variety of
execution environments.
Irregular Problems. Until now this paper has only described statically defined array-based ensembles.
However, this should not imply that Phase Abstractions are ill suited to dynamic or unstructured problems.
In fact, to some extent LPAR [28], a set of language extensions for irregular scientific computations (see
Section 7), can be described in terms of the Phase Abstractions. The key point is that an ensemble is a
set with a partitioning; to support dynamic or irregular computations we can envision dynamic or irregular
partitionings that are managed at runtime.
Consider first a statically defined irregular problem such as finite element analysis. The programmer
begins by defining a logical data ensemble that will be replaced by a physical ensemble at runtime. This
logical definition includes the proper record formats and an array of port names, but not the actual data
decomposition or the actual port ensemble. At runtime a phase is run that determines the partitioning
and creates the data and port ensembles: The size and contents of the data ensemble are defined, the
interconnection structure is determined, and the sections are mapped to physical processors. We assume
that the code ensemble is SPMD since this obviates the need to assign different codes to different processes
dynamically. Once this partitioning phase has completed the ensembles behave the same as statically defined
phases.
Dynamic computations could be generalized from the above idea. For example, a load balancing phase
could move data between sections and also create revised data and port ensembles to represent the new
partitioning. Technical difficulties remain before such dynamic ensembles can be supported, but the concepts
do not change.
Limits of the Non-Shared Memory Model. The non-shared memory model encourages good locality
of reference by exposing data movement to the programmer, but the performance advantage for this model is
small for applications that inherently have poor locality. For example, direct methods of performing sparse
factorization have poor locality of reference because of the sparse and irregular nature of the input
data. For certain solutions to this problem, a shared memory model performs better because the single
address space leads to better load balance through the use of a work queue model [38]. The shared memory
model also provides notational convenience, especially when pointer-based structures are involved.
6 Portability Results
Experimental evidence suggests that the Phase Abstractions can provide portability across a diverse set of
MIMD computers [32, 33]. This section summarizes these results for just one program, SIMPLE, but similar
results were achieved for QR factorization and matrix multiplication [30]. Here we briefly describe SIMPLE,
the machines on which this program was run, the manner in which this portable program was implemented,
and the significant results.
SIMPLE is a large computational fluid dynamics benchmark whose importance to high performance
computing comes from the substantial body of literature already devoted to its study. It was introduced in
1977 as a sequential benchmark to evaluate new computers and Fortran compilers [7]. Since its creation it
has been studied widely in both sequential and parallel forms [3, 9, 13, 16, 17, 23, 24, 40, 42].
Hardware. The portability of our parallel SIMPLE program was investigated on the iPSC/2 S, iPSC/2
F, nCUBE/7, Sequent Symmetry, BBN Butterfly GP1000, and a Transputer simulator. These machines
are summarized in Table 1. The two Intel machines differ in that the iPSC/2 S has a slower Intel 80387
floating point coprocessor, while the other has the faster iPSC SX floating point accelerator. The simulator
simulates a Transputer-based non-shared memory machine in detail. Using detailed information about arithmetic,
logical and communication operators of the T800 [24], this simulator executes a program written in a Phase
Abstraction language and estimates program execution time.
Implementation. The SIMPLE program was written in Orca C. Since no compiler exists for any language
based on the Phase Abstractions, this program was hand-compiled in a straight-forward fashion to C code
that uses a message passing substrate to support the Phase Abstractions. The resulting C code is machine-independent except for process creation, which is dependent on each operating system's method of spawning processes.

Table 1: Machine Characteristics

machine    | Sequent     | Intel       | Intel       | nCUBE       | BBN              | Transputer
model      | Symmetry A  | iPSC/2 S    | iPSC/2 F    | nCUBE/7     | Butterfly GP1000 | simulator
nodes      | 20          |             |             |             |                  |
processors | Intel 80386 | Intel 80386 | Intel 80386 | custom      | Motorola 68020   | T800
memory     | 32 MB       | 4 MB/node   | 8 MB/node   | 512 KB/node | 4 MB/node        | N/A
cache      | 64 KB       | 64 KB       | 64 KB       | none        | none             |
network    | bus         | hypercube   | hypercube   | hypercube   | omega            | mesh
[Figure 12 (plots omitted): (a) SIMPLE speedup on various machines with 1680 points (Transputer, Intel iPSC/2, Butterfly, nCUBE/7, Symmetry); (b) SIMPLE with 4096 points (Pingali & Rogers, Lin & Snyder, Hiromoto et al.).]
Figure 12(a) shows that similar speedups were achieved on all machines. Many hardware characteristics
can affect speedup, and these can explain the differences among the curves. In this discussion we concentrate
on communication costs relative to computational speed, the feature that best distinguishes these machines.
For example, the iPSC/2 F and nCUBE/7 have identical interconnection topologies but the ratio of computation
speed to communication speed is greater on the iPSC/2 [11, 12]. This has the effect of reducing
speedup because it decreases the percentage of time spent computing and increases the fraction of time spent
on non-computation overhead. Similarly, since message passing latency is lowest on the Sequent's shared
bus, the Sequent shows the best speedup. This claim assumes little or no bus contention, which is a valid
assumption considering the modest bandwidth required by SIMPLE.
Figure 12(b) shows the SIMPLE results of Hiromoto et al. on a Denelcor HEP using 4096 data points [23],
which indicate that our portable program is roughly competitive with machine-specific code. The many
differences with our results (including different problem sizes, different architectures, and possibly even
different problem specifications) make it difficult to draw any stronger conclusions.
As another reference point, Figure 12(b) compares our results on the iPSC/2 S against those of Pingali
and Rogers' parallelizing compiler for a functional language [42]. Both experiments were run on
iPSC/2's with 4MB of memory and 80387 floating point units. All other parameters appear to be identical.
The largest potential difference lies in the performance of the sequential programs on which speedups are
computed. Although these results are encouraging for proponents of functional languages, we point out
that our results do not make use of a sophisticated compiler: The type of compiler technology developed by
Pingali and Rogers can likely improve the performance of our programs as well.
Even though the machines differ substantially-for example, in memory structure-the speedups fall
roughly within the same range. Moreover, this version of SIMPLE compares favorably with machine-specific
implementations. These results suggest, then, that portability has been achieved for this application running
on these machines.
7 Related Work
Many systems support a global view of parallel computation, SPMD execution, and data decompositions
that are similar to various aspects of the Phase Abstractions. None, however, provide support for an X-level algorithm that is different from the Z-level parallel algorithm. Nor do any provide general support for
handling boundary conditions or controlling granularity. This section discusses how some of these systems
address scalability and portability in the aggregate data parallel programming style.
Dataparallel C. Dataparallel C [21] (DPC) is a portable shared-memory SIMD-style language that has
similarities to C++. Unlike the Phase Abstractions, DPC supports only point-wise parallelism. DPC has
point-wise processor (poly) variables that are distributed across the processors of the machine. Unlike its
predecessor C* [43], DPC supports decompositions of its data to improve performance on coarse-grained
architectures. However, because DPC only supports point-wise communication, the compiler or
runtime system must detect when several point sends on a processor are destined for the same processor
and bundle them. Also, to maintain performance of the SIMD model on a MIMD machine, extra compiler
analysis is required to detect when the per-instruction SIMD synchronizations are not necessary and can
be removed. Because each point-wise process is identical, edge effects must be coded as conditionals that
determine which processes are on the edge of the computation. It is hard to reuse such code because the
boundary conditions may change from problem to problem. Constant and variable boundary conditions can
be supported by expanding the data space and leaving some processes idle.
Dino. Dino [44] is a C-like, SPMD language. Like C*, it constructs distributed data structures by replicating
structures over processors and executing a single procedure over each element of the data set. Dino
provides a shared address space, but remote communication is specified by annotating accesses to non-local
objects by the # symbol, and the default semantics are true message-passing. Parallel invocations of a procedure
synchronize on exit of the procedure. Dino allows the mapping of data to processes to be specified
by programmer-defined functions. To ensure fast reads to shared data, a partitioning can map an individual
variable to multiple processors. Writes to such variables are broadcast to all copies. Dino handles edge
effects in the same fashion as C*. Because Dino only supports point-wise communication, the compiler or
runtime system must combine messages.
Mehrotra and Rosendale. A system described by Mehrotra and Rosendale [39] is much like Dino in
that it supports a small set of data distributions. However, this system provides no way to control or
precisely determine which points are local to each other, so it is not possible to control communication costs
or algorithm choice based on locality. On the other hand, this system does not require explicit marking
of external memory references as in Dino. Instead, their system infers, when possible, which references are
global and which are not. In algorithms where processes dynamically choose their "neighbors," this simplifies
programming. Also, programs are more portable than those written in Dino. The communication structure
of the processor is not visible to the programmer, but the programmer can change the partitioning clauses
on the data aggregates. SPMD processing is allowed, but there are no special facilities for handling edge
effects.
Fortran Dialects. Recent languages such as Kali [26], Vienna Fortran [6], and HPF [22] focus on data
decomposition as the expression of parallelism. Their data decompositions are similar to the Phase Abstractions
notion of data ensembles, but the overall approach is fundamentally different. Phase Abstractions
require more effort from the programmer, while this other approach relies on compiler technology to exploit
loop level parallelism. This compiler-based approach can guarantee deterministic sequential semantics, but
it has less potential for parallelism since there may be cases where compilers cannot transform a sequential
algorithm into an optimal parallel one.
Kali, Vienna Fortran and HPF depart from sequential languages primarily in their support for data
decomposition, though some of these languages do provide mechanisms for specifying parallel loops. Vienna
Fortran provides no form of parallel loops, while the FORALL statement in HPF and Kali specifies that a
loop has no loop carried dependencies. To ensure deterministic semantics of updates to common variables
by different loop iterations, values are deterministically merged at the end of the loop. This construct is
optional in HPF; the compiler may attempt to extract parallelism even where a FORALL is not used.
HPF and Vienna Fortran allow arrays to be aligned with respect to an abstract partitioning. These
are very powerful constructs. For example, arrays can be dynamically remapped, and procedures can define
their own data distribution. Together these features are potentially very expensive because although the
programmer helps in specifying the data distribution at various points of the program, the compiler must
determine how to move the data. In addition to data distribution directives, Kali allows the programmer to
control the assignment of loop iterations to processors through the use of the On clause, which can help in
maintaining locality.
LPAR. LPAR is a portable language extension that supports structured, irregular scientific parallel computations
[28, 27]. In particular, LPAR provides mechanisms for describing non-rectangular distributed
partitions of the data space to manage load-balancing and locality. These partitions are created through
the union, intersection and set difference of arrays. Because support for irregular decompositions has a high
cost, LPAR syntactically distinguishes irregular decompositions so that faster runtime support can be used
for regular decompositions. 5 Computations are invoked on a group of arrays by the foreach operator, which
executes its body in parallel on each array to yield coarse-grained parallelism. LPAR uses the overlapping
indices of distributed subarrays to support sharing of data elements. Overlapping domains provide an elegant
way of describing multilevel mesh algorithms and computations for boundary conditions. There is an
operator for redistributing data elements, but LPAR depends on a routine written in the base language to
compute what the new decomposition should be.
The Phase Abstraction's potential to support dynamic, irregular decompositions is discussed in Section 5.
For multigrid decompositions, a sublanguage supporting scaled partitionings and communication between
scaled ensembles would be useful. The Phase Abstractions' support for loose synchrony naturally supports
the use of refined grids in conjunction with the base grid.
Split-C. Split-C is a shared-memory SPMD language with memory reference operations that support
latency-hiding [10]. Split-C procedures are concurrently applied in an "owner-computes" fashion to the
partitions of an aggregate data structure such as an array or pointer-based graph. A process reads data that
it does not own with a global pointer (a Split-C data type). To hide latency, Split-C supports an asynchronous
read, akin to an unsafe Multilisp future [20], that initiates a read of a global pointer but does not wait
for the data to arrive. A process can invoke the sync() operation to block until all outstanding reads
complete. There is a similar operation for global writes. (Footnote 5: Scott Baden, Personal Communication.)
These operations hide latency while providing a
global namespace and reducing the copying of data in and out of message queues. (Copying may be necessary
for bulk communication of non-contiguous data, such as the column of an array.) However, these operations
can lead to complex programming errors because a misplaced reference or synchronization operation can
lead to incorrect output but no immediate failure.
Array distribution in Split-C is straightforward but somewhat limited; some number of higher order
dimensions can be cyclically distributed while the remaining dimensions are distributed as blocks. Load
balance, locality, and irregular decompositions may be difficult to achieve for some applications. Array
distribution declarations are tied to a procedure's array parameter declarations, which can limit reusability
and portability because these declarations and the code that depends on them must be modified when the
distribution changes. This coupling can also incur a performance penalty because the benefit of an optimal
array distribution for one procedure invocation may be offset by the cost of redistributing the array for other
calculations that use the array. Split-C provides no special support for boundary conditions. The usual trick
of creating an enlarged array is possible; otherwise, irregularities must be handled by conditional code in the
body of the SPMD procedures.
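The enlarged-array trick mentioned above can be sketched concretely (Python for brevity; the helper names are ours): the local block is padded with a halo holding the boundary value, so the stencil body runs identically on every cell with no edge conditionals.

```python
# Sketch of the "enlarged array" boundary trick (names are ours): pad the
# local block with a halo of boundary values so the stencil needs no
# edge conditionals in the SPMD procedure body.

def pad(grid, boundary=0.0):
    # Embed an n-by-n block in an (n+2)-by-(n+2) array of boundary values.
    n = len(grid)
    big = [[boundary] * (n + 2) for _ in range(n + 2)]
    for i in range(n):
        for j in range(n):
            big[i + 1][j + 1] = grid[i][j]
    return big

def stencil(grid):
    # Five-point averaging stencil applied uniformly to every cell.
    big = pad(grid)
    n = len(grid)
    return [[(big[i][j] + big[i - 1][j] + big[i + 1][j]
              + big[i][j - 1] + big[i][j + 1]) / 5.0
             for j in range(1, n + 1)]
            for i in range(1, n + 1)]

# 2x2 block of ones with a zero boundary: each cell sees two real
# neighbors and two halo zeros, so every entry becomes (1+1+1+0+0)/5 = 0.6.
print(stencil([[1.0, 1.0], [1.0, 1.0]]))
```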
8 Conclusion
Parallelism offers the promise of great performance but thus far has been hampered by a lack of portability,
scalability, and programming convenience that unacceptably increase the time and cost of developing efficient
programs. Support is required for quickly programming a solution and easily moving it to new machines as old
ones become obsolete. Rather than defining a new parallel programming paradigm, the Phase Abstractions
model supports well-known techniques for achieving high performance (computing sequentially on local
aggregates of data elements and communicating large groups of data as a unit) by allowing the programmer
to partition data across parallel machines in a scalable manner. Furthermore, by separating a program into
reusable parts (X-level, Y-level, Z-level, ensemble declarations, and boundary conditions) the creation of
subsequent programs can be significantly simplified. This approach provides machine-independent, low-level
control of parallelism and allows programmers to write in an SPMD manner without sacrificing the efficiency
of MIMD processing.
Message passing has often been praised for its efficiency but condemned as being difficult to use. The
contribution of the Phase Abstractions is a language model that focuses on efficiency while reducing the
difficulty of non-shared memory programming. The programmability of this model is exemplified by the
straight-forward solution of problems such as SIMPLE, as well as the ability to define specialized high-level
array sublanguages. Because the Phase Abstractions model is designed to be structurally similar to MIMD
architectures, it performs very well on a variety of MIMD processors. This claim is supported by tests on
machines such as the Intel iPSC, the Sequent Symmetry and the BBN Butterfly.
References
A flexible communication abstraction for non-shared memory parallel computing
Program structuring for effective parallel portability.
A simulator for MIMD performance prediction - application to the S-1 MkIIa multiprocessor
NESL: A nested data-parallel language
Linda in context.
Vienna Fortran - a Fortran language extension for distributed memory multiprocessors
The Simple code.
LogP: Towards a realistic model of parallel computation.
Parallel programming in Split-C
Hypercube performance.
Performance of the Intel iPSC/860 and NCUBE 6400 hypercubes.
Supporting machine independent programming on diverse parallel architectures.
A report on the Sisal language project.
SIMPLE on the CHiP.
Restructuring Simple for the CHiP architecture.
Simple: An exercise in programming in Poker.
Scalable abstractions for parallel programming.
A language for concurrent symbolic computation.
High Performance Fortran Forum.
Experiences with the Denelcor HEP.
Processor Element Architecture for Non-Shared Memory Parallel Computers
iPSC/2 User's Guide.
Compiling global name-space parallel loops for distributed execution
Lattice parallelism: A parallel programming model for non-uniform
An implementation of the LPAR parallel programming model for scientific computations.
The Portability of Parallel Programs Across MIMD Computers.
ZPL language reference manual.
A portable implementation of SIMPLE.
Portable parallel programming: Cross machine comparisons for SIMPLE.
Data ensembles in Orca C.
ZPL: An array sublanguage.
Accommodating polymorphic data decompositions in explicitly parallel programs.
SIMPLE performance results in ZPL.
Towards a machine-independent solution of sparse cholesky factorization
Compiling high level constructs to distributed memory architectures.
Analysis of the SIMPLE code for dataflow computation.
Experiences with Poker.
Compiler parallelization of SIMPLE for a distributed memory machine.
The Dino parallel programming language.
Type architecture
The XYZ abstraction levels of Poker-like languages
A ZPL programming guide.
A bridging model for parallel computation.
A parallel implementation of the Car-Parrinello method
Keywords: portable, programming model, scalable, parallel, MIMD
Automatic Determination of an Initial Trust Region in Nonlinear Programming

Abstract. This paper presents a simple but efficient way to find a good initial trust region radius (ITRR) in trust region methods for nonlinear optimization. The method consists of monitoring the agreement between the model and the objective function along the steepest descent direction, computed at the starting point. Further improvements for the starting point are also derived from the information gleaned during the initializing phase. Numerical results on a large set of problems show the impact the initial trust region radius may have on the behavior of trust region methods and the usefulness of the proposed strategy.

1. Introduction. Trust region methods for unconstrained optimization were
first introduced by Powell in [14]. Since then, these methods have enjoyed a good
reputation on the basis of their remarkable numerical reliability in conjunction with
a sound and complete convergence theory. They have been intensively studied and
applied to unconstrained problems (see for instance [11], [14], and [15]), and also to
problems including bound constraints (see [4], [7], [12]), convex constraints (see [2],
[6], [18]), and non-convex ones (see [3], [5], and [19], for instance).
At each iteration of a trust region method, the nonlinear objective function is
replaced by a simple model centered on the current iterate. This model is built using
first and possibly second order information available at this iterate and is therefore
usually suitable only in a certain limited region surrounding this point. A trust region
is thus defined where the model is supposed to agree adequately with the true objective
function. Trust region approaches then consist of solving a sequence of subproblems
in which the model is approximately minimized within the trust region, yielding a
candidate for the next iterate. When a candidate is determined that guarantees
a sufficient decrease on the model inside the trust region, the objective function is
then evaluated at this candidate. If the objective value has decreased sufficiently,
the candidate is accepted as next iterate and the trust region is possibly enlarged.
Otherwise, the new point is rejected and the trust region is reduced. The updating
of the trust region is directly dependant on a certain measure of agreement between
the model and the objective function.
A good choice for the trust region radius as the algorithm proceeds is crucial.
Indeed, if the trust region is too large compared with the agreement between the
model and the objective function, the approximate minimizer of the model is likely
to be a poor indicator of an improved iterate for the true objective function. On
the other hand, too small a trust region may lead to very slow improvement in the
estimate of the solution.
When implementing a trust region method, the question then arises of an appropriate
choice for the initial trust region radius (ITRR). This should clearly reflect
the region around the starting point, in which the model and objective function approximately
agree. However, all the algorithms the author is aware of use a rather
ad hoc value for this ITRR. (Footnote: Department of Mathematics, Facultés Universitaires N. D. de la Paix, 61 rue de Bruxelles, B-5000 Namur, Belgium (as@math.fundp.ac.be). This work was supported by the Belgian National Fund for Scientific Research.) In many algorithms, users are expected to provide their own choice based on their knowledge of the problem (see [8] and [9]). In other cases,
the algorithm initializes the trust region radius to the distance to the Cauchy point
(see [13]), or to a multiple or a fraction of the gradient norm at the starting point
(see [8], and [9]). In each of these cases, the ITRR may not be adequate, and, even
if the updating strategies used thereafter generally allow to recover in practice from
a bad initial choice, there is usually some undesired additional cost in the number of
iterations performed. Therefore, the ITRR selection may be considered as important,
especially when the linear-algebra required per iteration is costly.
In this paper, we propose a simple but efficient way of determining the ITRR,
which consists of monitoring the agreement between the model and the objective
function along the steepest descent direction computed at the starting point. Further
improvements for the starting point will also be derived from the information gleaned
during this initializing phase. Numerical experiments, using a modified version of the
nonlinear optimization package LANCELOT (see [8]), on a set of relatively large test
examples from the CUTE test suite (see [1]), show the merits of the proposed strategy.
Section 2 of the paper develops the proposed automatic determination of a suitable
ITRR. The detailed algorithm is described in x3. Computational results are presented
and discussed in x4. We finally conclude in x5.
2. Automatic determination of an initial trust region.
2.1. Classical trust region update. We consider the solution of the unconstrained
minimization problem

(2.1)    min_{x ∈ ℝⁿ} f(x).
The function f is assumed to be twice-continuously differentiable and a trust region
method is used, whose iterations are indexed by k, to solve this problem.
At iteration k, the quadratic model of f(x) around the current iterate x^(k) is denoted by

(2.2)    m^(k)(x^(k) + s) = f^(k) + ⟨g^(k), s⟩ + ½ ⟨s, B^(k) s⟩,

where B^(k) is a symmetric approximation of the Hessian ∇²f(x^(k)). (Subsequently, we will use the notation f^(k) and g^(k) for f(x^(k)) and g(x^(k)), respectively.) The trust region is defined as the region where

(2.3)    ‖s‖ ≤ Δ^(k).

Here Δ^(k) denotes the trust region radius and ‖·‖ is a given norm.
When a candidate for the next iterate, x^(k) + s^(k) say, is computed that approximately minimizes (2.2) subject to the constraint (2.3), a classical framework for the trust region radius update is to set

(2.4)    Δ^(k+1) = β^(k) Δ^(k),

for some selected β^(k) satisfying (2.5), a condition that imposes β^(k) < 1 when the agreement between model and objective function is judged poor and allows β^(k) ≥ 1 when it is judged good. In (2.5), the quantity

(2.6)    ρ^(k) = [f(x^(k)) − f(x^(k) + s^(k))] / [m^(k)(x^(k)) − m^(k)(x^(k) + s^(k))]

represents the ratio of the achieved to the predicted reduction of the objective function.
The reader is referred to [8], [9], and [10] for instances of trust region updates using
(2.4)-(2.5).
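As a concrete illustration, the following sketch implements a 1-D version of the framework (2.4)-(2.6). The thresholds 0.25 and 0.75 and the halving/doubling factors are common illustrative choices, not values prescribed by this paper.

```python
from math import copysign

def update_radius(rho_k, step_norm, delta):
    # Radius update in the spirit of (2.4)-(2.5): the new radius is a
    # multiple of the step/old radius chosen from rho (2.6).
    # Thresholds and factors here are illustrative only.
    if rho_k < 0.25:               # poor agreement: contract
        return 0.5 * step_norm
    if rho_k > 0.75:               # good agreement: expand
        return max(2.0 * step_norm, delta)
    return delta                   # acceptable: keep the radius

def trust_region_1d(f, g, h, x, delta=1.0, iters=50):
    # 1-D trust region loop: minimize the model f + g*s + 0.5*h*s^2
    # over |s| <= delta, then accept or reject using the ratio (2.6).
    for _ in range(iters):
        gx, hx = g(x), h(x)
        if abs(gx) < 1e-12:
            break
        s = -gx / hx if hx > 0 else -copysign(delta, gx)
        s = max(-delta, min(delta, s))         # restrict to trust region
        pred = -(gx * s + 0.5 * hx * s * s)    # predicted decrease
        ared = f(x) - f(x + s)                 # achieved decrease
        rho_k = ared / pred if pred > 0 else -1.0
        delta = update_radius(rho_k, abs(s), delta)
        if rho_k > 0.01:                       # sufficient decrease: accept
            x += s
    return x

# f(x) = (x - 3)^2: the quadratic model is exact, so every step is accepted
# and the radius doubles until the minimizer is reached.
print(trust_region_1d(lambda t: (t - 3.0) ** 2,
                      lambda t: 2.0 * (t - 3.0),
                      lambda t: 2.0, 0.0))     # prints 3.0
```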
2.2. Initial trust region determination. The problem in determining an ITRR
Δ^(0) is to find a cheap way to test agreement between the model (2.2) and the objective
function at the starting point, x (0) . The method presented here is based on the
use of information generally available at this point, namely the function value and
the gradient. With the extra cost of some function evaluations, a reliable ITRR will
be determined, whose use will hopefully reduce the number of iterations required to
find the solution. As shown in x4, the possible saving produced in most cases largely
warrants the extra cost needed to improve the ITRR.
The basic idea is to determine a maximal radius that guarantees a sufficient agreement between the model and the objective function in the direction −g^(0), using an iterative search along this direction. At each iteration i of the search, given a radius estimate Δ_i^(0), the model and the objective function values are computed at the point

(2.7)    x_i^(0) = x^(0) − (Δ_i^(0) / ‖g^(0)‖) g^(0).

Writing m^(0) for the quadratic model (2.2) built at x^(0), the ratio

(2.8)    ρ_i^(0) = [f^(0) − f(x_i^(0))] / [f^(0) − m^(0)(x_i^(0))]

is also calculated, and the algorithm then stores the maximal value among the estimates whose associated ρ_ℓ^(0) is "close enough to one" (following some given criterion). It finally updates the current estimate Δ_i^(0).
The updating phase for Δ_i^(0) follows the framework presented in (2.4)-(2.5), but includes a more general test on ρ_i^(0) because the predicted change in (2.8) (unlike that in (2.6)) is not guaranteed to be positive. That is, we set

(2.9)    Δ_{i+1}^(0) = β_i^(0) Δ_i^(0),

for some β_i^(0) > 0 whose choice is detailed below. Note that update (2.9) only takes the adequacy between the objective function and its model into consideration, without taking care of the minimization of the objective function f. That is, it may happen that the radius estimate is decreased (β_i^(0) < 1, because ρ_i^(0) is not close enough to one), even though a big reduction is made in the objective function (if f(x_i^(0)) is much smaller than f^(0), for instance). On the other hand, the radius estimate could be augmented (β_i^(0) ≥ 1, because ρ_i^(0) is close enough to one), when actually the objective function has increased (if f(x_i^(0)) > f^(0), for instance). This is not contradictory, as far as we forget temporarily about the minimization of f and concentrate exclusively on the adequacy between the objective function and its model to find a good ITRR. In the next section, we shall consider an extra feature that will take account of a possible decrease in f during the process.
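The search just described can be sketched as follows. This is a simplified stand-in: a fixed halving/doubling factor replaces the interpolation-based choice of β_i^(0) developed below, and the tolerance `mu` plays the role of the "close enough to one" criterion; all parameter values are illustrative.

```python
from math import sqrt

def initial_radius(f, x0, g0, f0, mu=0.25, n_probe=8, delta0=1.0):
    # Probe the model/function agreement along -g0 (the idea of section 2.2).
    # Fixed halving/doubling stands in for the paper's interpolation-based
    # choice of beta; mu is the "close enough to one" tolerance.
    gnorm = sqrt(sum(gi * gi for gi in g0))
    d = [gi / gnorm for gi in g0]             # unit steepest-descent direction
    delta, best = delta0, 0.0
    for _ in range(n_probe):
        xi = [xj - delta * dj for xj, dj in zip(x0, d)]
        pred = delta * gnorm                  # model decrease, taking B^(0) = 0
        rho = (f0 - f(xi)) / pred             # the ratio (2.8)
        if abs(rho - 1.0) <= mu:
            best = max(best, delta)           # radius judged trustworthy
            delta *= 2.0                      # try a larger one
        else:
            delta *= 0.5                      # shrink the estimate
    return best if best > 0.0 else delta

# Simple quadratic: the linear model is accurate only for small steps,
# so the search settles on a modest radius.
f = lambda x: x[0] ** 2 + x[1] ** 2
print(initial_radius(f, [1.0, 1.0], [2.0, 2.0], f([1.0, 1.0])))  # prints 0.5
```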
In order to select a suitable value for β_i^(0) satisfying (2.9), a careful strategy
detailed below is applied, which takes advantage of the current available information.
This strategy uses quadratic interpolation (as already done in some existing framework
for trust region updates, see [9]), and has been inspired by the trust region updating
rules developed in [8].
The univariate function f(x^(0) − β Δ_i^(0) d^(0)) is first modeled by the quadratic q_i^(0)(β) that fits f^(0), f(x_i^(0)), and the directional derivative −Δ_i^(0) ⟨g^(0), d^(0)⟩ of f at x^(0), where d^(0) = g^(0)/‖g^(0)‖. Assuming that this quadratic does not coincide with the univariate quadratic m_i^(0)(β) = m^(0)(x^(0) − β Δ_i^(0) d^(0)), it is used to provide candidates for β_i^(0) for which the ratio ρ_i^(0) would be close to one (slightly smaller and slightly larger than one, respectively) if f were the quadratic q_i^(0)(β). That is, the equations

    f^(0) − q_i^(0)(β) = (1 − θ) [f^(0) − m_i^(0)(β)]    and    f^(0) − q_i^(0)(β) = (1 + θ) [f^(0) − m_i^(0)(β)]

are solved (where θ > 0 is a small positive constant), yielding candidates β_{i,1}^(0) and β_{i,2}^(0),
respectively. These two candidates will be considered as possible choices for a suitable
β_i^(0) satisfying (2.9), provided a careful analysis based on two principles is first
performed.
The first principle is to select and exploit, as much as possible, the relevant information that may be drawn from β_{i,1}^(0) and/or β_{i,2}^(0). For instance, if β_{i,1}^(0) is greater than one and the radius estimate must be decreased (because |ρ_i^(0) − 1| is too large), it should be ignored. The second principle consists in favouring the maximal value
for β_i^(0) among the relevant ones. Based on the observation that the linear-algebraic costs during a trust region iteration are generally less when the trust region has been contracted (because part of the computation may be reused after a contraction but not after an expansion), this corresponds to favouring an ITRR choice on the large rather than small side.
As in (2.9), we distinguish three mutually exclusive cases. The first case, for which β_i^(0) must lie in the interval [γ1, 1) for some constant 0 < γ1 < 1, occurs when |ρ_i^(0) − 1| is too large and the radius estimate must be decreased. Four possibilities are considered in this first case, that produce choice (2.14).

• Both β_{i,1}^(0) and β_{i,2}^(0) are irrelevant, that is, they recommend an increase of the radius estimate while in this case, in reality it should be decreased. These values are then ignored, and β_i^(0) is set to a fixed constant in [γ1, 1).
• All the available relevant information provides a smaller value than the lower bound γ1. Set β_i^(0) = γ1.
• Either β_{i,1}^(0) (or β_{i,2}^(0)) belongs to the appropriate interval, while β_{i,2}^(0) (or β_{i,1}^(0), respectively) is irrelevant or too small. The relevant one is selected.
• Both β_{i,1}^(0) and β_{i,2}^(0) are within the acceptable bounds. The maximum is then chosen.

These possibilities yield:

(2.14)    β_i^(0) = a fixed constant in [γ1, 1),    if min(β_{i,1}^(0), β_{i,2}^(0)) ≥ 1;
          γ1,    if max{β_{i,j}^(0) : β_{i,j}^(0) < 1} < γ1;
          max{β_{i,j}^(0) : γ1 ≤ β_{i,j}^(0) < 1},    otherwise.
In the second case (i.e. when ρ^(0)_i indicates good agreement), choice (2.15) is
performed to select a suitable β^(0)_i, based on the following reasoning.
• Both β^(0)_{i,1} and β^(0)_{i,2} are irrelevant because they recommend a decrease
of the radius estimate. β^(0)_i is then set to a fixed constant.
• At least one available piece of relevant information provides a larger value than
the upper bound γ₂. Since any maximal pertinent information is favoured, β^(0)_i is
set to this bound.
• Either β^(0)_{i,1} or β^(0)_{i,2} belongs to the appropriate interval, while the
other is irrelevant. β^(0)_i is set to the relevant one.
• Both β^(0)_{i,1} and β^(0)_{i,2} are within the acceptable bounds. The maximum is
then selected.
This gives choice (2.15).
Finally, three situations are considered in the third case for selecting β^(0)_i,
producing choice (2.16). Note that, since it is not clear from the value of ρ^(0)_i
whether the radius estimate should be decreased or increased, β^(0)_{i,1} and
β^(0)_{i,2} are trusted and indicate whether a decrease or an increase is to be
performed.
• Both β^(0)_{i,1} and β^(0)_{i,2} advise a decrease of the radius estimate, but
smaller than the lower bound allowed. This lower bound, γ₃, is then adopted.
• At least one among β^(0)_{i,1} and β^(0)_{i,2} recommends an increase of the
radius estimate, but larger than the upper bound allowed, γ₄. This upper bound is
used.
• The maximal value, max(β^(0)_{i,1}, β^(0)_{i,2}), belongs to the appropriate
interval and defines β^(0)_i. The radius estimate is thus either increased or
decreased, depending on this value.
This yields choice (2.16).
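The three selection rules above can be summarized in code. This is a schematic reconstruction of the bullet points, not the paper's exact choices (2.14)-(2.16): the notion of a "relevant" candidate is simplified to its direction relative to one, and `beta_fixed` stands for the fixed fallback constant mentioned in the text.

```python
def select_beta(case, b1, b2, lo, hi, beta_fixed):
    """Select a radius-update factor from the candidates b1, b2.

    case is 'decrease', 'increase' or 'unclear' (the three mutually
    exclusive cases of the text); [lo, hi] is the acceptable interval for
    the chosen case; beta_fixed is a fallback constant. A candidate is
    'relevant' if it points in the required direction: < 1 when a decrease
    is required, > 1 when an increase is required.
    """
    if case == 'decrease':
        relevant = [b for b in (b1, b2) if b < 1.0]
        if not relevant:                  # both recommend an increase: ignore
            return beta_fixed
        best = max(relevant)              # favour the maximal relevant value
        return min(max(best, lo), hi)     # clamp to the acceptable interval
    if case == 'increase':
        relevant = [b for b in (b1, b2) if b > 1.0]
        if not relevant:                  # both recommend a decrease: ignore
            return beta_fixed
        best = max(relevant)
        return min(max(best, lo), hi)
    # 'unclear' case: trust the candidates themselves, favouring the maximum
    best = max(b1, b2)
    return min(max(best, lo), hi)
```

The clamping to [lo, hi] plays the role of the bounds γ₁..γ₄, and taking the maximum implements the second principle of favouring the large side.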
3. The algorithm. We are now in position to define our algorithm in full detail.
Besides the constants used in (2.5), (2.9), (2.12) and (2.13), the ITRR Algorithm
depends on the constant µ₀ > 0. This constant determines the lowest acceptable
level of agreement between the model and the objective function that must be
reached at a radius estimate for it to become a candidate for the ITRR.
The iterations of Algorithm ITRR will be denoted by the index i. While the
algorithm proceeds, Δmax will record the current maximal radius estimate which
guarantees a sufficient agreement between the model and the objective function.
Finally, the imposed limit on the number of iterations will be denoted by imax and
fixes the degree of refinement used to determine the ITRR.
ITRR Algorithm.
Step 0. Initialization. Let the starting point x^(0) be given. Compute f^(0), g^(0)
and B^(0). Choose or compute an ITRR estimate Δ^(0)_0 and set i = 0.
Step 1. Maximal radius estimate update. Compute f^(0)_i and ρ^(0)_i as defined
in (2.7) and (2.8). If condition (3.1) holds, set Δmax = Δ^(0)_i.
Step 2. Radius estimate update. If i ≥ imax, go to Step 3. Otherwise, compute
β^(0)_{i,1} and β^(0)_{i,2} using (2.12) and (2.13), respectively, compute β^(0)_i
using (2.14) in the first case, using (2.15) in the second case, and
using (2.16) otherwise,
and set Δ^(0)_{i+1} = β^(0)_i Δ^(0)_i.
Increment i by one and go to Step 1.
Step 3. Final radius update. If Δmax has been set, let Δ^(0) = Δmax.
Otherwise, set Δ^(0) according to (3.3).
Stop ITRR Algorithm.
The trust region algorithm may then begin, with Δ^(0) as ITRR.
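The overall structure of Algorithm ITRR may be sketched as follows. This is a schematic outline only: `agreement_ratio` and `update_factor` stand in for the ratio (2.8) and for choices (2.14)-(2.16), and the acceptance test `|ρ − 1| ≤ µ₀` (with an assumed default µ₀ = 0.25) is a simplified stand-in for condition (3.1).

```python
def itrr(delta0, agreement_ratio, update_factor, imax, mu0=0.25):
    """Schematic skeleton of the ITRR search.

    agreement_ratio(delta) returns the ratio rho between the actual and
    predicted reduction at radius delta; update_factor(rho) returns the
    multiplier beta (choices (2.14)-(2.16) of the text).
    """
    delta = delta0
    delta_max = None
    for i in range(imax + 1):
        rho = agreement_ratio(delta)               # Step 1
        if abs(rho - 1.0) <= mu0:                  # stand-in for (3.1)
            # Record the maximal radius with sufficient agreement.
            delta_max = delta if delta_max is None else max(delta_max, delta)
        if i == imax:                              # Step 2 termination test
            break
        delta *= update_factor(rho)                # next radius estimate
    # Step 3: keep the best agreeing radius, else fall back (stand-in
    # here: the initial estimate, in place of update (3.3)).
    return delta_max if delta_max is not None else delta0
```

A run that doubles the radius while agreement holds illustrates how Δmax tracks the largest successful estimate.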
We end this section by introducing an extra feature in the above scheme, which
takes advantage of the computations of f^(0)_i, the function values at the trial
points x^(0) + Δ^(0)_i d^(0) (see Step 1). That is, during the search for an
improved radius estimate, we simply monitor a possible decrease in the objective
function at each trial point. Doing so, at the end of Algorithm ITRR, rather than
updating the final radius, we move to the trial point that produced the best
decrease in the objective function (if at least one decrease has been observed).
This point then becomes a new starting point, at which Algorithm ITRR is repeated
to compute a good ITRR. Of course, a limit is needed on the number of times the
starting point is allowed to change. Denoting by jmax this limit and by j the
corresponding counter (initialized to one in Step 0), this extra feature may be
incorporated in Algorithm ITRR using two further instructions.
The first one, added at the end of Step 1, updates δ and σ, where δ denotes the
current best decrease observed in the objective function and σ stores the
associated radius. (These two quantities should be initialized to zero in Step 0.)
The second instruction, which comes at the beginning of Step 3, moves the starting
point to the trial point associated with σ when a decrease has been observed and
j < jmax, and then increments j by one and goes to Step 0.
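The starting-point refinement may be sketched as a wrapper around the basic search. The helper `itrr_with_trials`, returning the chosen radius together with the best observed decrease δ and the trial point achieving it, is a hypothetical interface invented for this illustration.

```python
def refine_start(x0, itrr_with_trials, jmax):
    """Repeat the ITRR search from improved starting points.

    itrr_with_trials(x) is assumed to return (delta, best_decrease,
    best_point): the radius chosen from x, the best objective-function
    decrease observed at the trial points, and the point achieving it.
    """
    x = x0
    for j in range(jmax + 1):
        delta, best_decrease, best_point = itrr_with_trials(x)
        # No decrease observed, or no moves left: keep this radius.
        if best_decrease <= 0.0 or j == jmax:
            return x, delta
        x = best_point            # restart the search from the improved point
    return x, delta
```

With jmax = 1, as in the paper's tests, at most one such move of the starting point is performed.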
When starting a trust region algorithm with a rather crude approximation of the
solution, this kind of improvement, which exploits the steepest descent direction, may
be very useful. It is particularly beneficial when the cost of evaluating the function
is reasonable. A similar concept is used in truncated Newton methods (see [16], and
[17]).
Note that a change in the starting point requires the computation of a new gradient
and a new model, while the cost for determining the ITRR is estimated in
terms of function evaluations. Suitable choices for the limits imax and jmax and for
the constants used in Algorithm ITRR may depend on the problem type and will be
discussed in §4.
4. Numerical results. For a good understanding of the results, it is necessary
to give a rapid overview of the framework in which Algorithm ITRR has been
embedded, namely the large-scale nonlinear optimization package LANCELOT/SBMIN
(see [8]), designed for solving the bound-constrained minimization problem

    minimize f(x) over x ∈ ℝⁿ,        (4.1)

subject to the simple bound constraints

    l ≤ x ≤ u,        (4.2)

where any of the bounds in (4.2) may be infinite.
SBMIN is an iterative trust region method whose version used for our testing has
the following characteristics:
• Exact first and second derivatives are used.
• The trust region is defined using the infinity norm in (2.3) for each k.
• The trust region update strategy follows the framework (2.4)-(2.5), and
implements a mechanism for contracting the trust region which is more
sophisticated than that for expanding it (see [8], p. 116).
• The solution of the trust region subproblem at each iteration is accomplished
in two stages. In the first, the exact Cauchy point is obtained to ensure a
sufficient decrease in the quadratic model. This point is defined as the first
local minimizer of m^(k)(x^(k) + d^(k)(t)), the quadratic model along the Cauchy
arc d^(k)(t) defined as

    d^(k)(t) = P(x^(k) − t g^(k), l^(k), u^(k)) − x^(k),        (4.3)

where l^(k), u^(k) and the projection operator P(x, l, u) are defined componentwise
by l^(k)_j = max(l_j, x^(k)_j − Δ^(k)), u^(k)_j = min(u_j, x^(k)_j + Δ^(k)) and
P(x, l, u)_j = min(max(x_j, l_j), u_j).
The Cauchy arc (4.3) is continuous and piecewise linear, and the exact Cauchy
point is found by investigating the model behaviour between successive pairs
of breakpoints (points at which a trust region bound or a true bound is
encountered along the Cauchy arc), until the model starts to increase. The
variables which lie on their bounds at the Cauchy point (either a trust region
bound or a true bound) are then fixed.
• The second stage applies a truncated conjugate gradient method (in which an
11-band preconditioner is used) to further reduce the quadratic model by
changing the values of the remaining free variables.
The reader is referred to [8], Chapter 3, for a complete description of SBMIN.
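The componentwise projection and the breakpoints of the piecewise-linear arc can be sketched as follows. This is an illustrative sketch of the standard construction, not LANCELOT code; the function names are invented for the example.

```python
def project(x, l, u):
    """Componentwise projection of x onto the box [l, u]:
    P(x, l, u)_j = min(max(x_j, l_j), u_j)."""
    return [min(max(xi, li), ui) for xi, li, ui in zip(x, l, u)]

def breakpoints(x, g, l, u):
    """Values t > 0 at which a component of x - t*g hits a bound, i.e.
    the breakpoints of the projected (piecewise-linear) Cauchy arc."""
    ts = []
    for xi, gi, li, ui in zip(x, g, l, u):
        if gi > 0 and li > -float('inf'):
            ts.append((xi - li) / gi)      # moving down towards lower bound
        elif gi < 0 and ui < float('inf'):
            ts.append((xi - ui) / gi)      # moving up towards upper bound
    return sorted(t for t in ts if t > 0)
```

In the method above, the bounds fed to `project` would be the intersected bounds l^(k), u^(k) combining the true bounds with the trust region box.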
We selected our 77 test examples as the majority of large and/or difficult nonlinear
unconstrained or bound-constrained test examples in the CUTE (see [1]) collection.
Only problems which took excessive cpu time (more than 5 hours) or an excessive
number of iterations (more than 1500) were excluded, since it was not clear that
they would have added much to the results. All experiments were made in double
precision, on a DEC 5000/200 workstation, using optimized (-O) Fortran 77 code and
DEC-supplied BLAS.
The values for the constants γ₁ and γ₂ used in our tests of Algorithm ITRR have
been inspired by the trust region update strategy used in [8].
Suitable values for the other constants have been determined after extensive
testing. (Note that, fortunately, slight variations of these constants have no
significant impact on the behaviour of Algorithm ITRR.) We set jmax = 1, meaning
that at most one move of the starting point is allowed, and imax = 4, so that 5
radius estimates (including the first one) are examined per starting point. These
values result from a compromise between the minimum number of radius estimates
that should be sampled to produce a reasonable ITRR and the maximum number of
extra function evaluations, which may amount to up to 11 (for imax = 4 and
jmax = 1).
4.1. The quadratic case. Before presenting our results for the general nonlinear
case, a preliminary study of LANCELOT's behaviour on quadratic problems is
presented in this section, intended to highlight some of the characteristics of
the specific trust region method implemented there. This should be helpful in
setting up a more adequate framework, in which a reliable interpretation of our
testing for the general nonlinear case will become possible.
When the objective function f in problem (4.1)-(4.2) is a quadratic function,
model (2.2) is identical to f (since exact second derivatives are used in (2.2)).
The region where this model should be trusted is therefore infinite at any stage
of a trust region algorithm. Hence, a logical choice for the ITRR in that case is
Δ^(0) = ∞. However, when no particular choice is specified by the user for the
ITRR, LANCELOT does not make any distinction when solving a quadratic problem and
sets Δ^(0) to its default value.
On the other hand, the equations in (2.11) have no solution when q^(0)_i coincides
with m^(0) (which is the case if f is a quadratic). Therefore, in order to
circumvent this possibility, the following instruction has been added in
Algorithm ITRR (before (3.1) in Step 1):

    If ρ^(0)_i = 1, set Δmax = ∞ and go to Step 3.        (4.4)

Note that this test does not ensure that f is a quadratic. If needed, a careful
strategy should rather be developed to properly detect this special situation.
In order to compare both approaches, we have tested quadratic problems from the
CUTE collection, using LANCELOT with its default Δ^(0) and with Algorithm ITRR
in which (4.4) has been added (see LAN and ITRR, respectively, in the tables
below). Results are presented in Tables 1 and 2 for a representative sample of
quadratic problems (6 unconstrained and 6 bound-constrained). In these tables and
the following ones, n denotes the number of variables in the problem, "#its" is
the number of major iterations needed to solve the problem, "#cg" reports the
number of conjugate gradient iterations performed beyond the Cauchy point, and
the last column gives the cpu times in seconds. Note that, for all the tests
reported in this section, only one additional function evaluation has been needed
by Algorithm ITRR to set Δ^(0) = ∞.
Table 1: A comparison for the unconstrained quadratic problems. [Data rows not
recoverable; columns report #its, #cg and cpu time for LAN and ITRR, on problems
including TESTQUAD (n = 1000).]
Table 1 shows that, as expected, an infinite choice is the best when f is a
quadratic function and the problem is unconstrained. On the other hand, a
substantial increase in the number of conjugate gradient iterations is observed
in Table 2 (except for problem TORSIONF) when bound constraints are imposed,
while the number of major iterations decreases. At first glance, these results
may be quite surprising, but they closely depend on the LANCELOT package itself.
This package includes a branch, after the conjugate gradient procedure, that
allows re-entry into this procedure when the convergence criterion (based on the
relative residual) has been satisfied but the computed step is small relative to
the trust region radius and the model's gradient norm. This is intended to save
major iterations, when possible. In the absence of bound constraints, this avoids
an early termination of the conjugate gradient process, allowing attainment of
the solution in a single major iteration (see Table 1). In the presence of
bounds, however, these (possibly numerous) re-entries may not be justified as
long as the correct set of active bounds has not yet been identified. This
behaviour is detailed in Table 3 for a sequence of increasing initial radii, and
exhibits, in particular, a high sensitivity to a variation of the ITRR, which is
a rather undesirable feature.

Table 2: A comparison for the bound-constrained quadratic problems. [Data rows
not recoverable; columns report #its, #cg and cpu time for LAN and ITRR.]
Table 3: A comparison for a sequence of increasing initial trust region radii
Δ^(0) with LANCELOT. [Only partially recoverable. For problem OBSTCLAL
(n = 1024), #cg grows over the radius sequence as 48, 55, 70, 76, 93, 117, with
cpu times 14.64, 15.73, 18.52, 18.28, 20.97, 24.75 seconds; for another problem,
#its is 4, 3, 3, 3, 3, 3 with times 100.63, 5.12, 5.27, 5.39, 5.37, 5.51
seconds.]
For comparison purposes, Tables 4 and 5 present the results when removing the
aforementioned branch in LANCELOT. This time, an infinitely large value for the
ITRR is justified. The conjugate gradient and timing results for Algorithm ITRR are
much closer to those of LANCELOT in Table 4 than in Table 2, with a slightly better
performance for problem OBSTCLAE and a slightly worse performance for problem
JNLBRNG1 (even though a clear improvement occurred due to the branch removal).
For problem JNLBRNG1 (as for others in our test set), a limited trust region acts as
an extra safeguard to stop the conjugate gradient when the correct active set is not
yet detected. This effect of the trust region may be considered as an advantage of
trust region methods.
In order to complete the above analysis, we now consider problem TORSIONF in
Table 2. This problem is characterized by a very large number of active bounds
at the optimal solution, while most of the variables are free at the starting point.
Because of the very small ITRR, the identification process of the correct active set
during the Cauchy point determination is hindered. That is, during the early major
iterations, the trust region bounds are all activated at the Cauchy point, without
any freedom left for the conjugate gradient procedure. When the trust region has
been slightly enlarged, besides trust region bounds, some of the true bounds are also
identified by the Cauchy point, but many fewer than would be the case if the
trust region were large enough. That is, the conjugate gradient procedure
in LANCELOT is restarted each time a true bound is encountered (which occurs at
almost every conjugate gradient iteration), in order to maintain the conjugacy between
the directions, and the iteration is stopped only when a trust region bound is
reached. All this produces extra linear-algebraic costs that greatly deteriorates the
algorithm's performance. On the other hand, when starting with a large ITRR, a
good approximation of the correct active set is immediately detected by the Cauchy
point, and very little work has to be performed during the conjugate gradient process.
This observation strengthens the priority given to a large choice for the ITRR, when
possible.
Table 4: A comparison for the bound-constrained quadratic problems (modified
version). [Data rows not recoverable; columns report #its, #cg and cpu time for
LAN and ITRR.]
Table 5: A comparison for a sequence of increasing initial trust region radii
with LANCELOT (modified version). [Only partially recoverable. For three
problems, #its over the radius sequence is 9, 6, 6, 6 (times 33.43, 30.38, 30.57,
30.33 seconds), 8, 8, 8, 8 (times 14.21, 14.24, 14.28, 14.19 seconds) and
5, 4, 4, 4 (times 102.02, 5.66, 5.61, 5.55 seconds).]
In the light of the above analysis, we tested the 77 nonlinear problems with the
original version of LANCELOT versus a modified version, where the extra feature to
improve a too small step on output of the conjugate gradient process has been ignored.
Slight differences in the results have generally been observed, which were more often
in favour of the modified version. For this reason, and in order to avoid an
excessive sensitivity of the method to the trust region size as well as to avoid
penalizing a large choice for the ITRR (especially when this choice reflects a
real adequacy between f and its model), we decided to use the modified version
for the testing of the nonlinear case
presented in the next section.
4.2. The general case. In order to test Algorithm ITRR, we ran LANCELOT
successively with:
• Algorithm ITRR, starting with the initial estimate Δ^(0)_0 computed by
LANCELOT;
• the Δ^(0) computed by LANCELOT (the default choice when no other is made by
the user);
• Δ^(0) set to the distance to the unconstrained Cauchy point, as suggested by
Powell in [13], except when the quadratic model is indefinite, in which case we
omitted the test.
The detailed results are summarized in Tables 6 and 7 for the 64 unconstrained
problems (possibly including some fixed variables), and in Table 8 for the 13
bound-constrained problems (see ITRR, LAN and CAU, respectively). For each case,
the number of major iterations ("#its") and the cpu times in seconds ("time")
are reported. Note that the number of function evaluations may then easily be
deduced: for LANCELOT without Algorithm ITRR, it is the number of major
iterations plus 1, while for LANCELOT with Algorithm ITRR, it is the number of
major iterations plus 12 if the starting point is refined once (which is
indicated by an asterisk in the first column), and plus 6 otherwise. The tables
also present the relative performances for the number of function evaluations,
the number of major iterations and the cpu times (see "%f", "%its" and "%time",
respectively), computed as the difference between the LAN (respectively CAU)
value and the ITRR value, relative to the former and multiplied by 100, where
the quantity compared is, in turn, the number of function evaluations, "#its"
and "time". In these tables, a "+" indicates that the performance is in favour
of Algorithm ITRR and a "−" that it is not. Note that a difference of less than
five percent in the cpu times is regarded as insignificant.
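A minimal sketch of this comparison, assuming the percentages are normalized by the non-ITRR value (an assumption made for the illustration):

```python
def relative_performance(other, itrr):
    """Percentage by which ITRR improves on the other variant (LAN or CAU);
    positive values favour ITRR. Normalization by `other` is assumed."""
    return 100.0 * (other - itrr) / other
```

For example, 100 major iterations for ITRR against 200 for LAN would be reported as +50, while 125 against 100 would be reported as −25.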
The results first show that, all in all, Algorithm ITRR improves the overall cpu
time performance of LANCELOT for a large number of problems: a majority of
improvements against 13 deteriorations (the remainder being ties) when comparing
with LAN, and improvements against 19 deteriorations and 12 ties when comparing
with CAU.
More importantly, when they exist, these improvements may be quite significant (19
of them are greater than 30% when comparing with LAN, while 21 of them are greater
than 30% when comparing with CAU), and confirm the impact the ITRR choice may
have on the method behaviour. On the other hand, the damage is rather limited
when it occurs (except for a few cases). Note that the larger number of improvements
observed when comparing with LAN does not mean that the ITRR computed by
LANCELOT is worse than the distance to the unconstrained Cauchy point. Actually,
the improvements when comparing Algorithm ITRR with CAU are generally more
significant, and the results show that, on average, LAN and CAU may be considered
as equivalent (when compared together).
As pointed out by the number of asterisks in the first column of Tables 6 to 8,
a change in the starting point occurs very often and makes a significant contribution
to the good performance observed. Columns 4 and 7 detail the relative extra cost in
terms of function evaluations produced by Algorithm ITRR. Note that, in the current
case where the starting point is refined once, the (fixed) extra cost amounts to
up to 11 extra function evaluations, which is quite high on average compared with
the total
number of function evaluations. However, considering the relative performance in the
cpu times, the extra cost is generally covered, sometimes handsomely, by the saving
produced in the number of major iterations (that is, when %its is positive, %time is
generally positive too). Only a few cases produce a saving that just balances the
extra function evaluations (see the problems for which %its > 0 and
0 ≤ %time < 5), while never does a saving occur that fails to counterbalance the
additional work. On the other hand, when a deterioration occurs in the cpu times
(%time < 0), it is rarely due exclusively to the extra cost of Algorithm ITRR
(%its = 0). As a consequence, except when function evaluations are very
expensive, the use of Algorithm ITRR may be considered efficient and relatively
cheap compared with the overall cost of the problem solution.

Table 6: A comparison for the unconstrained problems. [Columns report #its for
ITRR, LAN and CAU, the relative performances %f, %its and %time, and cpu times.
Only a few rows are recoverable, e.g. BROYDN7D: 92 major iterations for ITRR
against 74 for LAN and 73 for CAU, with times 87.8, 72.5 and 71.1 seconds;
GENROSE: 1194 against 1290 and 1100, with times 1023.8, 1115.6 and 920.3
seconds.]

Table 7: A comparison for the unconstrained problems (end). [Data rows largely
not recoverable; problems include NONDIA, POWER and RAYBENDS.]

Table 8: A comparison for the bound-constrained problems. [Only partially
recoverable, e.g. QRTQUAD: 118 major iterations for ITRR against 173 for LAN and
315 for CAU, with times 11.9, 16.2 and 28.4 seconds.]
We have observed that only 4 problem tests terminated Algorithm ITRR using
update (3.3), while a successful maximal radius satisfying condition (3.1) was
selected in the 73 other cases. We also experimented with a simpler choice for
β^(0)_i, applied when ρ^(0)_i is close enough to one, and it resulted in a
markedly worse performance compared with that of Algorithm ITRR. This proves the
necessity of a sophisticated selection procedure for β^(0)_i, one that allows a
swift recovery from a bad initial value of the ratio ρ^(0)_i.
We conclude this analysis by commenting on the negative results of Algorithm
ITRR on problem TQUARTIC (see Table 7), when comparing with CAU, and on
problem LINVERSE (see Table 8), especially when comparing with LAN. For problem
TQUARTIC (a quartic), the ITRR computed by both LANCELOT and Algorithm
ITRR is quite small and prevents rapid progress towards the solution.
The trust region hence needs to be enlarged several times during the minimization
algorithm. On the other hand, the distance to the Cauchy point is sufficiently large
to allow solving the problem in one major iteration. For problem LINVERSE, the
ITRR selected by Algorithm ITRR corresponds to an excellent agreement between the
function and the model in the steepest descent direction. However, the starting point
produced by Algorithm ITRR, although reducing significantly the objective function
value, requires more work from the trust region method to find the solution. This is
due to a higher nonlinearity of the objective function in the region where this new
point is located and is, in a sense, just bad luck! When testing Algorithm ITRR
on this problem, the same ITRR as LANCELOT had been selected,
hence producing the same performance. On the other hand, we also tested a series
of slightly perturbed initial trust region radii, and observed a rapid deterioration of
the performance of the method. Problem LINVERSE is thus very sensitive to the
ITRR choice. Note that this sensitivity has been observed on other problems during
our testing, and leads to the conclusion that a good ITRR sometimes may not be a
sufficient condition to guarantee an improvement of the method.
We finally would like to note that no modification has been made in Algorithm
ITRR (nor has a constrained Cauchy point for CAU been considered) when solving
the bound-constrained problems reported in the paper. The purpose here was simply
to illustrate the proposed method on a larger sample than unconstrained problems
only. Of course, the author is aware that, in the presence of bound constraints,
a more reliable version of Algorithm ITRR should include a projection of each
trial point onto the bound constraints.
We end this section by briefly commenting on the choice of the constants and
upper bounds on the iteration counters of Algorithm ITRR. Although a reasonable
choice has been used for the testing presented in this paper, a specific one could be
adapted depending on the a priori knowledge of a given problem. If, for instance,
function evaluations are costly, a lower value for the bounds imax and jmax could
be selected. Note however that imax should not be chosen excessively small, in order
to be fairly sure that condition (3.1) will be satisfied (unless this condition is suitably
relaxed by choosing the value of µ₀). On the other hand, if the starting point is
known to be far away from the solution, it may be worthwhile to increase the value of
jmax, provided the function is cheap to evaluate. Improved values for the remaining
constants closely depend on a knowledge of the level of
nonlinearity of the problem.
5. Conclusions and perspectives. In this paper, we propose an automatic
strategy to determine a reliable ITRR for trust region type methods. This strategy
mainly investigates the adequacy between the objective function and its model in the
steepest descent direction available at the starting point. It further includes a specific
method for refining the starting point by exploiting the extra function evaluations
performed during the ITRR search.
Numerical tests are reported and discussed, showing the efficiency of the proposed
approach and giving additional insights to trust region methods for unconstrained and
bound-constrained optimization. The encouraging results suggest some directions for
future research, such as the use of a truncated Newton direction computed at the
starting point rather than the steepest descent direction for the search of an ITRR.
An adaptation of the algorithm for methods designed to solve general constrained
problems is presently being studied.
Acknowledgement. The author wishes to thank an anonymous referee for suggesting
a comparison of Algorithm ITRR with the choice of setting the ITRR to the
distance to the Cauchy point (as in [13]). Thanks are also due to Andy Conn, Nick
Gould, Philippe Toint and Michel Bierlaire who contributed to improve the present
manuscript.
References
- CUTE: Constrained and Unconstrained Testing Environment.
- A trust region algorithm for nonlinearly constrained optimization.
- A trust region strategy for nonlinear equality constrained optimization.
- Global convergence of a class of trust region algorithms for optimization using inexact projections on convex constraints.
- Global convergence of a class of trust region algorithms for optimization with simple bounds.
- Numerical methods for unconstrained optimization and nonlinear equations.
- Practical Methods of Optimization: Unconstrained Optimization.
- A Fortran subroutine for solving systems of nonlinear algebraic equations.
- The conjugate gradient method and trust regions in large scale optimization.
- Towards an efficient sparsity exploiting Newton method for minimization.
- A trust region algorithm for equality constrained minimization: convergence properties and implementation.
On the Stability of Null-Space Methods for KKT Systems

Abstract. This paper considers the numerical stability of null-space methods for Karush-Kuhn-Tucker (KKT) systems, particularly in the context of quadratic programming. The methods we consider are based on the direct elimination of variables, which is attractive for solving large sparse systems. Ill-conditioning in a certain submatrix A in the system is shown to adversely affect the method insofar as it is commonly implemented. In particular, it can cause growth in the residual error of the solution, which would not normally occur if Gaussian elimination or related methods were used. The mechanism of this error growth is studied and is not due to growth in the null-space basis matrix Z, as might have been expected, but to the indeterminacy of this matrix. When LU factors of A are available it is shown that an alternative form of the method is available which avoids this residual error growth. These conclusions are supported by error analysis and Matlab experiments on some extremely ill-conditioned test problems. These indicate that the alternative method is very robust in regard to residual error growth and is unlikely to be significantly inferior to the methods based on an orthogonal basis matrix. The paper concludes with some discussion of what needs to be done when LU factors are not available.

1. Introduction
A Karush-Kuhn-Tucker (KKT) system is a linear system

    K (x, y)ᵀ = (b, c)ᵀ,        (1.1)

* A version of this paper was presented at the Dundee Biennial Conference in
Numerical Analysis, June 1995, and the Manchester IMA Conference on Linear
Algebra, July 1995.

R. Fletcher and T. Johnson
involving a symmetric matrix of the form

    K = ( G    A )
        ( Aᵀ   0 ).        (1.2)

Such systems are characteristic of the optimization problem

    minimize q(x)  subject to  Aᵀx = b,        (1.3)

in which there are linear equality constraints and the objective q(x) is a
quadratic function. The KKT system (1.1) represents the first order necessary
conditions for a locally
minimum solution of this problem, and y is a vector of Lagrange multipliers (see [3]
for example). Problems like (1.3) arise in many fields of study, such as in Newton's
method for nonlinear programming, and in the solution of partial differential equations
involving incompressible fluid flows, incompressible solids, and the analysis of plates and
shells. Also problems with inequality constraints are often solved by solving a sequence of
equality constrained problems, most particularly in the active set method for quadratic
programming.
In (1.2) and (1.3), G is the symmetric n × n Hessian matrix of the objective
function, A is the n × m Jacobian matrix of the linear constraints, and m ≤ n.
We assume that
A has full rank, for otherwise K would be singular. In some applications, A does not
immediately have full rank, but can readily be reduced to a full rank matrix by a suitable
transformation.
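The block structure (1.2) and system (1.1) can be illustrated numerically. This is a small sketch using numpy; the particular G, A and right-hand side are arbitrary illustrative values, not data from the paper.

```python
import numpy as np

def kkt_matrix(G, A):
    """Assemble the symmetric KKT matrix K = [[G, A], [A.T, 0]] of (1.2)."""
    m = A.shape[1]
    return np.block([[G, A], [A.T, np.zeros((m, m))]])

# Illustrative example: n = 2 variables, m = 1 equality constraint A.T x = b.
G = np.array([[2.0, 0.0], [0.0, 2.0]])
A = np.array([[1.0], [1.0]])
K = kkt_matrix(G, A)

rhs = np.array([1.0, 0.0, 1.0])      # arbitrary right-hand side (b, c) of (1.1)
sol = np.linalg.solve(K, rhs)
x, y = sol[:2], sol[2:]              # primal solution and Lagrange multiplier
```

As discussed below, when A is nearly rank deficient the interest lies in how large the residual K·sol − rhs becomes under different solution methods.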
There are various ways of solving KKT systems, most of which can be regarded as
symmetry-preserving variants of Gaussian elimination with pivoting (see for example
Forsgren and Murray [4]). This approach is suitable for a one-off solution of a large
sparse KKT system, by incorporating a suitable data structure which permits fill-in in
the resulting factors. Our interest in KKT systems arises in a Quadratic Programming
(QP) context, where we are using the so-called null-space method to solve the sequence
of equality constrained problems that arise. This method is described in Section 2. An
important feature of QP is that the successive matrices K differ only in that one column
is either added to or removed from A. The null-space method allows this feature to be
used advantageously to update factors of the reduced Hessian matrix that arises when
solving the KKT system. However, in this paper we do not consider the updating issue, but
concentrate on the solution of a single problem like (1.3), but in a null-space context. In
fact the null-space method is related to one of the above mentioned variants of Gaussian
Elimination, and this point is discussed towards the end of Section 3.
In this paper we study the numerical stability of the null-space method when the
matrix K is ill-conditioned. This arises either when the matrix A is close to being rank
deficient or when the reduced Hessian matrix is ill-conditioned. It is well known however
that Gaussian elimination with pivoting usually enables ill-conditioned systems to be
solved with small backward error (that is the computed solution is the exact solution of
Stability of Null-Space Methods 3
a nearby problem). As Wilkinson [6] points out, the size of the backward error depends
only on the growth in certain reduced matrices, and the amount of growth is usually
negligible for an ill-conditioned matrix. Although it is possible for exponential growth
to occur (we give an example for a KKT system), this is most unlikely in practice. A
consequence of this is that if the computed solution is substituted into the system of
equations, a very accurate residual is obtained. Thus variants of Gaussian elimination
with pivoting usually provide a very stable method for solving ill-conditioned systems.
However this argument does not carry over to the null-space method and we indicate
at the end of Section 2 that there are serious concerns about numerical stability when A is
nearly rank deficient. We describe some Matlab experiments in Section 6 which support
these concerns. In particular the residual of the KKT system is seen to be proportional
to the condition number of A. We present some error analysis in Section 4 which shows
how this arises.
When LU factors of A are available, we show in Section 3 that there is an alternative
way of implementing a null-space method, which avoids the numerical instability. This is
also supported by Matlab experiments. The reasons for this are described in Section 5,
and we present some error analysis which illustrates the difference in the two approaches.
In practice, when solving large sparse QP problems, LU factors are not usually available
and it is more usual to use some sort of product form method. We conclude with some
remarks about what can be done in this situation to avoid numerical instability.
2 Null-Space Methods
A null-space method (see e.g. [3]) is an important technique for solving quadratic programming
problems with equality constraints. In this section we show how the method
can be derived as a generalised form of constraint elimination. The key issue in this
procedure is the formation of a basis for the null space of A. We determine the basis
in such a way that we are able to solve large sparse problems efficiently. When A is
ill-conditioned we argue that there is serious concern for the numerical stability of the
method.
The null space of A may be defined by

   N(A) = { z : A^T z = 0 },

and has dimension n − m when A has full rank. Any matrix Z with n rows whose columns are a basis for N(A) will be referred to as a null-space matrix for A. Such a matrix satisfies A^T Z = 0 and has linearly independent columns. A general specification for computing a null-space matrix is to choose an n × (n − m) matrix V such that the matrix [A  V]
4 R. Fletcher and T. Johnson
is non-singular. Its inverse is then partitioned in the following way
n\Gammam
It follows from the properties of the inverse that A T . By construc-
tion, the columns of Z are linearly independent, and it follows that these columns form
a basis for N (A).
The value of this construction is that it enables us to parametrize the solution set of the (usually) underdetermined system A^T x = b in (1.3) by

   x = Y b + Z v.        (2.2)

Here Y b is one particular solution of A^T x = b, and any other solution x differs from Y b by a vector, Z v say, in N(A). Thus (2.2) provides a general way of eliminating the constraints, by expressing the problem in terms of the reduced variables v. Hence if (2.2) is substituted into the objective function of (1.3), we obtain the reduced problem

   minimize over v:   (1/2) v^T Z^T G Z v + v^T Z^T (G Y b − c).        (2.3)

We refer to Z^T G Z as the reduced Hessian matrix and Z^T (G Y b − c) as the reduced gradient vector (at the point x = Y b). A sufficient condition for (2.3) to have a unique minimizer is that Z^T G Z is positive definite. In this case there exist Choleski factors of Z^T G Z, and (2.3) can be solved by finding a stationary point, that is by solving the linear system

   Z^T G Z v = −Z^T (G Y b − c).        (2.4)

Then substitution of v into (2.2) determines the solution x of (1.3). The vector G x − c is the gradient of the objective function at the solution, so a vector y of Lagrange multipliers satisfying A y = G x − c can then be obtained from

   y = Y^T (G x − c)        (2.5)

by virtue of the property that Y^T A = I. The vectors x and y also provide the solution to (1.1) as can readily be verified.
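The construction (2.1) through (2.5) can be sketched in a few lines of numpy. This is an illustrative sketch, not the authors' code; all variable names are ours, and the KKT residual checked at the end assumes the convention G x − A y = c, A^T x = b for system (1.1).

```python
import numpy as np

# Build Y and Z from the inverse of [A V] as in (2.1), solve the reduced
# system (2.4), and recover the Lagrange multipliers from (2.5).
rng = np.random.default_rng(0)
n, m = 6, 2
A = rng.standard_normal((n, m))            # n x m constraint Jacobian, full rank
V = rng.standard_normal((n, n - m))        # any V making [A V] nonsingular
G = rng.standard_normal((n, n)); G = G.T @ G + n * np.eye(n)
c = rng.standard_normal(n); b = rng.standard_normal(m)

W = np.linalg.inv(np.hstack([A, V]))       # rows of W are [Y^T; Z^T] as in (2.1)
Y, Z = W[:m].T, W[m:].T
assert np.allclose(A.T @ Z, 0)             # columns of Z span N(A)

s = Y @ b                                  # particular solution of A^T x = b
v = np.linalg.solve(Z.T @ G @ Z, Z.T @ (c - G @ s))   # reduced system (2.4)
x = s + Z @ v                              # (2.2)
y = Y.T @ (G @ x - c)                      # Lagrange multipliers (2.5)
print(np.linalg.norm(G @ x - A @ y - c), np.linalg.norm(A.T @ x - b))
```

Both KKT residuals come out at roundoff level. In practice Y and Z are never formed explicitly, which is what the solves (2.6) and (2.7) below are for.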
In practice, when A is a large sparse matrix, the matrices Y and Z are usually
substantially dense and it is impracticable to store them explicitly. Instead, products
with Y and Z or their transposes are obtained by solving linear systems involving A.
For example the vector w = Y t + Z v could be computed by solving the linear system

   [A  V]^T w = ( t; v )        (2.6)

(writing ( p; q ) for the vector obtained by stacking p above q), by virtue of (2.1). Likewise solving the system

   [A  V] ( u_1; u_2 ) = t        (2.7)
provides the products u_1 = Y^T t and u_2 = Z^T t. These computations require an invertible representation of the matrix [A  V] to be available.
Solving systems involving A is usually a major cost with the null-space method. To
keep this cost as low as possible, it is preferable to choose the matrix V to be sparse.
Other choices (for example based on the QR factors of A, see [3]) usually involve significantly
more fill-in and computational expense. In particular it is attractive to choose the
columns of V to be unit vectors, using some form of pivoting to keep A_1 as well conditioned as possible. In this case, assuming that the row permutation has been incorporated into A, it is possible to write

   V = [ 0 ]    and    A = [ A_1 ]        (2.8)
       [ I ]               [ A_2 ]

where A_1 is an m × m nonsingular submatrix. Then (2.1) becomes

   [ A_1  0 ]^{-1}  =  [ A_1^{-1}         0 ]
   [ A_2  I ]          [ −A_2 A_1^{-1}    I ]

and provides an explicit expression for Y and Z. In particular we see that

   Z = [ −A_1^{-T} A_2^T ]        (2.9)
       [  I              ]
We refer to this choice of V as direct elimination as it corresponds to directly using the
first m variables to eliminate the constraints (see [3]). We shall adopt this choice of V
throughout the rest of the paper.
The reduced Hessian matrix Z T GZ is also needed for use in (2.3), and can be calculated
in a similar way. The method is to compute the vectors Z^T G Z e_k for k = 1, 2, ..., n − m, where the e_k are the columns of the unit matrix I_{n−m}. The computation is carried out from right to left by first computing the vector z_k = Z e_k by solving the system

   [A  V]^T z_k = ( 0; e_k ).        (2.10)

Then the product G z_k is computed, followed by the solution of

   [A  V] ( u_1; u_2 ) = G z_k.        (2.11)

The partition u_2 = Z^T G z_k is then column k of Z^T G Z as required. The lower triangle of Z^T G Z
is then used to calculate the Choleski factor L. A similar approach is essentially used
in an active set method for QP, in which the Choleski factor of Z T GZ is built up over
a sequence of iterations. (If indefinite QP problems are solved, it may be required to
solve KKT systems in which Z T GZ is indefinite. We note that such systems can also be
solved in a numerically stable way which preserves symmetry, see Higham [5] in regard
to the method of Bunch and Kaufmann [1]).
An advantage of the null-space approach is that we only need to have available a
subroutine for the matrix product Gv. Thus we can take full advantage of sparsity or
structure in G, without for example having to allow for fill-in as Gaussian elimination
would require. The approach is most convenient when Z T GZ is sufficiently small to
allow it to be stored as a dense matrix. In fact there is a close relationship between
the null-space method and a variant of Gaussian elimination, as we shall see in the next
section, and the matrix Z T GZ is the same submatrix in both methods. Thus it would
be equally easy (or difficult) to represent Z T GZ in a sparse matrix format with either
method.
To summarize the content of this section we can enumerate the steps implied by (2.2) through (2.5):

1. Calculate Z^T G Z as in (2.10) and (2.11).
2. Calculate s = Y b by a solve with [A V]^T as in (2.6) with t = b, v = 0.
3. Calculate g = c − G s, requiring a product with G.
4. Calculate u_2 = Z^T g by a solve with [A V] as in (2.7).
5. Solve Z^T G Z v = u_2 to determine v as in (2.4).
6. Calculate x = s + Z v by a solve with [A V]^T as in (2.6).
7. Calculate g = c − G x, requiring a product with G.
8. Calculate y = −Y^T g by a solve with [A V] as in (2.7), which also provides z = Z^T g.
When direct elimination based on (2.9) is used, we shall refer to this as Method 1. Step 1 requires 2(n − m) solves with either [A V] or [A V]^T and n − m products with G to set up the reduced Hessian matrix. The remaining steps require 4 solves and 2 products, plus a solve with Z^T G Z. In some circumstances these counts can be reduced. If b = 0 then steps 2 and 3 are not required. If the multiplier part y of the solution is not of interest then steps 7 and 8 are not needed.
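The eight steps above can be transcribed directly into numpy. This is an illustrative sketch only (not the authors' code); each np.linalg.solve stands for a solve with an invertible representation of [A V], here formed explicitly with the direct-elimination choice V = [0; I].

```python
import numpy as np

# Method 1: every product with Y, Z or their transposes is obtained by a
# solve with [A V] or [A V]^T, as in (2.6) and (2.7).
rng = np.random.default_rng(1)
n, m = 7, 3
A = rng.standard_normal((n, m))
G = rng.standard_normal((n, n)); G = G.T @ G + n * np.eye(n)
c = rng.standard_normal(n); b = rng.standard_normal(m)

AV = np.hstack([A, np.vstack([np.zeros((m, n - m)), np.eye(n - m)])])

M = np.empty((n - m, n - m))                 # step 1: reduced Hessian
for k in range(n - m):
    rhs = np.zeros(n); rhs[m + k] = 1.0
    z_k = np.linalg.solve(AV.T, rhs)         # (2.10): z_k = Z e_k
    M[:, k] = np.linalg.solve(AV, G @ z_k)[m:]   # (2.11): u_2 = Z'G z_k

s = np.linalg.solve(AV.T, np.concatenate([b, np.zeros(n - m)]))  # step 2
g = c - G @ s                                # step 3
u2 = np.linalg.solve(AV, g)[m:]              # step 4: u_2 = Z'g
v = np.linalg.solve(M, u2)                   # step 5
x = np.linalg.solve(AV.T, np.concatenate([b, v]))  # step 6: x = Yb + Zv
g = c - G @ x                                # step 7
w = np.linalg.solve(AV, g)                   # step 8
y, z = -w[:m], w[m:]                         # y = Y'(Gx - c), z = Z'g
print(np.linalg.norm(A.T @ x - b), np.linalg.norm(G @ x - A @ y - c))
```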
We now turn to the concerns about the numerical stability of the null-space method when A (and hence A_1) is ill-conditioned. In this case A is close to a rank deficient matrix, which has a null space of higher dimension. When we solve systems like (2.10) and (2.11), the matrix Z that we are implicitly using is badly determined. Therefore, because of round-off error, we effectively get a significantly different Z matrix each time we carry out a solve. Thus the computed reduced Hessian matrix Z^T G Z does not correspond to any one particular Z matrix. As we shall see in the rest of the paper, this can lead to solutions with significant residual error.
3 Using LU factors of A
In this section we consider the possibility that we can readily compute LU factors of A given by

   A = [ L_1 ] U        (3.1)
       [ L_2 ]

where L_1 is an m × m unit lower triangular matrix, L_2 is (n − m) × m, and U is m × m upper triangular. We can assume that a row permutation has been made which enables us to bound the elements of L_1 and L_2 by 1. As we shall see, these factors permit us to circumvent the difficulties caused by
ill-conditioning to a large extent. (Unfortunately, LU factors are not always available,
and some indication is given in Section 7 as to what might be done in this situation.)
We also describe how the steps in the null-space method are changed. Finally we explore
some connections with Gaussian elimination and other methods, which provide some
insight into the likelihood of growth in Z.
A key observation is that if LU factors of A are available, then it is possible to express Z in the alternative form

   Z = [ −L_1^{-T} L_2^T ]        (3.2)
       [  I              ]

in which the U U^{-1} factors arising from (2.9) and (3.1) are cancelled out. A minor disadvantage, compared to (2.9), is that L_2 is needed, which is likely to be less sparse than A_2 and also requires additional storage. However if A is ill-conditioned, this is manifested in U (but not usually L) being ill-conditioned, so that (3.2) enables Z to be defined in a way which is well-conditioned. In calculating the reduced Hessian matrix it is convenient to define

   L = [ L_1     ]        (3.3)
       [ L_2   I ]

and replace equations (2.10) and (2.11) by

   L^T z_k = ( 0; e_k )        (3.4)

and

   L ( u_1; u_2 ) = G z_k.        (3.5)
The steps of the resulting null-space method are as follows (using subscript 1 to denote the first m rows of a vector or matrix, and subscript 2 to denote the last n − m rows):

1. Calculate Z^T G Z as in (3.4) and (3.5).
2. Calculate s_1 = L_1^{-T} U^{-T} b and set s_2 = 0, so that s = Y b.
3. Calculate g = c − G s, requiring a product with G.
4. Calculate u_2 = g_2 − L_2 L_1^{-1} g_1 (that is, u_2 = Z^T g).
5. Solve Z^T G Z v = u_2 for v.
6. Calculate x_1 = s_1 − L_1^{-T} L_2^T v.
7. Calculate x_2 = v, so that x = s + Z v.
8. Calculate g = c − G x, requiring a product with G.
9. Calculate y = −U^{-1} L_1^{-1} g_1.
10. Calculate z = g_2 − L_2 L_1^{-1} g_1 (that is, z = Z^T g).
In the above, inverse operations involving L 1
and U are done by forward or backward
substitution. The method is referred to as Method 2 in what follows. (For comparability
with Method 1, we have also included the calculation of the reduced gradient z, although
this would not normally be required.) Note that all solves with the n × n matrix [A V] are replaced by solves with smaller m × m matrices. Also steps 1, 4, 6 and 10 use the alternative definition (3.2) of Z and so avoid a potentially ill-conditioned calculation with A (or A_1). We consider the numerical stability of both Method 1 and Method 2 in more
detail in the next section.
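The ten steps of Method 2 can be sketched in numpy as follows. This is illustrative only: `lu_tall` is our own helper (a production code would use stored sparse factors), and dense np.linalg.solve calls stand in for the forward and backward triangular substitutions used in practice.

```python
import numpy as np

def lu_tall(A):
    """Partial-pivoting LU of an n x m matrix (n >= m): A[perm] = L @ U."""
    n, m = A.shape
    U = A.astype(float).copy()
    L = np.zeros((n, m)); perm = np.arange(n)
    for j in range(m):
        p = j + np.argmax(np.abs(U[j:, j]))    # pivot over all n rows
        U[[j, p]] = U[[p, j]]; L[[j, p]] = L[[p, j]]; perm[[j, p]] = perm[[p, j]]
        L[j, j] = 1.0
        L[j+1:, j] = U[j+1:, j] / U[j, j]
        U[j+1:, j:] -= np.outer(L[j+1:, j], U[j, j:])
    return perm, L, np.triu(U[:m])

rng = np.random.default_rng(2)
n, m = 7, 3
A = rng.standard_normal((n, m))
G = rng.standard_normal((n, n)); G = G.T @ G + n * np.eye(n)
c = rng.standard_normal(n); b = rng.standard_normal(m)

perm, L, U = lu_tall(A)                     # (3.1): A = [L1; L2] U after permuting
A, G, c = A[perm], G[np.ix_(perm, perm)], c[perm]   # incorporate the permutation
L1, L2 = L[:m], L[m:]

Z = np.vstack([-np.linalg.solve(L1.T, L2.T), np.eye(n - m)])   # (3.2)
M = Z.T @ G @ Z                             # reduced Hessian, cf. (3.4)-(3.5)

s = np.zeros(n)
s[:m] = np.linalg.solve(L1.T, np.linalg.solve(U.T, b))   # step 2: s = Yb
g = c - G @ s                               # step 3
v = np.linalg.solve(M, Z.T @ g)             # steps 4-5
x = s + Z @ v                               # steps 6-7
g = c - G @ x                               # step 8
y = -np.linalg.solve(U, np.linalg.solve(L1, g[:m]))      # step 9
print(np.linalg.norm(A.T @ x - b), np.linalg.norm(G @ x - A @ y - c))
```

Note that only the m × m matrices L_1 and U are ever inverted, and Z is defined through the L factors alone, as in (3.2).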
In the rest of this section, we explore some connections between this method and some
variants of Gaussian elimination, and we examine the factored forms that are provided by
these methods. It is readily observed (but not well known) that there are block factors of
K corresponding to any null-space method in this general format. These are the factors
   K = [ A  V     ] [ Y^T G Y   Y^T G Z   I ] [ A^T     ]
       [        I ] [ Z^T G Y   Z^T G Z     ] [ V^T     ]
                    [ I                     ] [       I ]        (3.6)

(using blanks to denote a zero matrix). This result is readily verified by using the equation A Y^T + V Z^T = I derived from (2.1). This expression makes it clear that inverse representations of the matrices [A V] and Z^T G Z will be required to solve (1.1). However these factors are not directly useful as a method of solution as they also involve the matrices Y^T G Y and Y^T G Z whose computation we wish to avoid in a null-space method. Equation (3.6) also shows that K^{-1} will become large when either A or Z^T G Z is ill-conditioned, and we would expect the spectral condition number to behave like κ_K ≈ κ_A² κ_M, where κ_A and κ_M denote the spectral condition numbers of A and M = Z^T G Z.
When using direct elimination (2.8) we may partition K in the form

   K = [ G_11    G_12    A_1 ]
       [ G_21    G_22    A_2 ]
       [ A_1^T   A_2^T       ]
When A has LU factors (3.1) then it is readily verified that another way of factorizing K is given by

   [ G_11    G_12    A_1 ]   [ L_1          ] [ W_11     W_12      U ] [ L_1^T   L_2^T     ]
   [ G_21    G_22    A_2 ] = [ L_2    I     ] [ W_12^T   Z^T G Z     ] [         I         ]
   [ A_1^T   A_2^T       ]   [             I] [ U^T                  ] [                 I ]   (3.7)

where Z is defined by (3.2), G_1 = [G_11  G_12], W_11 = L_1^{-1} G_11 L_1^{-T} and W_12 = L_1^{-1} G_1 Z. Note that the matrix U occurs on
the reverse diagonal of the middle factor, but that no operations with U^{-1} are required in the calculation of the factors. Thus any ill-conditioning associated with U does not manifest itself until the factors are used in solving the KKT system (1.1). If there is no growth in Z then the backward error in (3.7) will be small, indicating the potential for a small residual solution of the KKT system. We show in Section 5 how this can come about. Another related observation is that if A is rank deficient, then the factors (3.6) do not exist (since the calculation of Y involves A_1^{-1} and hence U^{-1}), whereas the factors (3.7) can be calculated without difficulty.
The factorization (3.7) of K is closely related to some symmetry preserving variants
of Gaussian elimination. Let us start by eliminating A_2 and the sub-diagonal elements of A_1 by row operations. (As before we can assume that row pivoting has been used.) The outcome of these row operations is that

   [ L_1^{-1}              ] [ G_11    G_12    A_1 ]   [ L_1^{-1} G_11   L_1^{-1} G_12   U ]
   [ −L_2 L_1^{-1}    I    ] [ G_21    G_22    A_2 ] = [ Ḡ_21            Ḡ_22              ]
   [                     I ] [ A_1^T   A_2^T       ]   [ A_1^T           A_2^T             ]

where [Ḡ_21  Ḡ_22] = [G_21  G_22] − L_2 L_1^{-1} [G_11  G_12]. Note that these row operations are exactly those used by Gaussian elimination to form (3.1). To restore symmetry in the factors, we repeat the above procedure in transposed form, that is we make column operations on A_1^T and A_2^T, which gives rise to (3.7).
We can also interleave these row and column operations without affecting the final
result. If we pair up the first row and column operation, then the second row and
column operation, and so on, then we get the method of 'ba' pivots described by Forsgren
and Murray [4]. Thus these methods essentially share the same matrix factors. The
difference is that in the null-space method, Z T GZ is calculated by matrix solves with
A, as described in Section 2, whereas in these other methods it is obtained by row and
column operations on the matrix K.
This association with Gaussian elimination enables us to bound the growth in the
factors of K. The bound is attained for the critical case typified by a matrix with m = 4 and n = 6. Row operations with pivots in the (1,7), (2,8), (3,9) and (4,10) positions lead to a partially reduced matrix, and column operations with pivots in the (7,1), (8,2), (9,3) and (10,4) positions then give rise to the matrix which corresponds to the middle factor in (3.7). In general it is readily shown that when m < n, growth of 2^{2m} in the maximum modulus element of K can occur. For the null-space method based on (3.2), this example also illustrates the maximum possible growth of 2^{m−1} in the elements of Z. In practice however such growth is most unlikely and it is usual not to get any significant growth in Z.
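The worst-case growth in Z can be reproduced with a quick numpy check. This uses the classic worst-case pattern in which every multiplier equals −1 (an illustrative construction of ours, not necessarily the exact matrix used in the paper):

```python
import numpy as np

# With l_ij = -1 throughout, the elements of L1^{-T} (and hence of the top
# block of Z in (3.2)) double at every step and reach 2^(m-1).
m = 10
L1 = np.eye(m) - np.tril(np.ones((m, m)), -1)   # unit lower triangular, -1 below
L2 = -np.ones((1, m))                            # one extra row with entries -1
Z1 = -np.linalg.solve(L1.T, L2.T)                # top block of Z in (3.2)
print(np.abs(Z1).max())                          # 2**(m-1) = 512
```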
4 Numerical Stability of Method 1
In this and the next section we consider the effect of ill-conditioning in the matrix K on
the solutions obtained by null-space methods based on direct elimination. In particular
we are interested to see whether or not we can establish results comparable to those for
Gaussian elimination. We shall show that the forward error in x is not as severe as would
be predicted by the condition number of K. We also look at the residual errors in the
solution and show that Method 2 is very satisfactory in this respect, whereas Method 1
is not.
In order to prevent the details of the analysis from obscuring the insight that we
are trying to provide, we shall adopt the following simple convention. We imagine that we are solving a sequence of problems in which either κ_A or κ_M (the spectral condition numbers of A and M = Z^T G Z) is increasing without bound. We then use the notation O(h) to indicate a quantity that is bounded in norm by c‖h‖ on this sequence, where there exists an implied constant c that is independent of κ_A or κ_M, but may contain a modest dependence on n. Also we shall assume that the system is well scaled so that ‖A‖ = ‖G‖ = 1. This enables us for example to deduce that multiplication of an error bound O(ε) by A^{-1} causes the bound to be increased to O(κ_A ε). We also choose to assume that the KKT system models a situation in which the exact solution vectors x and y exist and are not unreasonably large in norm, that is x = O(1) and y = O(1). A similar assumption is needed in order to show that Gaussian elimination provides accurate residuals, so we cannot expect to dispense with this assumption. Sometimes it may be possible to argue that we are solving a physical problem which is known to have a well behaved solution.
Another assumption that we make is that the choice of the matrix V in (2.8) (and
hence the partitioning of A) is made using some form of pivoting. Now the exact solution
for Z is given by

   Z = L^{-T} [ 0 ]        (4.1)
              [ I ]

from (3.3), using the factors of A defined in (3.1). It follows that ‖Z‖ ≤ ‖L^{-1}‖ ≤ κ_L, where κ_L is the spectral condition number of L. Assuming that partial pivoting is used, so that |l_ij| ≤ 1, and that negligible growth occurs in L^{-1}, it then follows that negligible growth occurs in Z and we can assert that

   Z = O(1).        (4.2)
Another consequence of this assumption is that we are able to neglect terms of O(κ_L ε) relative to terms of O(κ_A ε) when assessing the propagation of errors for Method 2.
We shall now sketch some properties (Wilkinson [6]) of floating point arithmetic of relative precision ε. If a nonsingular system A x = b of n linear equations is solved by Gaussian elimination, the computed solution x̂ is the exact solution of a perturbed system (A + E) x̂ = b, where E is referred to as the backward error. E can be bounded by an expression of the form ρ φ(n) ε in which ρ measures the growth in A during the elimination and φ(n) is a modest quadratic in n. For ill-conditioned systems, and assuming that partial pivoting is used, growth is rare and can be ignored. Also this bound usually overstates the dependence on n, which is unlikely to be a dominant factor. Hence for the backward error we can write E = O(ε).
We can measure the accuracy of the solution either by the forward error x̂ − x, or by computing the residual r = b − A x̂. Using (A + E) x̂ = b we have x̂ − x = −A^{-1} E x̂, so that ‖x̂ − x‖ is bounded by a multiple of κ_A ε ‖x̂‖, where κ_A is some condition number of A. Since E = O(ε), and assuming that κ_A ε is much less than 1, it follows that

   x̂ = x + O(κ_A ε).        (4.4)

Likewise we can deduce that

   r = O(ε).        (4.5)

These bounds are likely to be realistic and tell us that for Gaussian elimination, ill-conditioning affects the forward error in x but not the residual r, as long as x̂ is of reasonable magnitude.

Wilkinson also gives expressions for the backward error in a scalar product and hence in the product s = b + A x. The computed product ŝ is the exact product of a system in which the relative perturbation in each element of b and A is no more than n ε, where n is the dimension of x. We can express this as

   ŝ = b + A x + O(ε)        (4.6)

if we make the assumption that b and A are O(1).
The first stage in a null-space calculation is the determination of Z^T G Z, which we denote by M. In Method 1, this is computed as in (2.10) and (2.11). In (2.10) a column z_k of the matrix Z is computed which, by applying (4.4), satisfies

   ẑ_k = z_k + O(κ_A ε),        (4.7)

where κ_A is the spectral condition number of A. The product with G introduces negligible error, and the solution of (2.11) together with (4.5) shows that

   [A  V] û = G ẑ_k + O(ε).

Multiplying by [A V]^{-1} and extracting the û_2 partition gives

   û_2 = Z^T G ẑ_k + O(ε) = Z^T G z_k + O(κ_A ε),

using (4.7) and then (4.2). Hence we have established that

   M̂ = M + O(κ_A ε).        (4.8)

The argument has been given in some detail as it is important to see why the error in M is O(κ_A ε) and not O(κ_A² ε). We also observe that M = O(1), and hence that M̂^{-1} = O(κ_M).
We now turn to the solution of the KKT system using Method 1. We shall assume that systems involving [A V] and M are solved in such a way that (4.5) applies. Using (4.6), and assuming that the computed quantities ŝ, ĝ, û and v̂ are O(1), the residual errors in the sequence of calculations are then

   [A  V]^T ŝ = ( b; 0 ) + O(ε)        (4.9)
   ĝ = c − G ŝ + O(ε)                  (4.10)
   [A  V] û = ĝ + O(ε)                 (4.11)
   M̂ v̂ = û_2 + O(ε)                    (4.12)
   [A  V]^T x̂ = ( b; v̂ ) + O(ε)        (4.13)
   ĝ = c − G x̂ + O(ε)                  (4.14)
   [A  V] ( −ŷ; ẑ ) = ĝ + O(ε).        (4.15)
These results, together with (4.8), may be combined to get the forward errors in the solution vectors x̂ and ŷ. Multiplying through equations (4.9) and (4.13) by [A V]^{-T} magnifies the errors by a factor κ_A (since we are assuming that the condition number of [A V] is O(κ_A)), giving

   ŝ = s + O(κ_A ε)   and   x̂ = Y b + Z v̂ + O(κ_A ε).

We can get a rather better bound from (4.11) and (4.15) by first multiplying through by [A V]^{-1}, using Z = O(1), to give

   û_2 = Z^T ĝ + O(ε)   and   ẑ = Z^T ĝ + O(ε)
from the second partition of the solution. However the first partition of (4.15) gives

   ŷ = −Y^T ĝ + O(κ_A ε).

Combining (4.8) and (4.12) gives

   v̂ = v + O(κ_A κ_M ε).

We can now chain through the forward errors, noting that a product with Z or Z^T does not magnify the error in a previously computed quantity (by virtue of (4.2)). However the product M̂^{-1} û_2 magnifies the error in û_2 by a factor κ_M, and the product Y^T ĝ magnifies the error in ĝ by a factor κ_A. The outcome is that

   x̂ = x + O(κ_A κ_M ε)        (4.22)

and

   ŷ = y + O(κ_A² κ_M ε).        (4.23)

As we would expect, the forward errors are affected by the condition numbers of A and M. However although the condition number of K is expected to be of the order κ_A² κ_M, we see that this factor only magnifies the error in the y part of the solution, with the x part being less badly affected.
When K is ill-conditioned we must necessarily expect that the forward errors are adversely affected. A more important question is to ask whether the solution satisfies the equations of the problem accurately. There are three measures of interest: the residuals

   r = b − A^T x   and   q = c + A y − G x

of the KKT system (1.1), and the reduced gradient z = Z^T g, where g = c − G x is the negative gradient vector at the solution. We note that the vector z is computed as a by-product of step 8 of Method 1.
If we compute r we obtain r̂ = b − A^T x̂ + O(ε) as in (4.6), and it follows from (4.13) and the definition of [A V] that A^T x̂ = b + O(ε). Hence

   r̂ = O(ε).        (4.24)

When computing q we obtain

   q̂ = c + A ŷ − G x̂ + O(ε) = V ẑ + O(ε)        (4.27)

from (4.14) and (4.15). Thus the accuracy of q̂ depends on that of ẑ. From the bound ẑ = Z^T ĝ + O(ε) and (4.14) it follows that

   ẑ = Z^T (c − G x̂) + O(ε).

(Notice that it is important not to use (4.22) here, which would give an unnecessary factor of κ_M.) Then (4.8), (4.12), (4.11) and (4.10) can be used, giving

   ẑ = O(κ_A ε).        (4.28)

Thus we are able to predict under our assumptions that the reduced gradient ẑ and the residual q̂ are adversely affected by ill-conditioning in A, but not by ill-conditioning in M. However the residual r̂ is unaffected by ill-conditioning either in A or M.
Simulations are described in Section 6 which indicate that these error bounds reliably
predict the actual effects of ill-conditioning. Method 1 is seen to be unsatisfactory in
that an accurate residual q cannot be obtained when A is ill-conditioned. We shall show
in the next section that Method 2 does not share this disadvantage.
The main results of this section and the next are summarised and discussed in Section
7.
5 Numerical Stability of Method 2
In this section we assess the behaviour of Method 2 in the presence of ill-conditioning
in K. Although we cannot expect any improvement for the forward errors, we are able
to show that Method 2 is able to give accurate residuals that are not affected by ill-
conditioning. The relationship between Method 2 and Gaussian elimination described
towards the end of Section 3 gives some hope of proving this result. However this is not
immediate because Method 2 does not make direct use of the factors (3.7) in the same
way that Gaussian elimination does.
A fundamental difficulty with the analysis of Method 2 is that we can deduce from (4.7) that

   Ẑ = Z + O(κ_A ε),        (5.1)

and this result cannot be improved if LU factors are available. To see this, we know that the computed factors of any square matrix A satisfy

   L̂ Û = A + O(ε)        (5.2)

when there is no growth in L̂ or Û. If L and U are the exact factors, it follows that

   L^{-1} L̂ Û U^{-1} = I + L^{-1} (L̂ Û − L U) U^{-1} = I + Q + R,

say, where Q is the strict lower triangular part of L^{-1}(L̂ Û − L U)U^{-1} and R is the upper triangular part. Because L^{-1} L̂ is unit lower triangular and Û U^{-1} is upper triangular we can deduce that

   L̂ = L (I + Q)   and   Û = (I + R) U.

Since Q involves an inverse operation with U we can expect that L̂ and L differ by O(κ_A ε). This result has been confirmed by computing the LU factors of a Hilbert matrix in single and double precision Fortran. On applying the result to our matrix A, it follows that (5.1) holds.
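This check is easy to reproduce, here in numpy rather than Fortran. The helper `lu_nopivot` is ours, and no pivoting is needed because the Hilbert matrix is symmetric positive definite; the L factors computed in single and double precision are seen to differ by far more than single-precision unit roundoff.

```python
import numpy as np

def lu_nopivot(A):
    """LU without pivoting, in the working precision of A."""
    n = len(A); U = A.copy(); L = np.eye(n, dtype=A.dtype)
    for j in range(n - 1):
        L[j+1:, j] = U[j+1:, j] / U[j, j]
        U[j+1:, j:] -= np.outer(L[j+1:, j], U[j, j:])
    return L, np.triu(U)

n = 5
H = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
L64, _ = lu_nopivot(H)                       # double precision factors
L32, _ = lu_nopivot(H.astype(np.float32))    # single precision factors

err = np.abs(L32.astype(np.float64) - L64).max()
print(err, np.finfo(np.float32).eps)  # err is much larger than unit roundoff
```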
Fortunately all is not lost because we are still able to compute a null-space matrix which accurately satisfies the equation A^T Z = 0. Let Ẑ denote the null-space matrix

   Ẑ = [ −L̂_1^{-T} L̂_2^T ]
       [  I                ]

obtained from L̂ in exact arithmetic. It follows that Ẑ is an exact null-space matrix for the product L̂ Û, and hence from (5.2) that

   A^T Ẑ = O(ε).        (5.3)

We also have

   Ẑ = Z + O(κ_A ε) = O(1)        (5.4)

as long as κ_A ε is much less than 1. Our analysis will express the errors that arise in Method 2 in terms of Ẑ rather than Z, and this enables us to avoid the κ_A factor in the residuals.
The first step in Method 2 is to compute M = Z^T G Z as in (3.4) and (3.5). In this section we denote by M̃ the value Ẑ^T G Ẑ computed from Ẑ in exact arithmetic, and use M̂ to denote the computed value of M̃. It readily follows, using results like (4.2), that

   M̂ = Ẑ^T G Ẑ + O(ε).        (5.5)
We now consider the solution of the KKT system using Method 2. As in equations (4.9) through (4.15) we assume that the computed quantities ŝ, ĝ, û and v̂ are O(1). Then the residual errors in the sequence of calculations are

   A_1^T ŝ_1 = b + O(ε)        (5.9)
   ĝ = c − G ŝ + O(ε)          (5.10)
   û_2 = Ẑ^T ĝ + O(ε)          (5.11)
   M̂ v̂ = û_2 + O(ε)            (5.12)
   x̂ = ŝ + Ẑ v̂ + O(ε).         (5.13)
It is readily seen from these equations that the forward errors will propagate in a similar
way to Method 1.
Turning to the residual errors, the computed value of the residual r is

   r̂ = b − A^T x̂ + O(ε) = b − A_1^T ŝ_1 − A^T Ẑ v̂ + O(ε) = O(ε)        (5.14)

from (5.13), (5.9) and (5.3). When computing q we obtain q̂ = ĝ + A ŷ + O(ε) as for Method 1. The residual of step 9 gives A_1 ŷ = −ĝ_1 + O(ε), so that q̂_1 = O(ε), and since A_2 = L_2 U it follows that

   q̂_2 = ĝ_2 − L_2 L_1^{-1} ĝ_1 + O(ε) = ẑ + O(ε).        (5.15)

Thus the accuracy of the residual q̂ depends on that of ẑ, as for Method 1. For ẑ we can use (5.13), (5.11) and (5.10) to get

   ẑ = û_2 − Ẑ^T G Ẑ v̂ + O(ε).

Now we can invoke (5.4), (5.5) and (5.12), giving

   ẑ = û_2 − M̂ v̂ + O(ε) = O(ε).        (5.16)

Thus we have established under our assumptions that all three measures of accuracy for the KKT system are O(ε) for Method 2 and are not affected by ill-conditioning in either A or M. These results are again supported by the simulations in the next section.
Figure 1. Condition numbers of K, A and L
6 Numerical Experiments
In order to check the predictions of Sections 4 and 5, some experiments have been carried out on artificially generated KKT systems. These experiments have been carried out in Matlab, for which the machine precision is ε = 2^{-52} ≈ 2.2 × 10^{-16}. They suggest that the upper bounds given by the error analysis accurately reflect the actual behaviour of an ill-conditioned
system. Another phenomenon that occurs when the ill-conditioning is very extreme is
also explained.
The KKT systems have been constructed in the following way. To make A ill-conditioned we have chosen it as the first m columns of the n × n Hilbert matrix; increasing m then provides a sequence of problems for which the condition number of A increases exponentially. Factors are calculated by the
Matlab routine lu which uses Gaussian Elimination with partial pivoting, and A is replaced
by PA. In the first instance the matrix G is generated by random numbers in
the range [−1, 1]. However to make M = Z^T G Z positive definite, a multiple of the unit matrix is added to the G_22 partition of G, chosen so that the smallest eigenvalue of M is changed to 10^{1−k} for some positive integer k. The assumptions of the analysis require
that the KKT system has a solution that is O(1). To achieve this, exact solutions x and
y are generated by random numbers in [−1, 1], and the right hand sides c and b are
calculated from (1.1). For each value of m, 10 runs are made with a different random
number seed and the statistics are averaged over these 10 runs.
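The construction above can be reproduced in a few lines. The following is a Python analogue of the Matlab experiments (the helper `make_kkt` and all names here are ours); it exploits the fact that, with Z = [−A_1^{-T} A_2^T; I], adding σI to the G_22 block adds exactly σI to M = Z^T G Z.

```python
import numpy as np

# A = first m columns of the n x n Hilbert matrix; G random symmetric with
# its G22 block shifted so that the smallest eigenvalue of M is 10**(1-k);
# c and b chosen so that a random (x, y) is the exact solution of (1.1).
def make_kkt(n, m, k, seed=0):
    rng = np.random.default_rng(seed)
    H = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
    A = H[:, :m]                     # condition number grows rapidly with m
    G = rng.uniform(-1, 1, (n, n)); G = (G + G.T) / 2
    Z = np.vstack([-np.linalg.solve(A[:m].T, A[m:].T), np.eye(n - m)])
    sigma = 10.0**(1 - k) - np.linalg.eigvalsh(Z.T @ G @ Z).min()
    G[m:, m:] += sigma * np.eye(n - m)        # shifts M by exactly sigma*I
    x = rng.uniform(-1, 1, n); y = rng.uniform(-1, 1, m)
    return A, G, G @ x - A @ y, A.T @ x, x, y  # c and b from (1.1)

n, m = 12, 6
A, G, c, b, x0, y0 = make_kkt(n, m, k=1)
Z = np.vstack([-np.linalg.solve(A[:m].T, A[m:].T), np.eye(n - m)])
min_eig = np.linalg.eigvalsh(Z.T @ G @ Z).min()
print(min_eig)   # 10**(1-k) = 1 by construction
```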
First of all we examine the effect of increasing the condition number of A whilst keeping M well-conditioned. To do this we increase m from 2 up to 10, whilst fixing k = 1. The resulting condition numbers of K, A and L are plotted in Figure 1. It can be seen that the slope of the unbroken line (κ_K) is about twice that of the dashed line (κ_A); since κ_M ≈ 1, this is consistent with the estimate κ_K ≈ κ_A² κ_M that we deduced in Section 3. The condition number of L (dotted line) shows negligible increase, showing that there is no growth in L^{-1}, thus enabling us to assert that Z = O(1). The levelling out of the κ_K graph for the largest values of m is due to round-off error corrupting the least eigenvalue of K.
Figure 2. Error growth vs. κ_A for Method 1        Figure 3. Error growth vs. κ_A for Method 2
The effect of the conditioning of A on the different types of error is illustrated in Figures 2 and 3. The forward error is shown by the two unbroken lines, the upper line being the error in y and the lower line being the error in x. The upper line has a slope of about 2 on the log-log scale, and the lower line has a slope of about 1, and both have an intercept with the y-axis of about 10^{-16}. This is precisely in accordance with (4.23) and (4.22). It can also be seen that both methods exhibit the same forward error. The computed value of the residual error r is shown by the dashed line, and both methods show the O(ε) behaviour as predicted by (4.24) and (5.14), with the increasing condition number having no effect.

The difference between Methods 1 and 2 is shown by the computed values of the residual q (dotted line) and the reduced gradient z. As we would expect from (4.27), these graphs are superimposed, and they clearly show the influence of κ_A on the error growth for Method 1, as predicted by (4.28).
Negligible error growth is observed for Method 2 as predicted by (5.16), except for an
increase in q for κ_A greater than about 10^9. This feature is explained later in the section.
Figure 4. Error growth vs. κ_M for Method 1        Figure 5. Error growth vs. κ_M for Method 2
We now turn to see the influence of ill-conditioning in M on the errors. To do this we fix m, and carry out a sequence of calculations with increasing values of k, which causes κ_M to increase exponentially. Each calculation is the average of ten runs as above. The results are illustrated in Figures 4 and 5, using the same key. The forward errors are again seen to be the same for both methods and they both have a slope of about 1 on the log-log scale, corresponding to the κ_M factor in (4.22) and (4.23). The upper line for the forward error in y lies about 10^5 units above that for the forward error in x, as the extra factor of κ_A in (4.23) would predict. The residual r is seen to be unaffected by the conditioning of M as above. The residual q and the reduced gradient z are also unaffected by κ_M, but the graphs for Method 1 lie above those for Method 2, due to the κ_A factor in (4.28). All these effects are in accordance with what the error analysis predicts.
the error analysis predicts.
To examine the anomalous behaviour of q in Figure 3 in more detail, we turn to
a sequence of more ill-conditioned test problems obtained by using the last m columns
of the Hilbert matrix to define A. The results for Method 2 are illustrated in Figure
6 and the anomalous behaviour (dotted line) is now very evident. The reason for this
becomes apparent when it is noticed that it sets in when the forward error in y, and hence the value of ŷ, becomes greater than unity. This possibility has been excluded in our error analysis by the assumption that ŷ = O(1). The anomalous behaviour sets in when κ_A² κ_M ε ≈ 1, that is κ_A ≈ (κ_M ε)^{−1/2}, or in this case κ_A ≈ 10^8, much as Figures 3 and 6 illustrate. For greater values of κ_A there is a term O(ŷ ε) in the expression for q̂, indicating that the error is of the form κ_A² κ_M ε². The fact that this part of the graph of q is parallel to the graph of the forward error in y supports this conclusion.
The above calculations have also been carried out using a Vandermonde matrix in
place of the Hilbert matrix and very similar results have been obtained.
Figure 6. Error growth for Method 2 for a more ill-conditioned matrix
7 Summary and Discussion
In this paper we have examined the effect of ill-conditioning on the solution of a KKT
system by null-space methods based on direct elimination. Such methods are important
because they are well suited to take advantage of sparsity in large systems. However they
have often been criticised for a lack of numerical stability, particularly when compared
to methods based on QR factors. We have studied two methods: Method 1 in which
an invertible representation of A in (2.8) is used to solve systems, and Method 2 in
which LU factors (3.1) of A are available. We have presented error analysis backed up by
numerical simulations which, under certain assumptions on growth, provide the following
conclusions:
• Both methods have the same forward error bounds, with ŷ − y of order O(κ(A)κ(M)ε).
• Both methods give accurate residuals if A is well conditioned, even if M is ill-conditioned; in particular this gives an accurate residual for Method 1.
• Method 2, but not Method 1, gives an accurate residual if A is ill-conditioned.
These conclusions do indicate that Method 1 is adversely affected by ill-conditioning in
A, even though the technique for solving systems involving A is able to provide accurate
residuals. The reasons for this are particularly interesting. For example one might expect
that when A is ill-conditioned, then A^{-1} would be large and we might therefore expect
from (2.1) that Z would be large. In fact we have seen that as long as V is chosen
suitably, then growth in Z is very unlikely (the argument is similar to that for Gaussian
elimination). Of course if V is badly chosen then Z can be large and this will cause
Stability of Null-Space Methods 21
significant error. One might also expect that because the forward error in computing Z
is necessarily of order O(κ(A)ε), it would follow that no null-space method could provide
accurate residuals.
The way forward, which is exploited in the analysis for Method 2, is that Method 2
determines a matrix Ẑ for which AẐ = O(ε). Thus, although the null-space is inevitably
badly determined when A is ill-conditioned, Method 2 fixes on one particular basis matrix
Ẑ that is well behaved. This basis is an exact basis for an O(ε) perturbation to A.
Method 2 is able to solve this perturbed problem accurately. On the other hand Method 1
essentially obtains a different approximation to Z for every solve with A. Thus the
computed reduced Hessian matrix ZᵀGZ does not correspond accurately to any one
particular Ẑ matrix.
In passing, it is interesting to remark that computing fixed factors and defining Z
from them also provides a stable approach, not so much because it avoids the
growth in Z (we have seen that this is rarely a problem), but because it also provides a
fixed null-space reference basis, which is an exact basis for an O(ε) perturbation to A.
In the context of quadratic programming, a common solution method for large sparse
systems is to use some sort of product form method (Gauss-Jordan, Bartels-Golub-Reid,
Forrest-Tomlin, etc.). It is not clear that such methods provide O(ε) solutions to the
systems involving A that are solved in Method 1 (although B-G-R may be stable in
this respect). However the main difficulty comes when the product form becomes too
unwieldy and is re-inverted. If A is ill-conditioned, the refactorization of A is likely to
determine a basis matrix Z that differs by O(κ(A)ε) from that defined by the old product
form. Thus the old reduced Hessian matrix ZᵀGZ would not correspond accurately to
that defined by the new Z matrix after re-inversion. The only recourse would be to
re-evaluate ZᵀGZ on re-inversion, which might be very expensive. Thus we do not see a
product form method on its own as being suitable. Our paper has shown that if a fixed
reference basis is generated then accurate residuals are possible. It is hoped to show how
this might be done in a subsequent paper by combining a product form method with
another method such as LU factorization.
Spectral Perturbation Bounds for Positive Definite Matrices

Abstract. Let H and H + ΔH be positive definite matrices. It was shown by Barlow and Demmel, and by Demmel and Veselić, that if one takes a componentwise approach one can prove much stronger bounds on λ_i(H)/λ_i(H + ΔH) and on the components of the eigenvectors of H and H + ΔH than by using the standard normwise perturbation theory. Here a unified approach is presented that improves on the results of Barlow, Demmel, and Veselić. It is also shown that the growth factor associated with the error bound on the components of the eigenvectors computed by Jacobi's method grows linearly (rather than exponentially) with the number of Jacobi iterations required for convergence.
1. Introduction. If the positive definite matrix H can be written as H = DAD,
where D is diagonal and A is much better conditioned than H, then the eigenvalues
and eigenvectors of H are determined to a high relative accuracy if the entries of the
matrix H are determined to a high relative accuracy. This was shown by Demmel and
Veselić [2], building on work of Barlow and Demmel [1]. In this paper we strengthen
some of the perturbation bounds in [2], and present a unified approach to proving
these results. We also show that, just as conjectured in [2], the growth factor that
arises in the bound on the accuracy of the components of the eigenvectors computed
by Jacobi's method is linear rather than exponential.
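The phenomenon just described is easy to observe numerically. The following sketch is ours, not the paper's: it uses an illustrative graded matrix H = DAD of our choosing, perturbs every entry of H by a componentwise relative ε, and checks that the eigenvalues move by roughly κ(A)ε rather than the κ(H)ε that normwise theory would allow.

```python
import numpy as np

rng = np.random.default_rng(0)

# A graded positive definite matrix H = D A D: D is strongly scaled,
# A is well conditioned, so kappa(H) is huge while kappa(A) is modest.
D = np.diag([1.0, 1e2, 1e4])
A = np.array([[1.0, 0.3, 0.2],
              [0.3, 1.0, 0.1],
              [0.2, 0.1, 1.0]])
H = D @ A @ D

# Componentwise relative perturbation of the entries of H of size eps.
eps = 1e-4
E = rng.uniform(-1.0, 1.0, size=(3, 3))
E = (E + E.T) / 2.0
dH = eps * E * H                      # |dH_ij| <= eps * |H_ij|

lam = np.sort(np.linalg.eigvalsh(H))
lam_p = np.sort(np.linalg.eigvalsh(H + dH))
max_rel_change = float(np.max(np.abs(lam_p - lam) / lam))

kappa_H = float(np.linalg.cond(H))    # huge: normwise theory is pessimistic
kappa_A = float(np.linalg.cond(A))    # small: componentwise theory predicts ~eps
```

Every eigenvalue, including the tiny ones, changes by only about ε here, even though a κ(H)ε bound would permit O(1) relative changes.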
We now give an outline of the paper and the main ideas in it and then define
the notation. In Section 2 we quickly reprove some of the eigenvalue and eigenvector
perturbation bounds from [2] in a perhaps more unified way and derive bounds on
the sensitivity of the eigenvalues to perturbations in any given entry of the matrix.
The main idea in this section is that the analysis is reduced to standard perturbation
theory if one can express additive perturbations as multiplicative perturbations. In
this respect our approach is similar to that of Eisenstat and Ipsen in [4], except
that they assume a multiplicative perturbation and then go on to derive bounds,
whereas we assume an additive perturbation, which we rewrite as a multiplicative
perturbation, before performing the analysis. Our results are the same as those in [4]
for eigenvalues, but not for eigenvectors. We briefly compare our approach to relative
perturbation bounds with those in [1, 2, 4] in Section 2.1. We also show that the
relative gap associated with an eigenvalue is a very good measure of the distance (in
the scaled norm) to the nearest matrix with a repeated eigenvalue.
In Section 3 we consider the components of the eigenvectors of a graded positive
definite matrix.¹ The key idea here is that if H is a graded positive definite matrix
and U is orthogonal such that H₁ = UᵀHU, then U has a "graded" structure² related to that of H and H₁.

* Department of Mathematics, College of William & Mary, Williamsburg, VA 23187. e-mail:
na.mathias@na-net.ornl.gov. This research was supported in part by National Science Foundation
grant DMS-9201586 and much of it was done while the author was visiting the Institute for Mathematics
and its Applications at the University of Minnesota.
¹ We say that the positive definite matrix H is graded if H = DAD where D is diagonal and A
is much better conditioned than H.
roy mathias
This fact can be systematically applied
obtain component-wise perturbation bounds for the eigenvectors of "graded" positive
definite matrices and component-wise bounds on the accuracy of the eigenvectors
computed by Jacobi's method. The fact that the matrix of eigenvectors is "graded"
has been observed in [1] and [2]; however, the results there were weaker than ours, and
these papers did not exploit this "graded" structure to any great extent. The basic
results on gradedness of eigenvectors are in Section 3.1 and the applications are in
Section 3.2.
Let M_{m,n} denote the space of m × n real matrices, and let M_n ≡ M_{n,n}. For a symmetric
matrix H we let λ_1(H) ≥ ··· ≥ λ_n(H) denote its eigenvalues, ordered
in decreasing order. For X ∈ M_{m,n} we let σ_1(X) ≥ σ_2(X) ≥ ··· denote
its singular values. The only norm that we use is the spectral norm (or 2-norm),
and we denote it by ‖·‖, i.e., ‖X‖ = σ_1(X). When we say that a matrix has unit
columns we mean that its columns have unit Euclidean norm.
For a matrix or vector X, |X| denotes its entry-wise absolute value. For two
matrices or vectors X and Y of the same dimensions we use min{X, Y} to denote
their entry-wise minimum, and we use X ≤ Y to mean that each entry of X is
smaller than the corresponding entry of Y. To differentiate between the component-wise
and positive semidefinite orderings we use A ⪯ B to mean that A and B are
symmetric and B − A is positive semidefinite. We use E to denote a matrix of ones
and e to denote a column vector of ones; the dimension will be apparent from the
context.
In studying the perturbation theory of eigenvectors we use the two notions of the
relative gap between the eigenvalues that were introduced in [1], but we use different
notation. Given a positive vector λ we define

relgap(λ, i) = min_{j≠i} |λ_i − λ_j| / (λ_i λ_j)^{1/2}

and

relgap̄(λ, i) = min_{j≠i} |λ_i − λ_j| / (λ_i + λ_j).

One similarity between the two relative gaps is that it is sufficient to take the minimum
over j = i ± 1 in either case. However, it is easy to see that relgap̄(λ, i) is at
most 1, while relgap(λ, i) can be arbitrarily large, and that if λ̃ is a small relative perturbation of λ
then, as we show at the end of the section, relgap̄(λ̃, i) is correspondingly close to relgap̄(λ, i).    (1.1)
Unfortunately the result for the perturbation to relgap is more complicated, and
this sometimes complicates analysis and results involving relgap. (See [2, proof of
Proposition 2.6] for such an instance.)
² By this we mean that both ‖D^{-1} U D_1‖ and ‖D_1^{-1} Uᵀ D‖ are not much larger than 1, where D
and D_1 are diagonal matrices such that the diagonal elements of D^{-1}HD^{-1} and D_1^{-1}H_1D_1^{-1} are all
1. We use quotes because this is not the usual definition of gradedness, but, none the less, it is related
to the gradedness of H and H_1.
perturbation bounds for positive definite matrices 3
It is not clear which relative gap one should use, or whether one should use both,
or perhaps the relative gap used in [4]. In [2] it was suggested that relgap(λ(H), i)
is the appropriate measure of the relative gap between λ_i(H) and the rest of the
eigenvalues of H, and that relgap(σ(G), i) is the appropriate measure of the relative
gap between σ_i(G) and the rest of the singular values of G. The eigenvector results in
Theorems 3.5 and 2.9 and Corollary 2.10 and the singular vector results in Theorem
2.8 suggest that this is not the case.
Luckily, one is most interested in the relative gap when it is small, and in this
case it does not make much difference which definition one chooses. For example, if
relgap(λ, i) ≤ 1 then one can check that

relgap(λ, i)/2 ≥ relgap̄(λ, i) ≥ relgap(λ, i)/(2 + relgap(λ, i)).    (1.2)

One can also check that the left hand inequality is always valid by a simple application
of the arithmetic-geometric mean inequality.
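The comparison between the two gaps can be made concrete with a small numerical sketch (ours, not the paper's). The two functions below encode our assumed reading of the definitions of relgap and relgap̄; the test vector of "eigenvalues" is illustrative.

```python
import numpy as np

# Assumed forms of the two relative gaps (our reading of the definitions):
#   relgap(lam, i)     = min_{j != i} |lam_i - lam_j| / sqrt(lam_i * lam_j)
#   relgap_bar(lam, i) = min_{j != i} |lam_i - lam_j| / (lam_i + lam_j)
def relgap(lam, i):
    lam = np.asarray(lam, dtype=float)
    mask = np.arange(lam.size) != i
    return float(np.min(np.abs(lam[i] - lam[mask]) / np.sqrt(lam[i] * lam[mask])))

def relgap_bar(lam, i):
    lam = np.asarray(lam, dtype=float)
    mask = np.arange(lam.size) != i
    return float(np.min(np.abs(lam[i] - lam[mask]) / (lam[i] + lam[mask])))

lam = [1e-6, 3e-6, 1.0, 2.0, 4e3]          # positive "eigenvalues"
gaps = [relgap(lam, i) for i in range(5)]
gaps_bar = [relgap_bar(lam, i) for i in range(5)]
```

Under these assumed definitions relgap̄ is always at most 1 and (per pair, by AM-GM) at most relgap/2, while relgap itself can exceed 1.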
Let us now prove (1.1). Define f on (0, 1)² by

f(·, ·) = … .

Then

relgap̄(λ̃, i) = … .

So in order to prove (1.1), it is sufficient to prove that for any λ_1, λ_2 for which
(1.3) holds, we must have (1.4). Without loss of generality λ_1 ≥ λ_2. The bound (1.3) implies that λ̃_1 ≥ λ̃_2. Since
λ̃_i = α_i λ_i, writing (1.6) as (1.7),
one sees that f(λ̃_1, λ̃_2), thought of as a function of α_1 and α_2, is minimized when
the α_i take their extreme values. Substituting these values for α_1 and α_2, and substituting the
expressions (1.5) and (1.6) in (1.4), we see that it is sufficient to prove

… ,

or equivalently,

… ,

which is equivalent to

… .    (1.9)

The left hand side of (1.9) is an increasing function of δ, and so in order to verify (1.9)
it is sufficient to verify it when δ is as large as possible.
Straightforward algebra shows that (1.9) holds with equality when one substitutes
this maximal value of δ. Thus we have verified (1.1). The bound (1.1) is a slight improvement
over [7, Proposition 3.3, equation (3.8)] in the case considered here.
2. A unified approach. In this section we give a unified approach to some of
the inequalities in [2] and [1]. This approach also allows one to bound the relative
perturbation in the eigenvalues of a positive definite matrix caused by a perturbation
in a particular entry.
The key idea in this section is to express the additive perturbation H + ΔH as
a multiplicative perturbation of H. Given a multiplicative perturbation of a matrix
it is quite natural that the perturbation of the eigenvalues and eigenvectors is also
multiplicative. It is then a small step from this multiplicative perturbation to the
component-wise perturbation bounds that we desire. There are two ways to write
ΔH as a multiplicative perturbation:

H + ΔH = (I + ΔH H^{-1}) H    (2.1)

and

H + ΔH = Y (I + Y^{-1} ΔH Y^{-T}) Yᵀ,  where Y Yᵀ = H.    (2.2)

(A possible choice of Y is H^{1/2}.) If one wants to prove eigenvalue
inequalities it seems that both representations give the same bounds. If one uses the
representation (2.1) then Ostrowski's Theorem [6, Theorem 4.5.9] yields the relation
between the eigenvalues of H and H + ΔH; this is the route taken in [4]. We shall
use (2.2) and the monotonicity principle (Theorem 2.1) because the proofs are slightly
quicker. Demmel and Veselić [2] and Barlow and Demmel [1] used the Courant-Fischer
min-max representation of the eigenvalues of a Hermitian matrix to derive similar
results.
In Jacobi's method one encounters positive definite matrices H = DAD where D is
diagonal and A can be much better conditioned than H. For this reason Demmel and
Veselić assumed the matrices DAD and D(A + ΔA)D, with D diagonal, to be the data in their work [2]. We consider a slightly
more general situation and just assume that H and H + ΔH are positive definite.
We consider this more general setting firstly to show that one can prove relative
perturbation bounds for positive definite matrices without assuming that the matrices
are graded, and secondly because the results are slightly cleaner in the general case.
(For example, the statement of Theorem 2.9, which deals with the general case, is
cleaner than the statement of Corollary 2.10, which deals with the special case where
D is diagonal.) Lemma 2.2 allows us to derive their results as corollaries of ours.
Theorem 2.1 (Monotonicity Principle [6, Corollary 4.3.3]). Let A, B ∈ M_n be symmetric. If A ⪯ B then λ_i(A) ≤ λ_i(B), i = 1, …, n.
The following lemma will be useful in applying our general results in special situations.
Lemma 2.2. Let H be positive definite and let ΔH be arbitrary. Let Y ∈ M_n be
such that Y Yᵀ = H. Then

‖Y^{-1} ΔH Y^{-T}‖ = ‖H^{-1/2} ΔH H^{-1/2}‖.

Furthermore, if H = DAD and ΔH = D ΔA D then

‖H^{-1/2} ΔH H^{-1/2}‖ = ‖A^{-1/2} ΔA A^{-1/2}‖ ≤ ‖A^{-1}‖ ‖ΔA‖.

Proof. Since Y Yᵀ = H = H^{1/2}H^{1/2},
there must be an orthogonal matrix Q such that Y = H^{1/2}Q. Thus

‖Y^{-1} ΔH Y^{-T}‖ = ‖Qᵀ H^{-1/2} ΔH H^{-1/2} Q‖ = ‖H^{-1/2} ΔH H^{-1/2}‖.

For the second part of the lemma take Y = D A^{1/2} and apply the first part. Then we
have

‖H^{-1/2} ΔH H^{-1/2}‖ = ‖A^{-1/2} ΔA A^{-1/2}‖ ≤ ‖A^{-1}‖ ‖ΔA‖,

as required. □
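The first part of the lemma, that the scaled norm does not depend on which factor Y of H is used, can be sanity-checked numerically. The sketch below is ours (random test matrices); it compares the Cholesky factor with the symmetric square root.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
M = rng.standard_normal((n, n))
H = M @ M.T + n * np.eye(n)            # positive definite
dH = rng.standard_normal((n, n))
dH = (dH + dH.T) / 2.0                 # arbitrary symmetric perturbation

# Two different Y with Y Y^T = H: the Cholesky factor and the symmetric
# square root H^{1/2}; by the lemma both give the same scaled norm.
L = np.linalg.cholesky(H)
w, V = np.linalg.eigh(H)
Hsqrt = V @ np.diag(np.sqrt(w)) @ V.T

def scaled_norm(Y):
    Yi = np.linalg.inv(Y)
    return float(np.linalg.norm(Yi @ dH @ Yi.T, 2))

eta_chol = scaled_norm(L)
eta_sqrt = scaled_norm(Hsqrt)
```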
Note that if D is diagonal, as it will be in applications, then using
the notation of Lemma 2.2 we have η ≡ ‖H^{-1/2} ΔH H^{-1/2}‖ = ‖A^{-1/2} ΔA A^{-1/2}‖.
Our bounds are in terms of η while those of Demmel and Veselić in [2] are in terms
of the larger quantity ‖A^{-1}‖ ‖ΔA‖. They assumed that the diagonal elements of A
are all 1. This is not always necessary, though it is a good choice of A in that it
approximately minimizes κ(A). We only assume that the diagonal elements
of A are 1 when it is necessary.
2.1. Eigenvalues and Singular Values. Our main eigenvalue perturbation
theorem is
Theorem 2.3. Let H, H + ΔH ∈ M_n be positive definite and let
η = ‖H^{-1/2} ΔH H^{-1/2}‖.
Then

(1 − η) λ_i(H) ≤ λ_i(H + ΔH) ≤ (1 + η) λ_i(H), i = 1, …, n.    (2.3)

Proof. Write H + ΔH = H^{1/2}(I + E)H^{1/2}, where E = H^{-1/2} ΔH H^{-1/2}. Since
−ηI ⪯ E ⪯ ηI,
we have

(1 − η) H ⪯ H + ΔH ⪯ (1 + η) H.

The monotonicity principle (Theorem 2.1) now gives the required bounds. □
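Theorem 2.3 is non-asymptotic and easy to test. The following sketch is ours (random test matrices): it computes η and verifies the two-sided eigenvalue ratio bound.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
M = rng.standard_normal((n, n))
H = M @ M.T + np.eye(n)                 # positive definite, lambda_min >= 1
dH = 0.1 * rng.standard_normal((n, n))
dH = (dH + dH.T) / 2.0

w, V = np.linalg.eigh(H)
H_inv_sqrt = V @ np.diag(w ** -0.5) @ V.T
eta = float(np.linalg.norm(H_inv_sqrt @ dH @ H_inv_sqrt, 2))

lam = np.sort(np.linalg.eigvalsh(H))
lam_p = np.sort(np.linalg.eigvalsh(H + dH))
ratios = lam_p / lam                    # must lie in [1 - eta, 1 + eta]
```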
Using the second part of Lemma 2.2 we obtain a result that is essentially the
same as [2, Theorem 2.3]:
Theorem 2.4. Let H = DAD and H + ΔH = D(A + ΔA)D ∈ M_n be positive definite,
assume that D is diagonal, and let η = ‖A^{-1/2} ΔA A^{-1/2}‖. Then

(1 − η) λ_i(H) ≤ λ_i(H + ΔH) ≤ (1 + η) λ_i(H), i = 1, …, n.    (2.4)
As another corollary of the monotonicity principle we have a useful relation between
the diagonal elements of a positive definite matrix and its eigenvalues [2, Proposition
2.10]:
Corollary 2.5. Let H = DAD ∈ M_n be a positive definite matrix and assume
that D is diagonal, that the main diagonal entries of A are all 1, and that the main
diagonal entries of H are ordered in decreasing order. Then

λ_n(A) h_{ii} ≤ λ_i(H) ≤ λ_1(A) h_{ii}, i = 1, …, n.

Proof. Since λ_n(A)I ⪯ A ⪯ λ_1(A)I it follows that λ_n(A)D² ⪯ DAD ⪯ λ_1(A)D².
The matrix D² is diagonal so its eigenvalues are its diagonal elements, and these are
h_{11} ≥ ··· ≥ h_{nn}. The result now follows from the monotonicity principle. □
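Corollary 2.5 can be checked directly. This sketch is ours; the unit-diagonal A is kept diagonally dominant so that it is guaranteed positive definite, and the grading d is sorted so that the diagonal of H is decreasing, as the corollary requires.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
S = rng.uniform(-1.0, 1.0, (n, n))
S = (S + S.T) / 2.0
np.fill_diagonal(S, 0.0)
A = np.eye(n) + 0.15 * S                      # unit diagonal, diagonally dominant
d = np.array([1e2, 10.0, 1.0, 0.1, 1e-2])     # decreasing, so diag(H) is decreasing
H = np.diag(d) @ A @ np.diag(d)

lam_A = np.linalg.eigvalsh(A)                 # ascending
lam_H = np.sort(np.linalg.eigvalsh(H))[::-1]  # descending, matching h_11 >= ... >= h_nn
h = np.diag(H)                                # h_ii = d_i^2, decreasing
```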
One would expect that the eigenvalues of H are more sensitive to perturbations in
some entries of H and less sensitive to perturbations in others. Stating the bound in
terms of η = ‖A^{-1/2} ΔA A^{-1/2}‖ allows one to derive stronger bounds on the sensitivity
of the eigenvalues of H to a perturbation in any one of the entries (or two corresponding
off-diagonal entries) of H than if we had replaced η by ‖A^{-1}‖ ‖ΔA‖.
Let us assume the notation of the theorem, and let e_i denote the unit n-vector with
ith component equal to 1. Suppose that ΔA = ε e_j e_jᵀ, that is, a relative perturbation
of ε in the jth main diagonal entry. Then η = |ε| (A^{-1})_{jj} and so

1 − |ε| (A^{-1})_{jj} ≤ λ_i(H + ΔH)/λ_i(H) ≤ 1 + |ε| (A^{-1})_{jj}.    (2.5)

In fact, we can say more. If ε > 0 then ΔH ⪰ 0 and so, from the monotonicity
principle, we know that λ_i(H + ΔH) ≥ λ_i(H), so the lower bound in (2.5) can
be taken as 1, and vice versa if ε < 0. If (A^{-1})_{jj} ≪ ‖A^{-1}‖, as is quite possible for
some values of j, then the bound (2.5) is much better than (2.3) with η replaced by
‖A^{-1}‖ ‖ΔA‖. Suppose instead that ΔA = ε(e_i e_jᵀ + e_j e_iᵀ), a perturbation in entries ij and ji. Then

η = |ε| ( |(A^{-1})_{ij}| + ((A^{-1})_{ii} (A^{-1})_{jj})^{1/2} ) ≤ 2 |ε| ((A^{-1})_{ii} (A^{-1})_{jj})^{1/2},

and so (2.5) holds with this value of η.
One may hope to prove a bound with |(A^{-1})_{ij}| instead of ((A^{-1})_{ii}(A^{-1})_{jj})^{1/2}. To
see that such a bound is not possible consider the case A = I. Then the off-diagonal
elements of A^{-1} are 0, but clearly perturbing an off-diagonal element of A does change
the eigenvalues of DAD.
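The single-entry bound just discussed can be checked concretely. The sketch below is ours (illustrative matrices): it perturbs one diagonal entry of A by a relative ε and compares the eigenvalue movement of H = DAD both with |ε|(A^{-1})_{jj} and with the cruder quantity |ε|‖A^{-1}‖.

```python
import numpy as np

D = np.diag([1.0, 10.0, 100.0])
A = np.array([[1.0, 0.4, 0.0],
              [0.4, 1.0, 0.3],
              [0.0, 0.3, 1.0]])
j, eps = 2, 1e-5

dA = np.zeros((3, 3))
dA[j, j] = eps                                 # relative perturbation of entry (j, j)
H = D @ A @ D
Hp = D @ (A + dA) @ D

lam = np.sort(np.linalg.eigvalsh(H))
lam_p = np.sort(np.linalg.eigvalsh(Hp))
max_rel = float(np.max(np.abs(lam_p - lam) / lam))

bound = eps * float(np.linalg.inv(A)[j, j])    # eta = |eps| (A^{-1})_jj
crude = eps * float(np.linalg.norm(np.linalg.inv(A), 2))   # |eps| ||A^{-1}||
```

For this j the entry (A^{-1})_{jj} is noticeably smaller than ‖A^{-1}‖, so the refined bound is the sharper of the two.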
One can obtain similar bounds on the perturbation of the eigenvectors, singular
values and singular vectors caused by a perturbation in one of the elements of the
matrix. In the case of eigenvectors and singular vectors one can obtain norm-wise and
component-wise bounds. The bounds for singular values and singular vectors involve
a row of the inverse of B (assuming B is of full rank) rather than just one element
of the inverse (or pseudo-inverse).
2.2. Eigenvectors and singular vectors. Now let us see how this approach
gives norm-wise perturbation bounds for the eigenvectors of a graded positive definite
matrix in terms of the relative gap between the eigenvalues. Let H be positive definite.
Let U be an orthogonal matrix with jth column an eigenvector of H corresponding
to λ_j(H) and let Λ be a diagonal matrix with ii element λ_i(H). Then
H = UΛUᵀ and, since (UΛ^{1/2})(UΛ^{1/2})ᵀ = H, the first part of Lemma
2.2 implies that

‖Λ^{-1/2} Uᵀ ΔH U Λ^{-1/2}‖ = ‖H^{-1/2} ΔH H^{-1/2}‖.    (2.7)

Let ū be an eigenvector of Λ + Uᵀ ΔH U. Then ũ = Uū is an eigenvector of H + ΔH.
The vector u = Ue_j is an eigenvector of H and so the norm-wise difference between
u and ũ is

‖u − ũ‖ = ‖U(e_j − ū)‖ = ‖e_j − ū‖.    (2.8)

So to show that ũ can be chosen such that ‖u − ũ‖ is small we must show that ū can
be chosen to be close to e_j. We do this in Lemma 2.6, which follows easily from the
standard perturbation theory given in [5, pp. 345-6].
We have used the fact that U is orthogonal in (2.8), and hence has norm 1, to
obtain a norm-wise bound on u − ũ. In Section 3.2 we use the component-wise bounds
on U to derive a component-wise bound on u − ũ.
Lemma 2.6. Let Λ = diag(λ_1, …, λ_n) have diagonal elements ordered in decreasing
order, and assume that λ_{j+1} < λ_j < λ_{j−1}. Let X be a symmetric matrix and let
H(ε) = Λ + εX. Then for ε sufficiently small λ_j(ε) is distinct, and one
can choose ū(ε) to be an eigenvector of H(ε) such that (2.10) and, consequently, (2.11) hold.

If we take εX = Δ ≡ Uᵀ ΔH U in Lemma 2.6 then one can see that the coefficient of ε
on the right hand side of (2.11) is bounded by a sum over k ≠ j of the relevant
elements of Δ. Substituting η for the scaled norm of Δ, using (2.7), we get the first-order bound

η / relgap(λ, j).

From (2.8) it follows that we have the same bound on ‖u − ũ‖.
We summarize the argument in the following theorem:
Theorem 2.7. Let H ∈ M_n be positive definite and let η = ‖H^{-1/2} ΔH H^{-1/2}‖. Let
H(ε) = H + εΔH, and assume that λ_j(0) is a simple eigenvalue
of H. Let u be a corresponding unit eigenvector of H. Then, for sufficiently small ε,
there is an eigenvector u(ε) of H(ε) corresponding to λ_j(ε) such that

‖u − u(ε)‖ ≤ εη / relgap(λ, j) + O(ε²).    (2.12)
As mentioned earlier, we may replace η by ‖A^{-1}‖ ‖ΔA‖. The resulting bound
improves that in [2, Theorem 2.5] by a factor of … .
Eisenstat and Ipsen also give a bound on the perturbation of eigenvectors which
involves a relative gap [4, Theorem 2.2]. Their bound relates the eigenvectors of H
and those of KHKᵀ, where K ∈ M_n is non-singular. It is an absolute bound, not
a first order bound. To obtain a bound of the form (2.12) from [4, Theorem 2.2] one
must find a bound of the form (2.13).
It is shown in [9] that if (2.13) is to hold for all n × n H and ΔH with H positive
definite then the constant c must depend on n and must grow like log n. That is, a
direct application of [4, Theorem 2.2] to the present situation does not yield (2.12).
However one can derive (2.12) using the idea behind the proof of [4, Theorem 2.2]
and a more careful argument [3].
Veselić and the author have used ideas similar to those in this section to prove a
non-asymptotic relative perturbation bound on the eigenvectors of a positive definite
matrix [13].
One can apply Lemma 2.6 to GGᵀ and GᵀG and thereby remove the factor
… from the bound on the perturbation of the right and left singular vectors
given in [2, Theorem 2.16]. Note that one must apply Lemma 2.6 directly in order to
obtain the strongest result. If one applies Theorem 2.7 to GᵀG the resulting bound
contains an extra factor … . Notice that the bound on the right and left
singular vectors is not the same: the bound on the right singular vectors is potentially
much smaller since relgap̄(σ², j) can be much larger than relgap̄(σ, j).
Theorem 2.8. Let G, G + ΔG ∈ M_{m,n} and let G^y be the pseudo-inverse of G.
Assume that G is of rank n and that ΔG = ΔG G^y G.
Let η = ‖ΔG G^y‖, let G(ε) = G + εΔG, and assume that σ_j(G) is simple.
Let u and v be left and right singular vectors of G corresponding to σ_j(G). Then for
sufficiently small ε, there are left and right singular vectors of G(ε), u(ε) and v(ε),
corresponding to σ_j(G(ε)) such that

‖v − v(ε)‖ ≤ 2εη / relgap̄(σ²(G), j) + O(ε²)

and a corresponding bound, with a different relative gap, for ‖u − u(ε)‖.

Proof. Let UΣVᵀ be a singular value decomposition of G; here U and V are
square and Σ is rectangular. First let us consider the right singular vectors, which
are the eigenvalues' eigenvectors of GᵀG. We may write

Gᵀ(ε)G(ε) = V(ΣᵀΣ + ε F)Vᵀ + O(ε²),

where F is symmetric and hence has norm at most 2‖ΔG G^y‖ in the appropriate scaled sense.
Now from (2.11) one can choose ū, a jth eigenvector of ΣᵀΣ + εF, that differs in
norm from e_j by at most

2εη / relgap̄(σ²(G), j)

to first order in ε. Hence, we have the same bound on ‖v − Vū‖ to first order in ε.
The vector Vū is an eigenvector (corresponding to the jth eigenvalue) of a matrix
which is equal to Gᵀ(ε)G(ε) up to O(ε²) terms. Since the jth singular value of G(ε)
is simple, it follows that Vū is a right singular vector of G(ε) up to O(ε²).
Now let us consider the left singular vectors. As above we can show that G(ε)G(ε)ᵀ
equals U(ΣΣᵀ + ε·…)Uᵀ up to O(ε²), where the perturbation has norm at most η in the scaled sense. So by (2.11) there is an eigenvector
of ΣΣᵀ + ε·… that differs from e_j in norm by at most

…

to first order in ε. In the same way as before we can now deduce that there is a vector
u(ε) within this distance of u. □
2.3. Distance to nearest ill-posed problem. It was shown in [1, Proposition
9] that relgap(λ(H), i) is approximately the distance from H to the nearest matrix
with a multiple ith eigenvalue in the case that H is a scaled diagonally dominant
symmetric matrix and distances are measured with respect to the grading of H. We
show that there is a similar result for positive definite matrices. In Theorem 2.9 we
show that relgap̄(λ(H), i) is exactly the distance to the nearest matrix with a repeated
ith eigenvalue when we use the norm ‖H^{-1/2} ΔH H^{-1/2}‖. We strengthen [1,
Proposition 9] in Corollary 2.10: our upper and lower bounds on the distance differ
by a factor of κ(A) while those in [1, Proposition 9] differ by a factor of about κ⁴(A),
a potentially large difference. Our bound is considerably simpler than that in [1], it
doesn't involve factors of n (although one could replace … by n) and its validity
doesn't depend on the value of the relative gap (the bound in [1] requires the relative
gap to be sufficiently small). Diagonal examples show that not every eigenvalue of H will have
the maximum sensitivity λ_n^{-1}(A), and so this difference in the upper and lower bounds
is to be expected. That is to say that one cannot hope to improve the bound (2.17)
by more than a factor of n. Our bound involves relgap̄ while the bound in
[1] involves relgap. All these reasons suggest that relgap̄^{-1}, and not relgap^{-1}, is the
right measure of the distance to the nearest problem with a repeated ith eigenvalue.
Theorem 2.9. Let H be positive definite. Let λ_i(H) be a simple eigenvalue of
H, so that relgap̄(λ(H), i) > 0. Let

δ = min{ ‖H^{-1/2} ΔH H^{-1/2}‖ : λ_i(H + ΔH) is a multiple eigenvalue of H + ΔH }.

Then δ = relgap̄(λ(H), i).

Proof. First we show that δ ≥ relgap̄(λ(H), i). Let ΔH be a perturbation
that attains the minimum in the definition of δ. Then ‖H^{-1/2} ΔH H^{-1/2}‖ = δ. Let
λ'_k = λ_k(H + ΔH), k = 1, …, n. By Theorem 2.3 we know that

(1 − δ) λ_k ≤ λ'_k ≤ (1 + δ) λ_k, k = 1, …, n.    (2.16)

Since H + ΔH has a multiple ith eigenvalue there is an index j ≠ i for which λ'_i = λ'_j. By
(2.16) we must have |λ_i − λ_j| ≤ δ(λ_i + λ_j), which implies

relgap̄(λ(H), i) ≤ δ.

Now we show that δ ≤ relgap̄(λ(H), i). Choose a value j such that
relgap̄(λ(H), i) = |λ_i − λ_j|/(λ_i + λ_j).
(One can easily show that this is possible.) Set

ΔH = ((λ_j − λ_i)/(λ_i + λ_j)) (λ_i x_i x_iᵀ − λ_j x_j x_jᵀ),

where x_i and x_j are unit eigenvectors of H corresponding to λ_i and λ_j. One can check
that λ_i(H + ΔH) = λ_j(H + ΔH). Because x_i and x_j are eigenvectors of H,
and because x_i and x_j are orthogonal, it follows that

‖H^{-1/2} ΔH H^{-1/2}‖ = |λ_i − λ_j|/(λ_i + λ_j) = relgap̄(λ(H), i),

as required. □
Corollary 2.10. Let H = DAD be positive definite and assume that
D is diagonal and that the main diagonal entries of A are 1. Let λ_i(H) be a simple
eigenvalue of H, so that relgap̄(λ(H), i) > 0. Let

δ_D = min{ ‖D^{-1} ΔH D^{-1}‖ : λ_i(H + ΔH) is a multiple eigenvalue of H + ΔH }.

Then

relgap̄(λ(H), i)/‖A^{-1}‖ ≤ δ_D ≤ ‖A‖ · relgap̄(λ(H), i).    (2.17)

Proof. Because

‖H^{-1/2} ΔH H^{-1/2}‖ ≤ ‖A^{-1}‖ ‖D^{-1} ΔH D^{-1}‖,

and because, by Lemma 2.2, we have

‖D^{-1} ΔH D^{-1}‖ ≤ ‖A‖ ‖H^{-1/2} ΔH H^{-1/2}‖,

it follows that any ΔH admissible in the definition of δ_D is bounded as in (2.17).
The result now follows from Theorem 2.9. □
3. Eigenvector Components. It was shown in [1] that the eigenvectors of a
scaled diagonally dominant matrix are scaled in the same way as the matrix. Essentially
the same proof yields [2, Proposition 2.8]. We strengthen these by a factor κ(A)
in Corollaries 3.2 and 3.3. In Section 3.2 we strengthen many of the results in [2]
by using the stronger results in Section 3.1, and show that the growth factor in the
error bound on the eigenvectors computed by Jacobi's method is linear rather than
exponential (Theorem 3.8). We also give improved component-wise bounds for the
perturbation of singular vectors (Theorems 3.6 and 3.7). It is essential that the D_i
be diagonal in this section as we are considering the components of the eigenvectors.
3.1. Gradedness of eigenvectors. Here we give some simple results on the
"graded" structure of an orthogonal matrix that transforms one graded positive definite
matrix into another, and use this to derive results on the eigenvectors of a graded
positive definite matrix.
Lemma 3.1. Let H_i = D_i A_i D_i, i = 0, 1, where the main diagonal
entries of the A_i are 1 and the D_i are diagonal, and let H_1 = Cᵀ H_0 C. Assume that H_0 ∈ M_n and H_1 ∈ M_m
are positive definite. Then

‖D_0 C D_1^{-1}‖ ≤ … .

Proof. It is easy to check that … .
Now, using the fact that the main diagonal entries of A_1 ∈ M_m are all 1 for the first
inequality, and the monotonicity principle (Theorem 2.1) for the second, we have

… .

Taking square roots and dividing gives the asserted bound. □
If the matrix C is orthogonal then H_0 = C H_1 Cᵀ,
so we have a companion bound, stated in the next result.
Corollary 3.2. Let H_i = D_i A_i D_i, i = 0, 1, where the main
diagonal entries of the A_i are 1 and the D_i are diagonal, and let H_1 = Uᵀ H_0 U. Assume that U is orthogonal.
Then

‖D_0 U D_1^{-1}‖ ≤ … and ‖D_0^{-1} U D_1‖ ≤ … .

This says that if an orthogonal matrix U transforms H_0 into H_1, and λ_n(A_0) and λ_m(A_1)
are not too small, then U has a "graded" structure.
In the special case that U is the matrix of eigenvectors of H = DAD we have
H_1 = Λ, D_1 = Λ^{1/2} and A_1 = I, and we obtain

‖D U Λ^{-1/2}‖ ≤ λ_n^{-1/2}(A), ‖D^{-1} U Λ^{1/2}‖ ≤ λ_1^{1/2}(A).    (3.4)
It is useful to have bounds on the individual entries of U, and we state such
bounds in (3.5)-(3.7), but note that they are actually weaker than the norm-wise
bounds in (3.4). The bounds (3.5)-(3.7) are stronger than those in [2, Proposition
2.8] and [1, Proposition 6], which have a factor κ^{3/2}(A) rather than κ^{1/2}(A) on the
right hand side. The result in [1] is however applicable to scaled diagonally dominant
symmetric matrices while our result is only for positive definite matrices.
Corollary 3.3. Let H = DAD be positive definite and assume that D
is diagonal and that the main diagonal entries of A are 1. Let U be an orthogonal
matrix such that Uᵀ H U = Λ is diagonal with diagonal entries λ_i. Then

|u_{ij}| ≤ κ^{1/2}(A) min{ d_i λ_j^{-1/2}, λ_j^{1/2} d_i^{-1} },    (3.5)

together with the analogous bounds (3.6) and (3.7) in which d_i is replaced, via
Corollary 2.5, by quantities involving the eigenvalues or diagonal entries of H;
the first inequality is stronger than the second and third.
Proof. The fact that |u_{ij}| is no larger than the first (second) quantity on the right
hand side of (3.5) follows from the first (second) inequality in (3.4). The remaining
inequalities can be derived from (3.5) using the relations between the eigenvalues of H
and its main diagonal entries in Corollary 2.5. This also shows that they are weaker
than (3.5). □
Another way to state the bound in (3.5) is

|U| ≤ κ^{1/2}(A) min{ D E Λ^{-1/2}, D^{-1} E Λ^{1/2} },    (3.8)

where the minimum is taken component-wise. Recall that E is the matrix of ones.
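The componentwise bound (3.5) can be observed numerically. The sketch below is ours (illustrative matrices), assuming the (3.5) form |u_ij| ≤ κ^{1/2}(A) min{d_i/√λ_j, √λ_j/d_i}; note how strongly it constrains the off-scale components of the eigenvector matrix.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5
S = rng.uniform(-1.0, 1.0, (n, n))
S = (S + S.T) / 2.0
np.fill_diagonal(S, 0.0)
A = np.eye(n) + 0.15 * S                      # unit diagonal, well conditioned
d = np.array([1e2, 10.0, 1.0, 0.1, 1e-2])
H = np.diag(d) @ A @ np.diag(d)

lam, U = np.linalg.eigh(H)                    # ascending; U[:, j] is an eigenvector
kappa = float(np.linalg.cond(A))

# (3.5)-type bound: |u_ij| <= sqrt(kappa(A)) * min(d_i/sqrt(lam_j), sqrt(lam_j)/d_i)
bound = np.sqrt(kappa) * np.minimum(np.outer(d, lam ** -0.5),
                                    np.outer(1.0 / d, np.sqrt(lam)))
```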
3.2. Applications of "graded" eigenvectors. Now we use the results in Section
3.1 to give another proof of the fact that the components of the eigenvectors of a
graded positive definite matrix are determined to a high relative accuracy, then show
that relgap̄(λ(H), i) is a good measure of the distance of a graded matrix from the
nearest matrix with a multiple ith eigenvalue, where the distance is measured in a
norm that respects that grading, and finally that Jacobi's method does indeed compute
the eigenvectors to this accuracy (improving on [2, Theorem 3.4]).
We now combine Lemma 2.6 with the general technique used in Section 2 to obtain
a lemma that will be useful in proving component-wise bounds for eigenvectors and
singular vectors.
Lemma 3.4. Let Λ = diag(λ_1, …, λ_n) have diagonal elements ordered in decreasing
order and assume that λ_j is simple. Let X be a symmetric matrix and let U be
an orthogonal matrix. Let H(ε) = … , and let u = Ue_j be an eigenvector
of H ≡ H(0) associated with λ_j. Let ū be the upper bound on the jth eigenvector,
that is,

ū_i = κ^{1/2}(A) min{ d_i λ_j^{-1/2}, λ_j^{1/2} d_i^{-1} }, i = 1, …, n.

Then, for ε sufficiently small, λ_j(ε) is simple, and one can choose u(ε) to
be a unit eigenvector of H(ε) corresponding to λ_j(ε) such that

|u(ε) − u| ≤ (ε · … / relgap̄(λ, j)) ū + O(ε²).

Proof. Since U is the matrix of eigenvectors of H, the bound (3.5) gives

|U| ≤ Ū, where Ū_{ik} = κ^{1/2}(A) min{ d_i λ_k^{-1/2}, λ_k^{1/2} d_i^{-1} }.    (3.10)

Note that the vector ū defined in the statement of the lemma is just the jth column
of the matrix Ū defined in (3.10). From Lemma 2.6 it follows that there is an
eigenvector ū(ε) such that
|ū(ε) − e_j| ≤ ε r + O(ε²),

where r is the vector whose kth element (k ≠ j) is given by the (k, j) element of the
perturbation divided by |λ_j − λ_k|, and r_j = 0. Let
u(ε) = Uū(ε). So

u(ε) − u = U(ū(ε) − e_j),

and we must now bound Ūr. The ith element of Ūr is

Σ_{k≠j} κ^{1/2}(A) min{ d_i λ_k^{-1/2}, λ_k^{1/2} d_i^{-1} } r_k ≤ … ,

and a manipulation of the minima shows that this sum is at most ū_i times the factor in
the statement of the lemma. For the final equality in that manipulation note that the quantity

κ^{1/2}(A) min{ d_i λ_j^{-1/2}, λ_j^{1/2} d_i^{-1} }

is now independent of k and is ū_i, as defined in the statement of this lemma. □
This result gives component-wise perturbation bounds for eigenvectors and singular
vectors as simple corollaries.
Theorem 3.5. Let H = DAD be positive definite, let H(ε) = H + εΔH, and let
η = ‖H^{-1/2} ΔH H^{-1/2}‖. Assume that λ_j is a simple eigenvalue
of H. Let u be a corresponding unit eigenvector of H. Let ū
be the upper bound on the jth unit eigenvector, that is,

ū_i = κ^{1/2}(A) min{ d_i λ_j^{-1/2}, λ_j^{1/2} d_i^{-1} }, i = 1, …, n.

Then, for sufficiently small ε, λ_j(ε) is simple and there is a unit eigenvector u(ε) of
H(ε) corresponding to λ_j(ε) such that

|u(ε) − u| ≤ (εη / relgap̄(λ, j)) ū + O(ε²).

Proof. Write

H(ε) = U(Λ + εΔ)Uᵀ,

where U is the matrix of eigenvectors of H and Δ = Uᵀ ΔH U. By (2.7) the scaled norm of
the perturbation is at most η. The asserted bound now follows from Lemma 3.4. □
Lemma 3.4 also yields a component-wise bound on singular vectors.
Theorem 3.6. Let G = BD ∈ M_{m,n} be of rank n, let G(ε) = G + εΔG, and let
η = ‖ΔG G^y‖. Assume that σ_j is simple
and that v is a corresponding unit right singular vector. Let v̄ be the upper bound on
the jth right unit singular vector, that is,

v̄_i = κ(B) min{ d_i σ_j^{-1}, σ_j d_i^{-1} }, i = 1, …, n.

Then, for sufficiently small ε, σ_j(ε) is simple and there is a unit right singular vector
v(ε) of G(ε) corresponding to σ_j(ε) such that

|v(ε) − v| ≤ (2εη / relgap̄(σ²(G), j)) v̄ + O(ε²).

Proof. Let G = UΣVᵀ, where U has orthonormal columns, Σ ∈ M_n is
positive diagonal and V ∈ M_n is orthogonal. We may write

Gᵀ(ε)G(ε) = V(Σ² + εF)Vᵀ + O(ε²),

where F = … and B^y is the pseudo-inverse of B. Note
that the scaled norm of F is at most 2η. Since the jth singular value of G is simple the corresponding singular
vector is differentiable and so in particular v(ε), the jth singular vector of G(ε) (and
eigenvector of G(ε)ᵀG(ε)), and ṽ, the jth eigenvector of V(Σ² + εF)Vᵀ, differ by
at most O(ε²). According to Lemma 3.4, we know that

|ṽ − v| ≤ (2εη / relgap̄(σ²(G), j)) v̄ + O(ε²),

and hence the bound on v(ε). □
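The graded structure of the right singular vectors that underlies this proof can itself be checked numerically. The sketch below is ours (random B with unit columns): since the right singular vectors of G = BD are the eigenvectors of D(BᵀB)D, the (3.5)-type bound applies with A = BᵀB, so κ^{1/2}(A) = κ(B) and λ_j = σ_j².

```python
import numpy as np

rng = np.random.default_rng(5)
m, n = 8, 5
B = rng.standard_normal((m, n))
B /= np.linalg.norm(B, axis=0)              # unit columns, so B^T B has unit diagonal
d = np.array([1e2, 10.0, 1.0, 0.1, 1e-2])
G = B @ np.diag(d)

U_, sig, Vt = np.linalg.svd(G, full_matrices=False)   # sig descending
V = Vt.T                                    # V[:, j]: right singular vector for sig[j]
kappa_B = float(np.linalg.cond(B))

# Componentwise bound: |v_ij| <= kappa(B) * min(d_i/sig_j, sig_j/d_i)
bound = kappa_B * np.minimum(np.outer(d, 1.0 / sig), np.outer(1.0 / d, sig))
```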
This improves [2, Proposition 2.20] in two ways. Firstly, our upper bound - v j is
smaller than that in [2] by a factor of about oe \Gamma1
n (B), and secondly, we have a factor
in the denominator while in [2] there is a factor oe 2
our bound
is smaller by a factor of about oe \Gamma2
(B). The latter difference arises because in [2] the
authors used the equivalent of Theorem 3.5 applied to G T G, whereas we use Lemma
3.4.
The quantity relgap (oe can be hard to deal with when one perturbs G, and
hence also its singular values. It would be more convenient to have relgap (oe;
the bound. It is easy to check that relgap (oe;
It is worth stating the stronger form of the inequality (3.11), as this is more natural
when G is the Cholesky factor of a positive definite H (as is the case in [12]). In this
case σ² ….
roy mathias
Because we have no component-wise bound on the left singular vectors of …,
we cannot get a component-wise bound on the difference between the left singular
vectors of BD and (B + ΔB)D.
We now give a result on component-wise perturbations of singular vectors. Our
bound is stronger than [2, Propositions 2.19 and 2.20] by a factor of about σ_n^{-3}(B) (our
upper bound v̄ is smaller than that in [2] by a factor of σ_n^{-2}(B), and the denominator
here contains a factor σ_n(B) while that in [2] contained σ_n^2(B)). We could give an
improved bound for eigenvectors also, but we restrict ourselves to the case of singular
vectors because that is what is important when one uses one-sided Jacobi to compute
eigenvectors of a positive definite matrix to high component-wise relative accuracy.
Theorem 3.7. Let G = BD ∈ M_{m,n} have rank n and assume that the diagonal
entries of D are positive and that B has unit columns. Choose i ∈ {1, …, n}
and let v be a unit right singular vector corresponding to σ_i(G). Let ΔG = ΔB·D and set
….
Then
…,
and there is a vector ṽ that is a right singular vector of G + ΔG such that
….
Proof. The statement that v̄ is an upper bound on v follows from (3.5). Let
G(t) = G + tΔG. The condition … ensures that σ_i(G(t)) is simple for t ∈ [0, 1],
so there is a differentiable v(t) that is a right singular vector of G(t) such
that v(0) = v and, from (3.12), we have the component-wise bound
|dv(t)/dt| ≤ ….
Write G(t) = B(t)D(t), where B(t) has unit columns and D(t) is positive diagonal. So
for a bound on |v − ṽ| we need only bound each of the quantities that
depend on t and then integrate the bound. Using the fact …,
one can show that for t ∈ [0, 1]
relgap(σ(G(t)); …) ≥ ….
One can check that
….
The condition … ensures that … is
necessarily less than 1. We use this in the final inequality in the display below.
Using (3.5) for the first inequality and bounds on σ_j(G(t)), σ_n(B(t)) and d_i(t) for the
subsequent inequalities, we have
….
Substituting these bounds into (3.14) gives
|dv(t)/dt| ≤ …,
which when integrated yields the asserted inequality. ∎
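Throughout these results the splitting G = BD, with D the diagonal matrix of column norms and B having unit columns, plays a central role. A minimal sketch of that splitting (the function name `column_scaling` is ours, purely for illustration):

```python
import numpy as np

def column_scaling(G):
    # Split G = B D, where D = diag of column norms and B has unit columns.
    # G is assumed to have full column rank, so every column norm is nonzero.
    d = np.linalg.norm(G, axis=0)   # column norms of G
    D = np.diag(d)
    B = G / d                       # B = G D^{-1}; broadcasting scales each column
    return B, D
```

The perturbation ΔG = ΔB·D in Theorem 3.7 respects this scaling: each column of the perturbation is small relative to the corresponding column of G, which is what makes component-wise relative bounds possible.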
In right-handed Jacobi one computes the singular values of G_0 ∈ M_n by generating
a sequence G_{i+1} = G_i J_i, where J_i is an orthogonal matrix chosen to orthogonalize two
columns of G_i. One stops when
(3.15)  ….
One can obtain the right singular vectors of G by accumulating the J_i. Demmel and
Veselić show in [2, Theorem 3.4] that when implemented in finite precision arithmetic
this algorithm gives the individual components of the eigenvectors to high accuracy
relative to their upper bounds (actually this is for two-sided Jacobi, but the proof is
essentially the same for one-sided Jacobi). However, their bound contains a factor for
which they say "linear growth is far more likely than exponential growth". In the
next result we show that the growth is indeed linear. One can prove an analogous
result for two-sided Jacobi applied to a positive definite matrix.
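As a concrete, hedged illustration of the procedure just described, here is a minimal one-sided Jacobi sketch. The cyclic pairing order, the stopping test |g_p^T g_q| ≤ tol·‖g_p‖‖g_q‖, and all names are our assumptions standing in for the paper's criterion (3.15), which is not recoverable from this text.

```python
import numpy as np

def one_sided_jacobi(G, tol=1e-12, max_sweeps=30):
    # Right-handed (one-sided) Jacobi: apply plane rotations J_i on the right
    # so that pairs of columns of G become orthogonal.  The singular values
    # are the final column norms; accumulating the rotations in V gives the
    # right singular vectors.
    G = np.array(G, dtype=float)
    n = G.shape[1]
    V = np.eye(n)
    for _ in range(max_sweeps):
        converged = True
        for p in range(n - 1):
            for q in range(p + 1, n):
                apq = G[:, p] @ G[:, q]
                app = G[:, p] @ G[:, p]
                aqq = G[:, q] @ G[:, q]
                # assumed stopping test: columns p, q already orthogonal enough
                if abs(apq) <= tol * np.sqrt(app * aqq):
                    continue
                converged = False
                # rotation that zeros the (p,q) entry of G^T G
                tau = (aqq - app) / (2.0 * apq)
                sgn = 1.0 if tau >= 0 else -1.0
                t = sgn / (abs(tau) + np.sqrt(1.0 + tau * tau))
                c = 1.0 / np.sqrt(1.0 + t * t)
                s = c * t
                J = np.array([[c, s], [-s, c]])
                G[:, [p, q]] = G[:, [p, q]] @ J
                V[:, [p, q]] = V[:, [p, q]] @ J
        if converged:
            break
    return np.linalg.norm(G, axis=0), V
```

The columns of the accumulated product V are the computed right singular vectors; Theorem 3.8 below bounds the component-wise error in exactly such a column of the product J_{0:M−1}.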
Let us denote the product J_i J_{i+1} ⋯ J_k by J_{i:k}. The key idea that allows us
to derive a growth factor that is linear in M rather than exponential in M is that
we bound J_{i:k} directly, rather than bounding it by |J_{i:k}| ≤ |J_i| |J_{i+1}| ⋯ |J_k| and then
bounding each of the terms on the right-hand side. This idea has been used profitably
in [11] also.
Theorem 3.8. Let G_i = B_i D_i ∈ M_n, where B_i has unit columns
and D_i is diagonal. Assume that
…,
where J_i is orthogonal and
….
Assume further that the columns of G_M are almost orthogonal in the sense that G_M
satisfies (3.15) with tolerance tol. Let
…,
and assume that …. Let ū be the computed column
of J_{0:M−1} corresponding to σ_j(G). Then there is a unit right singular vector u_T of G
corresponding to σ_j(G) such that, to first order in η, ε, and tol,
(3.16)  … σ_min^{-2} … relgap(σ(G); j) …,
where
… is the upper bound on u from (3.5).
The bound (3.16) is a first-order bound. The proof below would also yield a
bound that takes into account all the higher-order terms, but the resulting inequality
would be much more complicated and probably not any more useful.
If the G_i and J_i arise from right-handed Jacobi applied to G in finite precision
arithmetic with precision ε, then one can take η as in [2, Theorem 4.2 and the
ensuing discussion].
Let us compare our bound with
(3.17)  … relgap(σ(G); j) … σ_min …,
which is essentially the bound on the computed right singular vectors from [2, Theorem
4.4], stated in our notation. Our bound is stronger in several respects. The term
q(M, n) is a growth factor that is exponential in M, while our bound is linear in M.
As we have mentioned earlier, the upper bound vector in (3.17) is larger than ours by
a factor of about σ_n^{-2}(B), which could be quite significant. Also, we have two terms,
one in σ_min^{-2} and the other in (σ_min · relgap(σ(G); j))^{-1}; … these quantities are less
than (σ_min^2 · relgap(σ(G); j))^{-1}, which occurs in (3.17).
A weakness of both bounds is that they contain the factor σ_min^{-1} (defined in the
statement of the theorem) rather than σ_n^{-1}(B). It has been observed experimentally
[2, Section 7.4] that σ_min/σ_n(B) is generally 1 or close to 1, but no rigorous proof of
this fact is known. Mascarenhas has shown that the ratio can be as large as n/4 [8].
One can also see that for a given ε we can take tol, the stopping tolerance, as large
as ε·σ_min^{-1} without significantly increasing the right-hand side of (3.16). Typically, it
is suggested that one take tol to be a modest multiple of ε when one wants to compute
the eigenvectors or eigenvalues to high relative accuracy [2]. Thus this larger value of
tol may be useful in practice to save a little computation through earlier termination.
The rest of the paper is devoted to the rather lengthy proof of this theorem.
Proof. The outline of the proof is as follows. First we will bound |u − ū|, where u
is the value of the jth column of J_{0:M−1} computed in exact arithmetic. Next we will
bound |u − u_T|, where u_T (T is for true) is an exact singular vector of G associated
with σ_j(G). The inequality (3.16) follows by combining these two bounds. Throughout
we will use the facts that σ_j(G_M) ≥ σ_min and ….
Now consider |u − ū|. This bound depends only on the scaling of the J_{i:k} and
is independent of relgap(σ(G); j). If X, Y ∈ M_n are multiplied in floating point
arithmetic with precision ε, the result is XY + E, where |E| ≤ nε|X||Y|.
Using this fact one can show by induction that
(3.18)  …
where … is the error in multiplying J_{0:i−1} and J_i, and …
is the first-order effect of this error in the computed value of J_{0:M}. Taking
absolute values in (3.18) gives the component-wise error bound
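The floating-point model invoked here, fl(XY) = XY + E with |E| ≤ nε|X||Y|, is the standard one and can be checked empirically. The sketch below is only an illustration of that model (single precision against a double-precision reference); it is not part of the proof, and the choice of n and random data is ours.

```python
import numpy as np

# Empirical check of the model fl(XY) = XY + E, |E| <= n * eps * |X||Y|,
# using float32 products measured against a float64 reference.
rng = np.random.default_rng(1)
n = 50
X = rng.standard_normal((n, n)).astype(np.float32)
Y = rng.standard_normal((n, n)).astype(np.float32)

exact = X.astype(np.float64) @ Y.astype(np.float64)   # reference product
computed = (X @ Y).astype(np.float64)                 # float32 product
eps = np.finfo(np.float32).eps
bound = n * eps * (np.abs(X).astype(np.float64) @ np.abs(Y).astype(np.float64))
# Every entry of the error should sit inside the component-wise bound.
ok = np.all(np.abs(computed - exact) <= bound)
```

In practice the observed entry-wise error is far below the bound, which is pessimistic by roughly a factor of √n for random data.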
Now
(D_0 |J …| …) ≤ nσ(…) ≤ nσ(…) ≤ nσ(…) ≤ nσ_min^{-1} E.
Recall that E denotes the matrix of ones. We have used the first term in (3.8) and the
fact that, up to first order, J_{i+1:M−1} diagonalizes G_i^T G_i, for the first inequality,
…, and (3.2) twice for the third. Since G_M has orthogonal columns, up to O(tol), and the
singular values of G_M are the same as those of G to O(η), it follows that D_M = Σ, at
least to first order. So, multiplying by D_0 and Σ^{-1}, we have, to first order,
… ≤ … σ_min^{-1} εE.
In the same way, we obtain the first-order bound
… ≤ … σ_min^{-1} εE.
These two bounds can be combined to give
… min(…, …) …,
where the minimum is taken component-wise. (Note that D_0 = D.) The jth column
of this is the inequality we desire:
|u − ū| ≤ ….
This completes the first step.
Now let us bound the error between u and a singular vector of G. If the columns of
G_M were orthogonal then in particular e_j would be a right singular vector associated
with σ_j(G_M). If in addition all the ΔG_i were 0 then u = J_{0:M−1} e_j
would be a right singular vector associated with σ_j(G). Neither of these
hypotheses is true, though in each case they are 'almost true', and so u is close to
being a singular vector of G_0. We now bound the difference.
First we will consider the fact that the columns of G_M are not exactly orthogonal.
Write
(3.20)  ….
Then each entry of A is at most tol in absolute value and so ‖A‖ ≤ n·tol. The equation
(3.20) implies that there is an orthogonal matrix Q such that
….
One can check that
…;
we will use this bound later.
Now consider the effect of the ΔG_i. It is easy to check, by induction for example,
that
…,
where
….
Now, using the assumption ‖ΔG_i‖ ≤ … for the first inequality and (3.2) for the
second, we have
….
Together with (3.21) this yields
….
Now we will combine these two results to show that G_M + ΔG has a right singular
vector close to e_j, and hence that G_0 = … J_{0:M−1} has a right singular vector
close to u = J_{0:M−1} e_j. The right singular vectors of G_M + ΔG are the same as those
of Q(G_M + ΔG), where Q is the orthogonal matrix introduced after equation (3.20).
Also,
….
The jth right singular vector of D_M is e_j and
….
So by Theorem 3.7 there is a right singular vector v of G_M + ΔG corresponding to its
jth singular value such that
(3.22)  …,
where
….
Now let us drop the second-order terms in (3.22) and v̄. The term … is
O(tol)σ_1(B_M), so we may drop all first-order terms in v̄ and in the ratio in (3.22).
In particular we may replace σ_n(B_M), … all by 1. We do not
assume that relgap(σ(G_M); j) is large with respect to η and …, so we cannot replace
relgap(σ(G_M); j) by …. However, as was shown at the end of
the introduction, we have
relgap(σ(G_M); j) ≥ … σ_min …
and hence
… relgap(σ(G_M); j) ….
Making these substitutions, we obtain the bound that is
equivalent to (3.22) up to first order:
(3.23)  … relgap(σ(G_M); j) …,
where
….
For convenience, let the coefficient of v̄ in (3.23) be denoted by c.
…; by construction it is a right singular vector of G_0 corresponding
to σ_j(G_0). Now we can complete the proof by bounding
….
We have used a slight generalization of (3.5) for the penultimate inequality and have
dropped second-order terms in the last inequality. Similarly,
…,
and so
….
Now combine the bound on |u − ū| with that on |u − u_T| to obtain (3.16). ∎
Acknowledgment. I thank the referee, whose detailed comments, including
the observation that grading is not necessary for relative perturbation bounds, have
greatly improved the presentation of the results in this paper.
REFERENCES

[1] Computing accurate eigensystems of scaled diagonally dominant matrices.
[2] Jacobi's method is more accurate than QR.
[3] Personal communication.
[4] Relative perturbation techniques for singular value problems.
[5] Matrix Computations.
[6] Matrix Analysis.
[7] Relative perturbation theory: (I) eigenvalue and singular value variations.
[8] A note on Jacobi being more accurate than QR.
[9] A bound for the matrix square root with application to eigenvector perturbation, SIAM J. Matrix Anal. Appl.
[10] Accurate eigensystem computations by Jacobi methods.
[11] Instability of parallel prefix matrix multiplication.
[12] Fast accurate eigenvalue computations for graded positive definite matrices.
[13] A relative perturbation bound for positive definite matrices.